The CARE Act was enacted in 1990 to respond to the needs of individuals and families living with HIV or AIDS and to direct federal funding to areas disproportionately affected by the epidemic. The Ryan White CARE Act Amendments of 1996 and the Ryan White CARE Act Amendments of 2000 modified the original funding formulas. For example, prior to the 1996 amendments, the CARE Act required that, for purposes of determining grant amounts, a metropolitan area’s caseload be measured by a cumulative count of AIDS cases recorded in the jurisdiction since reporting began in 1981. The 1996 amendments required the use of estimated living AIDS cases (ELCs) instead of cumulative AIDS cases. Because this switch would have resulted in large shifts of funding away from jurisdictions with a longer history of the disease, due in part to their higher proportion of deceased cases, the 1996 CARE Act amendments added a hold-harmless provision under Title I, as well as under Title II, that limits the extent to which a grantee’s funding can decline from one year to the next. Titles I and II also provide for other grants to subsets of eligible jurisdictions either by formula or by a competitive process. For example, in addition to AIDS Drug Assistance Program (ADAP) base grants, Title II also authorizes grants for states and certain territories with demonstrated need for additional funding to support their ADAPs. These grants, known as Severe Need grants, are funded through a set-aside of funds otherwise available for ADAP base grants. Title II also authorizes funding for “Emerging Communities,” which are communities affected by AIDS that have not had a sufficient number of AIDS cases reported in the last 5 calendar years to be eligible for Title I grants as eligible metropolitan areas (EMAs). In addition, Title II contains a minimum-grant provision that guarantees that no grantee will receive a Title II base grant of less than a specified funding amount. Metropolitan areas heavily affected by HIV/AIDS have always been recognized within the structure of the CARE Act. In 1995 we reported that, with combined funding under Title I and Title II, states with EMAs receive more funding per AIDS case than states without EMAs. To adjust for this situation, the 1996 amendments instituted a two-part formula for Title II base grants that takes into account the number of ELCs that reside within a state but outside of any EMA. Under this distribution formula, 80 percent of the Title II base grant is based upon a state’s proportion of all ELCs, and 20 percent of the base grant is based on a state’s proportion of ELCs outside of EMAs relative to all such ELCs in all states and territories. A second provision included in 1996 protected the eligibility of EMAs. The 1996 amendments provided that a jurisdiction designated as an EMA for that fiscal year would be “grandfathered” so that it would continue to receive Title I funding even if its reported number of AIDS cases dropped below the threshold for eligibility. Table 1 describes CARE Act formula grants for Titles I and II. The 2000 amendments provided for HIV case counts to be incorporated in the Title I and Title II funding formulas as early as fiscal year 2005 if such data were available and deemed “sufficiently accurate and reliable” by the Secretary of Health and Human Services. They also required that HIV data be used no later than the beginning of fiscal year 2007. 
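To make the two-part Title II distribution concrete, the sketch below illustrates the 80/20 split described above. It is a simplified, hypothetical illustration rather than the statutory computation; the function name, grantee names, and dollar amounts are invented for the example.

```python
# Hypothetical sketch of the Title II two-part base grant formula described
# above: 80 percent of the base pool follows each grantee's share of all ELCs,
# and 20 percent follows its share of ELCs residing outside any EMA.

def title_ii_base_grants(elcs_total, elcs_outside_ema, base_pool):
    """Return grantee -> estimated Title II base grant under the 80/20 split."""
    all_elcs = sum(elcs_total.values())
    all_outside = sum(elcs_outside_ema.values())
    grants = {}
    for grantee, elcs in elcs_total.items():
        share_all = elcs / all_elcs
        share_outside = elcs_outside_ema[grantee] / all_outside
        grants[grantee] = base_pool * (0.80 * share_all + 0.20 * share_outside)
    return grants

# Two hypothetical states with equal ELCs: State A's cases all fall inside an
# EMA, State B has no EMA, so the 20 percent portion tilts toward State B.
elcs_total = {"State A": 5_000, "State B": 5_000}
elcs_outside_ema = {"State A": 0, "State B": 5_000}
print(title_ii_base_grants(elcs_total, elcs_outside_ema, base_pool=10_000_000))
# {'State A': 4000000.0, 'State B': 6000000.0}
```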
In June 2004 the Secretary of Health and Human Services determined that HIV data were not yet ready to be used for the purposes of distributing formula funding under Title I and Title II of the CARE Act. Provisions in the CARE Act funding formulas result in a distribution of funds among grantees that does not reflect the relative distribution of AIDS cases in these jurisdictions. We found that provisions affect the proportional allocation of funding as follows: (1) the AIDS case-count provisions in the CARE Act result in a distribution of funding that is not reflective of the distribution of persons living with AIDS, (2) CARE Act provisions related to metropolitan areas result in variability in the amounts of funding per ELC among grantees, and (3) the CARE Act hold-harmless provisions and grandfather clause protect the funding of certain grantees. Provisions in the CARE Act use measurements of AIDS cases that do not reflect an accurate count of people currently living with AIDS. Eligibility for Title I funding and Title II Emerging Communities grants, as well as the amounts of the Emerging Communities grants, is based on cumulative totals of AIDS cases reported in the most recent 5-year period. This results in funding not being distributed according to the current distribution of the disease. For example, because Emerging Communities funding is determined by using 5-year cumulative case counts, allocations could be based in part on deceased cases, that is, people for whom AIDS was reported in the past 5 years but who have since died. In addition, these case counts do not take into account living cases in which AIDS was diagnosed more than 5 years earlier. Consequently, 5-year cumulative case counts can substantially misrepresent the number of AIDS patients in these communities. The use of ELCs as provided for in the CARE Act can also lead to inaccurate estimates of living AIDS cases. Currently, Title I, Title II, and ADAP base funding, which constitute the majority of formula funding, are distributed according to ELCs. ELCs are an estimate of living AIDS cases calculated by applying annual national survival weights to the most recent 10 years of reported AIDS cases and adding the totals from each year. This method for estimating cases was first included in the CARE Act Amendments of 1996. At that time, this approach captured the vast majority of living AIDS cases. However, some persons with AIDS now live more than 10 years after their cases are first reported, and they are not accounted for by this formula. Thus, like the 5-year reported case counts, ELCs can misrepresent the number of living AIDS cases in an area in part by not taking into account those persons living with AIDS whose cases were reported more than 10 years earlier. When total Title I and Title II funding is considered, states with EMAs and Puerto Rico receive more funding per ELC than states without EMAs because cases within EMAs are counted twice, once in connection with Title I base grants and once for Title II base grants. Eighty percent of the Title II base grant is determined by the total number of ELCs in the state or territory. The remaining 20 percent is based on the number of ELCs in each jurisdiction outside of any EMA. This 80/20 split was established by the 1996 CARE Act amendments to address the concern that grantees with EMAs received more total Title I and Title II funding per case than grantees without EMAs. 
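The ELC estimate described above, national survival weights applied to the 10 most recent years of reported AIDS cases, can be sketched as follows. The weights and case counts here are made up for illustration; the actual weights are derived from national survival data, not from this code.

```python
# Illustrative sketch of the ELC calculation described above: each of the 10
# most recent report years is multiplied by a national survival weight and the
# weighted counts are summed. Weights and case counts are hypothetical.

def estimated_living_cases(reported_by_year, survival_weights):
    """reported_by_year and survival_weights run from the most recent report
    year (index 0) back 10 years; cases reported earlier than that contribute
    nothing, which is why ELCs can miss long-term survivors."""
    assert len(reported_by_year) == 10 and len(survival_weights) == 10
    return sum(cases * weight
               for cases, weight in zip(reported_by_year, survival_weights))

reported = [1_700, 1_600, 1_500, 1_400, 1_300, 1_200, 1_100, 1_000, 950, 900]
weights = [0.97, 0.93, 0.88, 0.83, 0.78, 0.73, 0.68, 0.63, 0.58, 0.53]
print(round(estimated_living_cases(reported, weights)))  # 9915 for this hypothetical jurisdiction
```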
However, even with the 80/20 split, states with EMAs and Puerto Rico receive more total Title I and Title II funding per ELC than states without EMAs. States without EMAs receive no funding under Title I, and thus, when total Title I and Title II funds are considered, states with EMAs and Puerto Rico receive more funding per ELC. Table 2 shows that the higher the percentage of a state’s ELCs within EMAs, the more that state received in total Title I and Title II funding per ELC. The two-tiered division of Emerging Communities also results in disparities in funding among metropolitan areas. Title II provides for a minimum of $10 million to states with metropolitan areas that have 500 to 1,999 AIDS cases reported in the last 5 calendar years but do not qualify for funding under Title I as EMAs. The funding is equally split so that half the funding is divided among the first tier of communities with 500 to 999 reported cases in the most recent 5 calendar years while the other half is divided among a second tier of communities with 1,000 to 1,999 reported cases in that period. In fiscal year 2004, the two-tiered structure of Emerging Communities funding led to large differences in funding per reported AIDS case in the last 5 calendar years among the Emerging Communities because the total number of AIDS cases in each tier was not equal. Twenty-nine communities qualified for Emerging Communities funds in fiscal year 2004. Four of these communities had 1,000 to 1,999 reported AIDS cases in the last 5 calendar years and 25 communities had 500 to 999 cases. This distribution meant that the 4 communities with a total of 4,754 reported cases in the last 5 calendar years split $5 million while the remaining 25 communities with a total of 15,994 reported cases in the last 5 calendar years also split $5 million. These case counts resulted in the 4 communities receiving $1,052 per reported case while the other 25 received $313 per reported case. Table 3 lists the 29 Emerging Communities along with their reported AIDS case counts over the most recent 5 years and their funding. Titles I and II of the CARE Act both contain provisions that protect certain grantees’ funding levels. Title I has a hold-harmless provision that guarantees that the Title I base grant to an EMA will be at least as large as a statutorily specified percentage of a previous year’s funding. The Title I hold-harmless provision has primarily protected the funding of one EMA, San Francisco. If an EMA qualifies for hold-harmless funding, that amount is added to the base funding and distributed together as the base grant. In fiscal year 2004, the San Francisco EMA received $7,358,239 in hold-harmless funding, or 91.6 percent of the hold-harmless funding that was distributed. The second largest recipient was Kansas City, which received $134,485, or 1.7 percent of the hold-harmless funding under Title I. Table 4 lists the EMAs that received hold-harmless funding in fiscal year 2004. Because San Francisco’s Title I funding reflects the application of hold-harmless provisions under the 1996 amendments, as well as under current law, San Francisco’s Title I base grant is determined in part by the number of deceased cases in the San Francisco EMA as of 1995. More than half of the 51 EMAs received Title I funding in fiscal year 2004 even though they were below Title I eligibility thresholds. The eligibility of these EMAs was protected based on a CARE Act grandfather clause. 
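A quick recomputation of the fiscal year 2004 figures cited above shows how the even split between the two tiers produces the per-case disparity; the arithmetic below simply reuses the case counts reported in the testimony.

```python
# Back-of-the-envelope check of the two-tier Emerging Communities split:
# the $10 million minimum is divided evenly between the tiers regardless of
# how many reported cases fall in each tier.

TIER_POOL = 5_000_000      # each tier receives half of the $10 million

upper_tier_cases = 4_754   # 4 communities with 1,000 to 1,999 reported cases
lower_tier_cases = 15_994  # 25 communities with 500 to 999 reported cases

print(round(TIER_POOL / upper_tier_cases))  # 1052 dollars per reported case
print(round(TIER_POOL / lower_tier_cases))  # 313 dollars per reported case
```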
Under a grandfather clause established by the CARE Act Amendments of 1996, metropolitan areas eligible for funding for fiscal year 1996 remain eligible for Title I funding even if the number of reported cases in the most recent 5 calendar years drops below the statutory threshold. We found that in fiscal year 2004, 29 of the 51 EMAs did not meet the eligibility threshold of more than 2,000 reported AIDS cases during the most recent 5 calendar years but nonetheless retained their status as EMAs (see fig. 1). The number of reported AIDS cases in the most recent 5 calendar years in these 29 EMAs ranged from 223 to 1,941. Title I funding awarded to these 29 EMAs was about $116 million, or approximately 20 percent of the total Title I funding. Title II has a hold-harmless provision that ensures that the total of Title II and ADAP base grants awarded to a grantee will be at least as large as the total of these grants a grantee received the previous year. This provision has the potential of reducing the amount of funding to grantees that have demonstrated severe need for drug treatment funds because the hold-harmless provision is funded out of amounts that would otherwise be used for that purpose. Fiscal year 2004 was the first time that any grantees triggered this provision. Severe Need grants are funded by a 3 percent set-aside of the funds appropriated specifically for ADAPs. Eight states became eligible for this hold-harmless funding in fiscal year 2004. In 2004, the 3 percent set-aside for Severe Need grants was $22.5 million. Of these funds, $1.6 million, or 7 percent, was used to provide this Title II hold-harmless protection. (See table 5.) The remaining $20.8 million, or 93 percent of the set-aside amount, was distributed in Severe Need grants. The total amount of Severe Need grant funds available in fiscal year 2004 to distribute among the eligible grantees was less than it would have been without the hold-harmless payments. However, in fiscal year 2004 not all 25 of the Title II grantees eligible for Severe Need grants made the match required to receive such grants. In future years, if all of the eligible Title II grantees make the match, and if there are also grantees that qualify to receive hold-harmless funds under this provision, grantees with severe need for ADAP funding would get less than the amounts they would otherwise receive. CARE Act funding for Title I, Title II, and ADAP base grants would have shifted among grantees if HIV case counts had been used with ELCs, instead of ELCs alone, to allocate fiscal year 2004 formula grants. Our analyses indicate that up to 13 percent of funding would have shifted among grantees if HIV case counts and ELCs had been used to allocate the funds and if the hold-harmless and minimum-grant provisions we considered were maintained. Some individual grantees would have had changes that more than doubled their funding. Grantees in the South and Midwest would generally have received more funding if HIV cases were used in funding formulas along with ELCs. However, there would have been grantees that would have received increased funding and grantees that would have received decreased funding in every region of the country. Funding changes in our model would have been larger without the hold-harmless and minimum-grant provisions that we included. 
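The interaction between the 3 percent ADAP set-aside and the Title II hold-harmless payments in fiscal year 2004 can be checked against the figures above; the published amounts are rounded, so the recomputed remainder differs slightly from the $20.8 million reported.

```python
# Rough recomputation of the fiscal year 2004 set-aside figures cited above.

set_aside = 22_500_000     # 3 percent of the ADAP-specific appropriation
hold_harmless = 1_600_000  # Title II hold-harmless paid to the 8 eligible states

severe_need_remainder = set_aside - hold_harmless
print(round(100 * hold_harmless / set_aside))          # ~7 percent of the set-aside
print(round(100 * severe_need_remainder / set_aside))  # ~93 percent
# The remainder computes to about $20.9 million; the $20.8 million in the
# testimony reflects rounding of the published dollar figures.
```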
Changes in CARE Act funding levels for Title I base grants, Title II base grants, and ADAP base grants caused by shifting to HIV cases and ELCs would be larger—up to 24 percent—if the current hold-harmless or minimum-grant amounts were not in effect. One explanation for the changes in funding allocations when HIV cases and ELCs are used instead of only ELCs is the maturity of HIV case-reporting systems. Case-reporting systems need several years to become fully operational. We found that those grantees that would receive increased funding from the use of HIV cases tend to be those with the oldest HIV case-reporting systems. Those grantees with the oldest reporting systems include 11 southern and 8 midwestern states whose HIV-reporting systems were implemented prior to 1995. Funding changes can also be linked to whether a jurisdiction has a name- or code-based system. CDC accepts only name-based case counts because, as of January 2006, no code-based system had met its quality criteria. CDC does not accept the code-based data principally because methods have not been developed to make certain that a code-reported HIV case is only being counted once across all reporting jurisdictions. As a result, if HIV case counts were used in funding formulas, HIV cases reported using codes rather than names would not be counted in distributing CARE Act funds. However, even if code-based data were incorporated into the CDC case counts, the age of the code-based systems could still be a factor since the code-based systems tend to be newer than the name-based systems. As of December 2005, 12 of the 13 code-based systems were implemented in 1999 or later, compared with 10 of the 39 name-based systems. The effect of the maturity of the code-based systems could be increased if, as CDC believes, name-based systems can be implemented with more complete coverage of cases in much less time than code-based systems. As a result, jurisdictions with code-based systems could find themselves with undercounts of HIV cases for longer periods of time than jurisdictions with name-based systems. Figure 2 shows the 39 jurisdictions where HIV case counts are accepted by CDC and the 13 jurisdictions where they are not accepted, as of December 2005. The use of HIV cases in CARE Act funding formulas could result in fluctuations in funding over time because of newly identified preexisting HIV cases. Grantees with more mature HIV-reporting systems have generally identified more of their HIV cases. Therefore, if HIV cases were used to distribute funding, these grantees would tend to receive more funds. As grantees with newer systems identify and report a higher percentage of their HIV cases, their proportion of the total number of ELCs and HIV cases in the country would increase and funding that had shifted away from states with newer HIV-reporting systems would shift back, creating potentially significant additional shifts in program funding. The funding provided under the CARE Act has filled important gaps in communities throughout the country, but as Congress reviews CARE Act programs, it is important to understand how much funding can vary across communities with comparable numbers of persons living with AIDS. In our report, we raised several matters for Congress to consider when reauthorizing the CARE Act. 
We reported in February 2006 that if Congress wishes CARE Act funding to more closely reflect the distribution of persons living with AIDS, and to more closely reflect the distribution of persons living with HIV/AIDS when HIV cases are incorporated into the funding formulas, it should take the following five actions: revising the funding formulas used to determine grantee eligibility and grant amounts so that they use a measure of living AIDS cases that does not include deceased cases and reflects the longer lives of persons living with AIDS; eliminating the counting of cases in EMAs once for Title I base grants and again for Title II base grants; modifying the hold-harmless provisions for Title I, Title II, and ADAP base grants to reduce the extent to which they prevent funding from shifting to areas where the epidemic has been increasing; modifying the Title I grandfather clause, which protects the eligibility of metropolitan areas that no longer meet the eligibility criteria; and eliminating the two-tiered structure of the Emerging Communities program. We also reported that if Congress wishes to preserve funding for the ADAP Severe Need grants, it should revise the Title II hold-harmless provision that is funded with amounts set aside for ADAP Severe Need grants. In commenting on our draft report, HHS generally agreed with our identification of issues in the funding formulas. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information regarding this statement, please contact Marcia Crosse at (202) 512-7119 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. James McClyde, Assistant Director; Robert Copeland; Cathy Hamann; Opal Winebrenner; Craig Winslow; and Suzanne Worth contributed to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The CARE Act, a federal effort to address the HIV/AIDS epidemic, is administered by HHS. The Act uses formulas based upon a grantee's number of AIDS cases to distribute funds to eligible metropolitan areas (EMA), states, and territories. The use of AIDS cases was prescribed because most jurisdictions tracked and reported only AIDS cases when the grant programs were established. HIV cases must be incorporated with AIDS cases in CARE Act formulas no later than fiscal year 2007. GAO was asked to discuss factors that affect the distribution of CARE Act funding. This testimony is based on HIV/AIDS: Changes Needed to Improve the Distribution of Ryan White CARE Act and Housing Funds, GAO-06-332 (Feb. 28, 2006). GAO discusses how specific funding-formula provisions contribute to funding differences among CARE Act grantees and what distribution differences could result from using HIV cases in CARE Act funding formulas. Multiple provisions in the CARE Act grant funding formulas as enacted result in funding not being comparable per AIDS case across grantees. First, the CARE Act uses measures of AIDS cases that do not accurately reflect the number of persons living with AIDS. For example, the statutory funding formulas require the use of cumulative AIDS case counts, which could include deceased cases. Second, CARE Act provisions related to metropolitan areas result in variability in the amounts of funding per AIDS case among grantees. For example, AIDS cases within EMAs are counted once for determining funding under Title I of the CARE Act for EMAs and again under Title II for determining funding for the states and territories in which those EMAs are located. As a result, states with EMAs receive more total funding per AIDS case than states without EMAs. Third, CARE Act hold-harmless provisions under Titles I and II and the grandfather clause for EMAs under Title I sustain funding and eligibility of CARE Act grantees on the basis of a previous year's measurements of the number of AIDS cases in these jurisdictions. For example, the CARE Act Title I hold-harmless provision results in one EMA continuing to have deceased AIDS cases factored into its allocation because its hold-harmless funding dates back to the mid-1990s when formula funding was based on a count of AIDS cases from the beginning of the epidemic. If HIV case counts had been incorporated along with the number of estimated living AIDS cases (ELC) in allocating fiscal year 2004 CARE Act grants instead of ELCs alone, funding would have shifted among jurisdictions. Grantees in the South and the Midwest generally would have received more funding if HIV cases were used in the funding formulas, but there would have been grantees that would have received increased funding and grantees that would have received decreased funding in every region of the country. Although CARE Act grantees have established HIV case-reporting systems, differences between these systems--in their maturity and reporting methods, for instance--would have affected the distribution of CARE Act funds based on ELCs and HIV case counts. Grantees with more mature HIV-reporting systems would tend to receive more funds.
Since September 11, 2001, there has been an increase in funding for biomedical research. This increase is intended to support the development of effective medical countermeasures against emerging infectious diseases and biological agents, research that can be performed safely only in BSL-3 and BSL-4 labs. A large part of this funding has been used to construct additional high-containment BSL-3 and BSL-4 labs. The BSL labs are classified by the type of agents used and the risk posed to personnel, the environment, and the community by those agents. The Department of Health and Human Services’ (HHS) Biosafety in Microbiological and Biomedical Laboratories (BMBL) guidelines specify four biosafety levels, with BSL-4 being the highest. The levels include combinations of laboratory practices and techniques, safety equipment, and facilities that are recommended for labs that conduct research on potentially dangerous agents and toxins. These labs are to be designed, constructed, and operated in a manner to (1) prevent accidental release of infectious or hazardous agents within the laboratory and (2) protect lab workers and the environment external to the lab, including the community, from exposure to the agents. Work in BSL-3 labs involves agents that may cause serious and potentially lethal infection. In some cases, there are vaccines or effective treatments available. Types of agents that are typically handled in BSL-3 labs include, for example, anthrax, West Nile virus, Q fever, tularemia, and avian flu. Work in BSL-4 labs involves the most dangerous agents for which there are no effective vaccines or treatments available. Types of agents that are typically handled in BSL-4 labs include, for example, Ebola, hemorrhagic fevers, and smallpox. Many different federal agencies have some connection with BSL-3 and BSL-4 labs in the United States. These agencies are involved with these labs in various capacities, including as users, owners, regulators, and funding sources. For example, the Centers for Disease Control and Prevention (CDC) has its own high-containment labs and regulates that portion of labs working with select agents and toxins that represent a risk to human health and safety. Similarly, the U.S. Department of Agriculture (USDA) has its own labs and regulates labs working with select agents and toxins posing a risk to animal and plant health. The National Institute of Allergy and Infectious Diseases (NIAID) has its own labs and is a major funding source for construction and research involving high-containment labs. The National Institutes of Health (NIH) both funds research requiring high containment and provides guidance that is widely used to govern many of the activities in high-containment labs. The Food and Drug Administration (FDA) has its own labs and regulates manufacturing of biological products, some of which require high-containment labs. The Department of Commerce (DOC) regulates the export of agents and equipment that have both military and civilian uses, which are often found in high-containment labs. The Department of Defense (DOD) has its own labs and funds research requiring high-containment labs. The Department of Labor’s (DOL) Occupational Safety and Health Administration (OSHA) regulates some activities within high-containment labs, as well as general safety in most high-containment labs. The Department of State (DOS) regulates the export of agents and equipment that are specifically designed for military use from defense-related high-containment labs and maintains a listing of some high-containment labs, as part of the U.S. 
commitments under the Biological and Toxin Weapons Convention (BWC). The Department of Justice’s (DOJ) Federal Bureau of Investigation (FBI) uses high-containment labs when its forensic work involves dangerous biological agents. The Department of Homeland Security (DHS) has its own labs and funds a variety of research requiring high-containment labs. The Department of Energy (DOE) has several BSL-3 labs doing research to develop detection and response systems to improve preparedness for biological attack. The Department of Interior (DOI) has its own BSL-3 labs for work with infectious animal diseases. The Department of Veterans Affairs (VA) has research and clinical BSL-3 labs for its work with veterans. The Environmental Protection Agency (EPA) has its own labs and also coordinates use of various academic, state, and commercial high-containment labs nationwide, as part of its emergency response mission. The Antiterrorism and Effective Death Penalty Act of 1996 includes provisions to regulate the transfer, between laboratories, of certain biological agents and toxins and requires the Secretary of HHS to implement these provisions. As part of the implementation of this act, the first list of regulated biological agents was created. This became known as the select agent list. The Public Health Security and Bioterrorism Preparedness and Response Act of 2002 revised and expanded the Select Agent Program. Among other requirements, the new law (1) revised the list of agents deemed “select agents,” which possess the “potential to pose a severe threat” to public health and safety, to animal or plant health, or to animal or plant products; (2) directed the Secretaries of HHS and Agriculture to biennially review and publish the select agent list, making revisions as appropriate to protect the public; (3) required all facilities possessing select agents to register with the Secretary of HHS, Agriculture, or both, not just those facilities sending or receiving select agents; (4) restricted access to biological agents and toxins by persons who do not have a legitimate need and who are considered a risk by federal law enforcement and intelligence officials; (5) required transfer registrations to include information regarding the characterization of agents and toxins to facilitate their identification, including their source; (6) required the creation of a national database with information on all facilities and persons possessing, using, or transferring select agents; and (7) required the Secretaries of HHS and Agriculture to impose more detailed and different levels of security for different select agents, based on their assessed level of threat to the public. Pertinent guidance includes the NIH and CDC BMBL guidance, as well as the NIH guidelines. The NIH and CDC prepared the BMBL as a guidance document for working with particular select agents. 
According to the BMBL guidelines, (1) BSL-1 laboratories house agents and toxins that do not consistently cause disease in healthy adult humans; (2) BSL-2 laboratories are capable of housing agents and toxins that are spread through puncture, absorption through mucous membranes, or ingestion of infectious materials; (3) BSL-3 laboratories are capable of housing agents and toxins that have a potential for aerosol transmission and that may cause serious and potentially lethal infection; (4) BSL-4 laboratories are capable of housing agents and toxins that pose a high individual risk of life-threatening disease, which may be aerosol transmitted and for which there is no available vaccine or therapy. The BMBL states that (1) biosafety procedures must be incorporated into the laboratory’s standard operating procedures or biosafety manual; (2) personnel must be advised of special hazards and are required to read and follow instructions on practices and procedures; and (3) personnel must receive training on the potential hazards associated with the work involved and the necessary precautions to prevent exposure. Further, the BMBL contains guidelines for laboratory security and emergency response, such as controlling access to areas where select agents are used or stored. The BMBL also states that a plan must be in place for informing police, fire, and other emergency responders as to the type of biological materials in use in the laboratory areas. Much of the work in BSL-3 and BSL-4 labs in the United States involves recombinant DNA (rDNA), and the NIH Guidelines for Research Involving Recombinant DNA Molecules (NIH rDNA Guidelines) set the standards and procedures for research involving rDNA. Institutions must follow these guidelines when they receive NIH funding for this type of research. The guidelines include the requirement to establish an institutional biosafety committee (IBC). The IBC is responsible for (1) reviewing rDNA research conducted at or sponsored by the institution for compliance with the NIH rDNA Guidelines and (2) approving those research projects that are found to conform with the NIH rDNA Guidelines. IBCs also periodically review ongoing rDNA research to ensure continued compliance with the NIH rDNA Guidelines. The CDC is responsible for the registration and oversight of laboratories that possess, use, or transfer select agents and toxins that could pose a threat to human health. USDA is responsible for the registration and oversight of laboratories that possess, use, or transfer select agents and toxins that could pose a threat to animal or plant health or animal or plant products. Some select agents, such as anthrax, pose a threat to both human and animal health and are regulated by both agencies (see appendix III for the list of select agents and toxins). The select agent regulations require registration for U.S.-based research institutions, government agencies, universities, manufacturers, and other entities that possess, use, or transfer select agents. Registration is for 3 years. As part of the registration process, facilities must demonstrate in their applications that they meet the recommendations delineated in the BMBL for working with particular select agents. Such requirements include having proper laboratory and personal protective equipment, precautionary signage, and ventilation; controlled access; and biosafety operations manuals. 
Facilities must also describe the laboratory procedures that will be used, provide a laboratory floor plan showing where the select agent will be handled and stored, and describe how access will be limited to authorized personnel. In addition, facilities must describe the objectives of the work that requires the select agent. Each facility must identify a responsible facility official who is authorized to transfer and receive select agents on behalf of the facility. Individuals making false, fictitious, or fraudulent statements on registration forms may be punished, under the False Statements Act, by a fine of up to $250,000, imprisonment up to 5 years, or both. Violations by organizations are punishable by a fine of up to $500,000 per violation. To ensure compliance with these requirements, the program established a goal of inspecting these facilities once during the 3-year registration period. Facilities may be inspected before and after registration, but there is no requirement that inspections be performed. An expansion in the number of BSL-3 and BSL-4 labs is taking place across most of the United States, according to the literature, federal agency officials, and experts. Most federal officials and experts believe that the number of BSL-4 labs in the United States is generally known. But the number of BSL-3 labs is unknown. Information on expansion is available about high-containment labs that are (1) registered with the CDC-USDA’s Select Agent Program, and (2) federally funded. However, much less is known about the expansion of labs outside the Select Agent Program and the nonfederally funded labs, including location, activities, and ownership. For both BSL-3 and BSL-4, the expansion is taking place across many sectors—federal, state, academic, and private—and all over the United States. For most of the last 50 years, there were only two sites with BSL-4 labs in the United States. These were federal labs at the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) at Fort Detrick, Maryland, and at the CDC in Atlanta, Georgia. Between 1990 and 2000, three new BSL-4 labs were built: a BSL-4 lab at Georgia State University in Atlanta—the first BSL-4 lab in a university setting; a small BSL-4 lab on the NIH campus in Bethesda, Maryland; and a privately funded BSL-4 lab in San Antonio, Texas. Since the terror attacks of 2001, nine new facilities and one major remodeling effort containing BSL-4 space will either be operational, in construction, or in planning by this year’s end. The number of BSL-4 laboratories has increased from 5, before 2001, to 15, including at least 1 in planning. Moreover, expansion is taking place across all sectors. Before 1990, all BSL-4 labs were federal labs—either at USAMRIID or at the CDC. Today, while expansion is taking place within the federal sector as well—there are seven new federal facilities recently built, currently under construction, or planned, which have one or more BSL-4 labs—there are also BSL-4 labs at universities, as part of state response, and in the private sector. (See table 1 for expansion in BSL-4 labs by sector.) While the number is difficult to quantify, many more BSL-3 labs are thought to exist compared with BSL-4 labs. Many lab owners—when building new labs or upgrading existing ones—are building to meet BSL-3 level containment, often anticipating future work, even though they intend for some time to operate at the BSL-2 level with BSL-2 recommended agents. 
In addition, much biodefense work, for example, involves aerosolization of agents for challenge studies, and most of this type of activity is often recommended for containment at the BSL-3 level. The expansion of BSL-3 labs is in all sectors. However, the only definitive data available are on labs registered with the CDC-USDA Select Agent Program. Within that program, two-thirds of registered BSL-3 labs are outside the federal sector (see table 2). Within the academic sector, for example, NIAID has provided funding for 13 Regional Biocontainment Laboratories (RBL) to provide regional BSL-3 capability for academic research requiring such containment. Expansion at the state level is also taking place (see table 3). According to a survey conducted by the Association of Public Health Laboratories (APHL) in August 2004, since 2001 state public health labs have used public health preparedness funding to build, expand, and enhance BSL-3 labs. In 1998, for example, APHL found that 12 of 38 responding states reported having a state public health laboratory at the BSL-3 level. Today, at least 46 states have at least one state public health BSL-3 lab. Expansion of BSL-3 and BSL-4 labs is starting to take place geographically as well as by sector. For example, before 1990, BSL-4 labs were clustered at either USAMRIID at Fort Detrick or at CDC. Today, there are BSL-4 labs built, under construction, or in planning in four states other than Maryland and Georgia. The expansion of BSL-3 labs is widespread across most states. Because of the need for individual state response to bioterrorist threats, most states now have some level of BSL-3 capacity—at least for diagnostic and analytical services—in support of emergency response. In addition, within the academic research community, the RBLs being constructed by the NIAID are intended to provide regional BSL-3 laboratory capacity to support NIAID’s Regional Centers of Excellence for Biodefense and Emerging Infectious Diseases Research (RCE). Hence, the RBLs are distributed regionally around the country. Operational, under construction, or currently planned BSL-4 labs and some of the major BSL-3 facilities in the United States are shown in figure 1. No single federal agency has the mission to track and determine the risk associated with the expansion of BSL-3 and BSL-4 labs in the United States, and no single federal agency knows how many such labs there are in the United States. Consequently, no one is responsible for determining the aggregate risks associated with the expansion of these high-containment labs. None of the federal agencies that responded to our survey indicated that they have the mission to track and know the number of BSL-3 and BSL-4 labs within the United States (see table 4). Some federal agencies do have a narrow mission to track a subset of BSL-3 and BSL-4 labs, and they do know the number of those labs. For example, the CDC and USDA together know the number of high-containment labs working with select agents because, by federal regulation, such labs are required to register with them. But these regulations only require that the entities registering with the Select Agent Program do a risk assessment of their individual labs. No agency, therefore, has the mission to determine the aggregate risks associated with the expansion of high-containment labs that work with select agents. According to the federal agency officials, the oversight of these labs is fragmented and relies on self-policing. 
While the number and location of all BSL-3 and BSL-4 labs is not known, several federal agencies indicated that they have a need to know this information in support of their agency missions. Some intelligence agencies, for example, indicated that they need to know the number and location of at least a subset of high-containment labs within the United States because these labs represent a capability that can be misused by terrorists or people with malicious intent. Without knowledge of the number and location of BSL-3 and BSL-4 labs, some agencies’ work is made more difficult. The FBI, for example, needs this information for forensic purposes. According to the experts, there is a baseline risk associated with any high-containment lab. With expansion, the aggregate risks will increase. However, the associated safety and security risks will be greater for new labs with less experience. In addition, high-containment labs have health risks for individual lab workers as well as the surrounding community. According to a CDC official, the risks due to accidental exposure or release can never be completely eliminated, and even labs within sophisticated biological research programs—including those most extensively regulated—have had and will continue to have safety failures. In addition, while some of the most dangerous agents are regulated under the CDC-USDA’s Select Agent Program, many high-containment labs work with agents not covered under this program. Labs outside the Select Agent Program also pose risks, given that many unregulated agents can cause severe illness or even death (see appendix IV for a list of some agents that are not select agents but are recommended to be worked on in high-containment labs). These labs also have associated risks because of their potential as targets for terrorism or theft from either external or internal sources. Even labs outside the Select Agent Program can pose security risks in that such labs represent a capability that can be paired with the necessary agents to become a threat. While the United States has regulations governing select agents, many nations do not have any regulations governing the transfer or possession of dangerous biological agents. We identified six lessons from three recent incidents: the failure by Texas A&M University (TAMU), in 2006, to report exposures to select agents to CDC (see appendix V); the power outage at CDC’s new BSL-4 lab in 2007; and the release of foot-and-mouth disease virus at Pirbright in the U.K. in 2007. These lessons highlight the importance of (1) identifying and overcoming barriers to reporting in order to enhance biosafety through shared learning from mistakes and to assure the public that accidents are examined and contained; (2) training lab staff in general biosafety as well as in the specific agents being used in the labs to ensure maximum protection; (3) developing mechanisms for informing medical providers about all the agents that lab staff work with to ensure quick diagnosis and effective treatment; (4) addressing confusion over the definition of exposure to aid in the consistency of reporting; (5) ensuring that BSL-4 labs’ safety and security measures are commensurate with the level of risk these labs present; and (6) maintaining high-containment labs to ensure the integrity of their physical infrastructure over time. While the Select Agent Program and the rDNA Guidelines have reporting requirements, institutions sometimes fail to report incidents. 
According to CDC, there were three specific types of incidents that TAMU officials failed to report to CDC: (1) multiple incidents of exposure, including illness; (2) specific types of experiments being conducted by researchers; and (3) missing vials and animals. In addition, in November 2006, during our first visit to TAMU—a meeting in which all key officials who knew about these incidents were present—we asked if there had been any incident in which a lab worker was exposed to a select agent. TAMU officials did not disclose any of these incidents. Moreover, in August 2007, during our second visit, the biosafety officer said that he had conducted an investigation of the incident in which the lab worker was exposed to Brucella and had written a report. However, the report that was provided to us was dated June 17, 2006, but discussed other incidents that had occurred in 2007, a discrepancy that TAMU failed to explain to us. According to the literature and discussion with federal officials and experts, accidents in labs are expected, mostly as a result of human error due to carelessness, inadequate training, or poor judgment. In the case of theft, loss, occupational exposure, or release of a select agent, the lab must immediately report certain information to CDC or USDA. However, there is a paucity of information on barriers to reporting by institutions. It has been suggested that there is a disincentive to report laboratory-acquired infections and other mishaps at research institutions because of (1) negative publicity for the institution or (2) the scrutiny from a granting agency, which might result in the suspension of research or an adverse effect on future funding. Further, it is generally believed that when a worker acquires an infection in the lab, it is almost always his or her fault, and neither the worker nor the lab is interested in negative publicity. In order to enhance reporting, barriers need to be identified and targeted strategies need to be applied to remove those barriers. It is also important that these incidents be analyzed so that (1) biosafety can be enhanced through shared learning from mistakes and (2) the public may be reassured that accidents are thoroughly examined and contained. One possible mechanism for analysis, discussed in the literature, is the reporting system used for aviation incidents, administered by the National Transportation Safety Board and the Federal Aviation Administration. When mistakes are made, they are analyzed and learned from without being attributed to any one individual. Experts have agreed that some form of personal anonymity would encourage reporting. Training is a key requisite for safe and secure work with dangerous agents. Moreover, it is important that this training is specific to the agent to be worked with and the activities to be performed. The lab worker at TAMU who was exposed was not authorized to work with Brucella but was, we were told, being escorted in the lab only to help operate the aerosolization chamber. According to the select agent regulations, all staff are required to be trained in the specifics of any agent before they work with it. However, the worker did not receive training in the specifics of Brucella, including its characteristics, safe handling procedures, and potential health effects. 
While the worker was experienced in general BSL-3 procedures, her normal work regimen involved working with Mycobacterium tuberculosis, and her supervisor surmised that the differential potential for infection from Brucella was partially to blame for the exposure. In particular, the exposed lab worker was highly experienced in handling M. tuberculosis, an infectious agent. A lab director of a BSL-2 lab for the last 5 years, she had a PhD in medical sciences and was, by many accounts, highly competent and reliable. She had applied the procedures governing safe work with M. tuberculosis to the Brucella experiment. However, her experience with M. tuberculosis might have provided a false sense of security. Had training been given in Brucella, the worker might have been more aware when cleaning the aerosol chamber. Typical routes of infection differ between M. tuberculosis and Brucella and normal procedures, including gowning and respiratory equipment, vary between the two agents. For example, the lab worker wore protective glasses, but they were not tight fitting. This was adequate when working with M. tuberculosis, but not with Brucella. The investigation concluded that the agent entered the lab worker through the eyes. According to one expert who has managed high-containment labs, there are risks working alternately in BSL-2 and BSL-3 labs, with their different levels of procedures and practices. The fear is that lab workers may develop a routine with BSL-2 procedures that might be difficult to consciously break when working with the more dangerous agents and activities requiring BSL-3 containment. Severe consequences for the worker can result from delays in (1) recognizing when an exposure has occurred or (2) medical providers’ accurately diagnosing any resulting infection. Further, if the worker acquires a disease that is easily spread through contact, there can also be severe consequences for the surrounding community. In the Brucella incident at TAMU, at the time of the exposure on February 9, 2006, the lab worker did not know she was infected nor did anyone else in the lab. In fact, the CDC conducted a routine inspection of TAMU on February 22, 2006—13 days after the exposure—but had no way of knowing that it had happened. According to the exposed worker, it was more than 6 weeks after the exposure that she first fell ill. Then, the first consultation with her physician indicated that she had the flu; it was only after the symptoms persisted that a consultation with an infectious disease specialist confirmed that her blood contained an unknown microorganism. It was at this point that she recalled her work with Brucella weeks earlier. Confirmation of infection with brucellosis was made on April 16, 2006, by the Texas State Public Health Lab—62 days after the exposure. During much of this time, the worker had resumed her normal activities, interacting with many people. In fact, the exposed lab worker had become seriously ill and the delay in recognizing her infection as brucellosis aggravated her condition. Such misdiagnosis is not uncommon with infectious diseases, as the initial symptoms often appear flu-like and brucellosis is not generally endemic in the population. If the worker had not recalled the experiment with Brucella and alerted her physician to this fact, according to the CDC, she might have developed an even more severe infection, possibly affecting her central nervous system or the lining of her heart. 
In this incident, it was also fortunate that the disease was such that transmission beyond the initially exposed individual was difficult and that there was no risk of spread to the surrounding community. While brucellosis is not easily transferred between humans, many agents cause diseases that are easily transferred from human to human through coughing or fluid transfer, including some agents that are not select agents, such as SARS and tuberculosis. According to BMBL, the causative incident for most laboratory-acquired infections is often unknown. It can only be concluded that an exposure took place after a worker reports illness—with symptoms suggestive of a disease caused by the relevant agent—some time later. Since clinical symptoms can take weeks to become apparent, during which time an infected person may be contagious, it is important that exposure be identified as soon as possible and proper diagnosis and prompt medical treatment provided. In addition to the incident of exposure to Brucella, the CDC noted several incidents of potential exposure to Coxiella burnetii that TAMU had failed to report. While the Brucella exposure eventually became apparent because of clinical symptoms in the lab worker, the C. burnetii incidents illustrate situations where the determination of exposure can be more problematic. In attempting to address the failure to report, questions were raised about what constitutes sufficient evidence of an exposure that the entity must report to the CDC. One indication of exposure that can be used for C. burnetii and other agents is to periodically measure the titer levels—antibody levels—within the blood serum of lab workers working with those agents. If a person has a raised level over his or her baseline level, then a conclusion can be drawn that the person has been exposed to the agent. However, there are issues with using titer levels as an indication of exposure. For example, determining when the exposure took place is not straightforward. TAMU has a program of monitoring the blood serum of workers who work with C. burnetii—a select agent and the causative agent of Q fever in humans. While humans are very susceptible to Q fever, only about one-half of all people infected with C. burnetii show signs of clinical illness. During the inspection triggered by the uncovering of the Brucella incident, CDC came across clinical records that showed that several lab workers were found to have elevated titers for C. burnetii. But no reports had been sent to the CDC. The CDC noted this issue and, on April 24, 2007, TAMU submitted the required Form 3 to report the possible exposure. However, as a result of subsequent discussion with the individuals who had the elevated titers, TAMU officials began to have doubts about whether or not the elevated titers resulted from exposures that had occurred at TAMU. In one case, TAMU said, one of the infected lab workers had only recently been hired by TAMU but had worked in a clinical lab in China, where C. burnetii was known to have been present. In another, the worker claimed to have been exposed many years earlier and had always registered high, although the actual levels varied. CDC officials disagree with this interpretation and believe the high titers resulted from exposures at TAMU. TAMU initially responded to the uncovering of the elevated titer incidents by reporting, to the CDC, any subsequent elevated titer level uncovered in any of its lab workers. But TAMU is now unsure how to proceed. 
It has notified the CDC that, in its opinion, an exposure suggested by an elevated titer should be defined to have occurred only after clinical symptoms appear in the individual. TAMU has, therefore, ceased reporting incidents of merely elevated titers. In the absence of clarity over the definition of exposure, TAMU officials have chosen to define it as they see fit. When we asked the CDC about the confusion over the definition of an exposure, officials agreed that terms need to be clearly defined and are drafting new guidance. CDC officials noted, however, that it is unwise to wait until clinical symptoms appear before determining that an exposure has taken place, as this could potentially endanger a worker’s life and potentially, in the case of a communicable disease, others. Experts have told us that correctly interpreting the meaning of elevated titers—whose characteristics can vary by agent, host, and testing lab—is challenging since many serological testing methods have not been validated. Gaps in the scientific understanding of infectious diseases—such as the meaning of elevated titers—may become more problematic as the expansion of labs continues. The development of scientifically sound and standardized methods of identifying exposure is critical, so that individual lab owners are not left to determine for themselves what is and what is not reportable. An hour-long power outage, in June 2007, at the CDC’s newest BSL-4 facility raised questions about safety and security, as well as the backup power system design. The incident showed that, even in the hands of experienced owners and operators, safety and security of high-containment labs can still be compromised. The incident also raises concerns about the security of other similar labs being built around the nation. On June 8, 2007, the CDC campus in Atlanta experienced lightning strikes in and around its new BSL-4 facility, and both primary and backup power to that facility were unavailable. The facility was left with only battery power—a condition that provides limited power for functions such as emergency lighting to aid in evacuation. Among other things, the outage shut down the negative air pressure system, one of the important components in place to keep dangerous agents from escaping the containment areas. In looking into the power outage, the CDC determined that, some time earlier, a critical grounding cable buried in the ground outside the building had been cut by construction workers digging at an adjacent site. The cutting of the grounding cable, which had gone unnoticed by CDC facility managers, compromised the electrical system of the facility that housed the BSL-4 lab. According to CDC officials, the new BSL-4 facility is still in preparation to become fully operational and no live agents were inside the facility at the time of the power outage. However, given that the cable was cut, it is apparent that the construction was not supervised to ensure the integrity of necessary safeguards that had been put in place. Further, according to CDC officials, it was not standard procedure to monitor the integrity of the electrical grounding of the new BSL-4 facility. However, CDC has now instituted annual testing of the electrical grounding system. Because of the power outage incident, questions about the design of the backup power system for the new facility resurfaced. 
When the CDC designed the backup power system for the new BSL-4 facility, it used backup generators at a central utility plant that serve other facilities and functions on campus, such as chillers, besides the new BSL-4 facility. According to internal documents provided to us, during the design phase for the facility, some CDC engineers had questioned the choice of the remotely placed, integrated design over a simpler design using local backup generators near the facility. According to CDC facility officials, the full backup power capabilities for the new BSL-4 facility are not in place yet but are awaiting completion of other construction projects on campus. Once these projects are completed, these officials said, the new BSL-4 facility will have multiple levels of backup power, including the ability to get power from a second central utility plant on campus, if needed. But some CDC engineers that we talked to questioned the degree of complexity in the design. They are worried that an overly integrated backup might be more susceptible to failure. As a result of this power outage incident, CDC officials said, the CDC is doing a reliability assessment for the entire campus power system, which will include the backup power design for the new BSL-4 facility. Some experts have suggested that BSL-4 labs be similar in design to a nuclear power plant, with a redundant backup-to-backup power system, along with adequate oversight. Like such plants, BSL-4 labs are considered targets for terrorists and people with malicious intent. Release of an agent from any of these labs could have devastating consequences. Therefore, appropriate design of labs and adequate oversight of any nearby activities—such as adjacent construction with its potential to compromise buried utilities—are essential. High-containment labs are highly sophisticated facilities, which require specialized expertise to design, construct, operate, and maintain. Because these facilities are intended to contain dangerous microorganisms, usually in liquid or aerosol form, even minor structural defects—such as cracks in the wall, leaky pipes, or improper sealing around doors—could have severe consequences. Supporting infrastructure, such as drainage and waste treatment systems, must also be secure. In August 2007, foot-and-mouth disease contamination was discovered at several local farms near Pirbright in the U.K., the site of several high-containment labs that work with live foot-and-mouth disease virus. Foot-and-mouth disease is one of the most highly infectious livestock diseases and can have devastating economic consequences. For example, a 2001 epidemic in the U.K. cost taxpayers over £3 billion, including some £1.4 billion paid in compensation for culled animals. Therefore, U.K. government officials worked quickly to contain and investigate this recent incident. The investigation of the physical infrastructure at the Pirbright site found evidence of long-term damage and leakage of the drainage system servicing the site, including cracked and leaky pipes, displaced joints, debris buildup, and tree root ingress. While the definitive cause of the release has not been determined, it is suspected that contaminated waste water from Pirbright’s labs leaked into the surrounding soil from the deteriorated drainage pipes and that live virus was then carried offsite by vehicles splashed with contaminated mud. The cracked and leaky pipes found at Pirbright are indicative of poor maintenance practices at the site. 
The investigation found that (1) monitoring and testing of the drainage system pipework for preventive maintenance was not a regular practice on site and (2) a contributing factor might have been a difference of opinion over responsibility for maintaining a key pipe within the drainage system. High-containment labs are expensive to build and expensive to maintain. Adequate funding for each stage of a lab’s life cycle needs to be addressed. Typically, in large-scale construction projects, funding for initial construction comes from one source, but funding for ongoing operations and maintenance comes from other sources. For example, in the NIAID’s recent funding of 13 BSL-3 labs as Regional Biocontainment Labs (RBL) and 2 BSL-4 labs as National Biocontainment Labs (NBL), the NIAID contributed to the initial costs for planning, design, construction, and commissioning. But the NIAID did not provide funding to support the operation of these facilities. In this case, the universities themselves are responsible for funding any maintenance costs after initial construction. The Pirbright incident shows that beyond initial design and construction, ongoing maintenance plays a critical role in ensuring that high-containment labs operate safely and securely over time. Because even the smallest of defects can affect safety, ensuring the continuing structural integrity of high-containment labs is an essential recurring activity. The expansion of BSL-3 and BSL-4 labs taking place in the United States is proceeding in a decentralized fashion, without specific requirements as to the number, location, activity, and ownership of such labs. While some expansion may be justified to address deficiencies in lab capacity for the development of medical countermeasures, unwarranted expansion without adequate oversight is proliferation, not expansion. Since the full extent of the expansion is not known, it is unclear how the federal government can ensure that sufficient, but not superfluous, capacity is being created, since superfluous capacity brings with it additional, unnecessary risk. The limited federal oversight that does exist for high-containment labs is fragmented among different federal agencies, and for the most part relies on self-policing. The inherent weaknesses of an oversight system based on self-policing are highlighted by the Texas A&M University case. When the CDC inspected the labs at Texas A&M in April 2006 as part of its routine inspection, its inspectors failed to identify that (1) a worker became exposed and ill; (2) unauthorized experiments were being conducted and unauthorized individuals were entering the labs; and (3) agents and infected animals were missing. It was not until a public advocacy group found out about the Brucella incident and, according to this group, applied pressure—by demanding records about the incident—that TAMU reported this incident to the CDC. This report prompted the subsequent in-depth investigations by the CDC. However, this incident raises serious concerns about (1) how well the CDC polices select agent research being conducted in over 400 high-containment labs at various universities around the country, which are registered under the Select Agent Program, and (2) whether the safety of the public is being compromised. Moreover, if similar safety breaches are occurring at other labs, they are not being reported. And the CDC is not finding them either. According to the experts, no one knows whether the Texas A&M incidents are the tip of the iceberg or the iceberg itself. Mr. 
Chairman, this concludes my prepared remarks. I would be happy to respond to any questions that you or other members of the subcommittee may have at this time. For further information regarding this statement, please contact Keith Rhodes at (202) 512-6412 or rhodesk@gao.gov, or Sushil K. Sharma, Ph.D., Dr.PH, at (202) 512-3460 or sharmas@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. William Carrigg, Jeff McDermott, Jean McSween, Jack Melling, Laurel Rabin, Corey Scherrer, Rebecca Shea, and Elaine Vaurio made key contributions to this statement. To determine the extent of expansion in the number of high-containment facilities and the areas experiencing the growth, we interviewed agency officials and experts, as well as reviewed documents provided by agencies and the literature. To determine which federal agency has the mission to track and determine the aggregate risks associated with the proliferation of BSL-3 and BSL-4 labs in the United States, we surveyed 12 federal agencies that are involved with BSL-3 or BSL-4 labs in some capacity—for example, research, oversight, or monitoring. The survey requested information on the agency’s involvement with high-containment labs—specifically, whether the agency has a mission to track the number of high-containment labs, whether it has a need to know, and whether it knows the number of operating BSL-3 and BSL-4 labs. The agencies that received our survey include the U.S. Department of Agriculture (USDA); the Department of Commerce; the Department of Defense; the Department of Energy; the Environmental Protection Agency; the Department of Health and Human Services (HHS), including the Centers for Disease Control and Prevention (CDC); the Department of Homeland Security (DHS); the Department of the Interior; the Department of Justice, including the Federal Bureau of Investigation (FBI); the Department of Labor, including the Occupational Safety and Health Administration (OSHA); and the Department of State. In addition, we sent our survey to intelligence agencies, including the Central Intelligence Agency (CIA); the National Counter-Terrorism Center (NCTC); the Defense Intelligence Agency (DIA); and the Office of Intelligence Analysis within DHS. We also met with officials of the Select Agent Program at both the CDC and the USDA to gain additional information about the expansion of high-containment labs. Finally, we reviewed documents these agencies provided, including pertinent legislation, regulation, and guidance, and reviewed scientific literature on risks associated with high-containment labs. To develop lessons learned from recent incidents at three high-containment labs, we interviewed academic experts in microbiological research involving human, animal, and plant pathogens, and conducted site visits at selected federal, civilian, military, academic, and commercial BSL-3 and BSL-4 labs, including the sites involved in the recent incidents. Specifically, we conducted site visits to the CDC and Texas A&M University (TAMU); talked to U.K. officials at the Health and Safety Executive and the Department for Environment, Food, and Rural Affairs; and reviewed documents and inspection reports. To discuss the incidents at TAMU and the CDC, we conducted site visits and interviewed the relevant officials. 
We also conducted a site visit to the CDC and interviewed relevant officials, including the officials of CUH2A, Inc.—the contractor who designed the backup power system for the new BSL-4 lab in Atlanta—as well as the expert hired by this firm to conduct the reliability study for the backup power system. The regulations governing the Select Agent Program became effective on April 15, 1997, and were revised in March 2005. The regulations include six primary components: (1) a list of select agents that have the potential to pose a severe threat to public health and safety; (2) registration of facilities before the domestic transfer of select agents; (3) a process to document successful transfer of agents; (4) audit, quality control, and accountability mechanisms; (5) agent disposal requirements; and (6) research and clinical exemptions. For facilities registered with the CDC and the USDA that possess, use, or transfer select agents, the select agent regulations require (1) an FBI security risk assessment for a number of individuals, including each person who is authorized to have access to select agents and toxins; (2) written biosafety and incident response plans; (3) training of individuals with access to select agents and of individuals who will work in or visit areas where select agents or toxins are handled and stored; (4) a security plan sufficient to safeguard the select agent or toxin against unauthorized access, theft, loss, or release, and designed according to a site-specific risk assessment that provides protection in accordance with the risk of the agent or toxin; (5) possible inspection by the CDC or USDA of the facility and its records before issuance of the certificate of registration; (6) maintenance of records relating to the activities covered by the select agent regulations; and (7) facility registration with the CDC or the USDA that indicates (a) each select agent that the entity intends to possess, use, or transfer; (b) the building where the agent will be used and stored; (c) the laboratory safety level; (d) a list of people authorized to have access to each select agent; (e) the objectives of the work for each select agent, including a description of the methodologies or laboratory procedures to be used; (f) a description of the physical security and biosafety plans; and (g) assurance of security and biosafety training for individuals who have access to areas where select agents are handled and stored. 
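Much of what the select agent regulations require of a registered facility (the agent-by-agent registration details in item (7), the plans in items (2) and (4), and the assurances in items (1) and (3)) can be thought of as a structured record. The sketch below models such a record; it is illustrative only, and the class and field names are ours, not part of any actual Select Agent Program system or form.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SelectAgentWork:
    """One select agent or toxin an entity intends to possess, use, or transfer."""
    agent_name: str                    # e.g., "Brucella melitensis"
    building: str                      # where the agent will be used and stored
    laboratory_safety_level: str       # e.g., "BSL-3"
    authorized_individuals: List[str]  # people approved for access to this agent
    work_objectives: str               # objectives, methodologies, and procedures


@dataclass
class FacilityRegistration:
    """Illustrative model of a facility registration under the Select Agent Program."""
    entity_name: str
    agents: List[SelectAgentWork]
    biosafety_plan: str                  # written biosafety plan (summary or reference)
    incident_response_plan: str          # written incident response plan
    security_plan: str                   # plan based on a site-specific risk assessment
    training_assured: bool               # security and biosafety training assured
    fbi_risk_assessments_complete: bool  # security risk assessments for individuals with access

    def individuals_with_access(self) -> List[str]:
        """All people authorized to access any select agent at the facility."""
        names: List[str] = []
        for work in self.agents:
            for person in work.authorized_individuals:
                if person not in names:
                    names.append(person)
        return names
```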
The select agents and toxins regulated under the program, by category, are as follows.

HHS Select Agents and Toxins: Abrin; Cercopithecine herpesvirus 1 (Herpes B virus); Coccidioides posadasii; Conotoxins; Crimean-Congo haemorrhagic fever virus; Diacetoxyscirpenol; Ebola virus; Lassa fever virus; Marburg virus; Monkeypox virus; Reconstructed 1918 influenza virus; Ricin; Rickettsia prowazekii; Rickettsia rickettsii; Saxitoxin; Shiga-like ribosome inactivating proteins; South American Haemorrhagic Fever viruses (Flexal, Guanarito, Junin, Machupo, Sabia); Tetrodotoxin; Tick-borne encephalitis complex (flavi) viruses (Central European Tick-borne encephalitis, Far Eastern Tick-borne encephalitis, Kyasanur Forest disease, Omsk Hemorrhagic Fever, Russian Spring and Summer encephalitis); Variola major virus (Smallpox virus) and Variola minor virus (Alastrim); and Yersinia pestis.

USDA Select Agents and Toxins: African horse sickness virus; African swine fever virus; Akabane virus; Avian influenza virus (highly pathogenic); Bluetongue virus (Exotic); Bovine spongiform encephalopathy agent; Camel pox virus; Classical swine fever virus; Cowdria ruminantium (Heartwater); Foot-and-mouth disease virus; Goat pox virus; Japanese encephalitis virus; Lumpy skin disease virus; Malignant catarrhal fever virus (Alcelaphine herpesvirus type 1); Menangle virus; Mycoplasma capricolum/M.F38/M. mycoides Capri (contagious caprine pleuropneumonia); Mycoplasma mycoides mycoides (contagious bovine pleuropneumonia); Newcastle disease virus (velogenic); Peste des petits ruminants virus; Rinderpest virus; Sheep pox virus; Swine vesicular disease virus; and Vesicular stomatitis virus (exotic).

Overlap Select Agents and Toxins: Bacillus anthracis; Botulinum neurotoxins; Botulinum neurotoxin producing species of Clostridium; Brucella abortus; Brucella melitensis; Brucella suis; Burkholderia mallei (formerly Pseudomonas mallei); Burkholderia pseudomallei (formerly Pseudomonas pseudomallei); Clostridium perfringens epsilon toxin; Coccidioides immitis; Coxiella burnetii; Eastern Equine Encephalitis virus; Francisella tularensis; Hendra virus; Nipah virus; Rift Valley fever virus; Shigatoxin; Staphylococcal enterotoxins; T-2 toxin; and Venezuelan Equine Encephalitis virus.

USDA Plant Protection and Quarantine (PPQ) Select Agents and Toxins: Candidatus Liberobacter africanus; Candidatus Liberobacter asiaticus; Peronosclerospora philippinensis; Ralstonia solanacearum race 3, biovar 2; Schlerophthora rayssiae var zeae; Synchytrium endobioticum; Xanthomonas oryzae pv. Oryzicola; and Xylella fastidiosa (citrus variegated chlorosis strain).

There are a number of biological agents causing severe illness or death that are not select agents. For example, there are five agents that are recommended for containment at BSL-4 because of (1) their close antigenic relationship with a known BSL-4 agent and (2) the fact that there is insufficient experience working with them (see table 5). Containment and safety recommendations in the Biosafety in Microbiological and Biomedical Laboratories (BMBL) manual for B. anthracis, the causative agent of anthrax and a select agent, include the use of BSL-2 practices, containment equipment, and facilities for clinical and diagnostic quantities of infectious cultures. However, BSL-3 practices, containment equipment, and facilities are recommended for (1) work involving production quantities or high concentrations of cultures or the screening of environmental samples (especially powders) and (2) activities with a high potential for aerosol production. Safety and containment recommendations for some agents that are not regulated under the Select Agent Program are as strict as or stricter than the recommendations for B. anthracis. 
Some nonselect agents for which BSL-3 containment is recommended under certain conditions are listed in table 6. TAMU is registered with CDC’s Select Agent Program and approved for work on several select agents. TAMU has several BSL-3 laboratories and works extensively on animal diseases, including those caused by the select agents Brucella melitensis, Brucella abortus, and Brucella suis. Brucella can cause brucellosis in humans, a disease producing flu-like symptoms such as fever and fatigue; in severe cases, it can cause infections of the central nervous system. TAMU is also registered for use of Coxiella burnetii, an animal agent that can cause Q fever in humans. According to the CDC, in February 2006, a lab worker was helping with an experiment to aerosolize Brucella. The lab worker had no familiarity with the specifics of working with Brucella, but did have experience working with the aerosol chamber. It was determined that the lab worker was exposed to the agent while cleaning the chamber after the experiment was run. At the time of the exposure, neither the exposed worker nor anyone else had any indication that an exposure had taken place. In fact, CDC inspectors were on campus days after the Brucella exposure for a routine inspection but uncovered nothing that alerted them to the fact that an incident had taken place. Symptoms did not start to appear in the exposed worker until more than a month after the exposure, and then the symptoms were flu-like. Confirmation of brucellosis was not made until another month had passed and symptoms had worsened. However, once the brucellosis determination had been made, the worker notified appropriate authorities at TAMU. But no report was subsequently made to the CDC, as required by federal regulation, and a year passed before—by chance—an independent watchdog group reviewing unrelated documentation, acquired through the Freedom of Information Act (FOIA), uncovered the lapse in reporting and forced TAMU to notify the CDC. The subsequent investigation by the CDC revealed a number of other violations of the select agent regulations, including that (1) TAMU was not authorized to aerosolize Brucella in the first place; (2) a number of lab workers from another BSL-3 lab had tested positive for antibodies to Coxiella in their blood serum, suggesting that potential exposures to that agent had taken place as well, without reports going to the CDC; (3) unauthorized individuals had access to select agents and toxins; (4) vials and animals were missing; and (5) other protocol and procedural deficiencies existed. On April 20, 2007, the CDC issued a cease-and-desist order for all work on Brucella within the affected high-containment lab, as well as all aerosolization work at TAMU involving select agents and toxins. That order was subsequently expanded to include all work with select agents and toxins at TAMU—the first time the CDC has ever issued such an order entitywide under the select agent regulations. That order remains in effect as of the date of this testimony. 
In response to the global spread of emerging infectious diseases and the threat of bioterrorism, high-containment biosafety laboratories--specifically, biosafety level (BSL) 3 and BSL-4 labs--have been proliferating in the United States. These labs--classified by the type of agents used and the risk posed to personnel, the environment, and the community--often contain the most dangerous infectious disease agents, such as Ebola, smallpox, and avian influenza. This testimony addresses (1) the extent to which there has been a proliferation of BSL-3 and BSL-4 labs, (2) federal agencies' responsibility for tracking this proliferation and determining the associated risks, and (3) the lessons that can be learned from recent incidents at three high-containment biosafety labs. To address these objectives, GAO asked 12 federal agencies involved with high-containment labs about their missions and whether they tracked the number of labs overall. GAO also reviewed documents from these agencies, such as pertinent legislation, regulation, and guidance. Finally, GAO interviewed academic experts in microbiological research. A major proliferation of high-containment BSL-3 and BSL-4 labs is taking place in the United States, according to the literature, federal agency officials, and experts. The expansion is taking place across many sectors--federal, academic, state, and private--and all over the United States. Concerning BSL-4 labs, which handle the most dangerous agents, the number of these labs has increased from 5--before the terrorist attacks of 2001--to 15, including at least 1 in the planning stage. Information on expansion is available about high-containment labs that are registered with the Centers for Disease Control and Prevention (CDC) and the U.S. Department of Agriculture's (USDA) Select Agent Program, and that are federally funded. However, much less is known about the expansion of labs outside the Select Agent Program, as well as of nonfederally funded labs, including their location, activities, and ownership. No single federal agency, according to 12 agencies' responses to our survey, has the mission to track the overall number of BSL-3 and BSL-4 labs in the United States. Though several agencies have a need to know, no one agency knows the number and location of these labs in the United States. Consequently, no agency is responsible for determining the risks associated with the proliferation of these labs. We identified six lessons from three recent incidents: the failure of Texas A&M University (TAMU) to report exposures to select agents to the CDC; a power outage at the CDC's new BSL-4 lab in Atlanta, Georgia; and the release of foot-and-mouth disease virus at Pirbright in the United Kingdom. These lessons highlight the importance of (1) identifying and overcoming barriers to reporting in order to enhance biosafety through shared learning from mistakes and to assure the public that accidents are examined and contained; (2) training lab staff in general biosafety, as well as in the specific agents being used in the labs, to ensure maximum protection; (3) developing mechanisms for informing medical providers about all the agents that lab staff work with to ensure quick diagnosis and effective treatment; (4) addressing confusion over the definition of exposure to aid in the consistency of reporting; (5) ensuring that BSL-4 labs' safety and security measures are commensurate with the level of risk these labs present; and (6) maintaining high-containment labs to ensure the integrity of their physical infrastructure over time.
The primary mission of the Federal Aviation Administration (FAA) is to provide a safe, secure, and efficient global aerospace system that contributes to national security and the promotion of U.S. aerospace safety. FAA’s ability to fulfill this mission depends on the adequacy and reliability of the nation’s air traffic control (ATC) systems—a vast network of computer hardware, software, and communications equipment. To accommodate forecasted growth in air traffic and to relieve the problems of aging ATC systems, FAA embarked on an ambitious ATC modernization program in 1981. FAA now estimates that it will spend about $51 billion to replace and modernize ATC systems through 2007. Our work over the years has chronicled many FAA problems in meeting ATC projects’ cost, schedule, and performance goals. As a result of these issues as well as the tremendous cost, complexity, and mission criticality of the modernization program, we designated the program as a high-risk information technology initiative in 1995, and it has remained on our high-risk list since that time. Automated information processing and display, communication, navigation, surveillance, and weather resources permit air traffic controllers to view key information—such as aircraft location, aircraft flight plans, and prevailing weather conditions—and to communicate with pilots. These resources reside at, or are associated with, several ATC facilities—ATC towers, terminal radar approach control facilities, air route traffic control centers (en route centers), flight service stations, and the ATC System Command Center. Figure 2 shows a visual summary of ATC over the continental United States and oceans. Faced with growing air traffic and aging equipment, in 1981, FAA initiated an ambitious effort to modernize its ATC system. This effort involves the acquisition of new surveillance, data processing, navigation, and communications equipment, in addition to new facilities and support equipment. Initially, FAA estimated that its ATC modernization effort would cost $12 billion and could be completed over 10 years. Now, 2 decades and $35 billion later, FAA expects to need another $16 billion through 2007 to complete key projects, for a total cost of $51 billion. Over the past 2 decades, many of the projects that make up the modernization program have experienced substantial cost overruns, schedule delays, and significant performance shortfalls. Our work over the years has documented many of these shortfalls. As a result of these problems, as well as the tremendous cost, complexity, and mission criticality of the modernization program, we designated the program as a high-risk information technology initiative in 1995, and it has remained on our high-risk list since that time. Our work since the mid-1990s has pinpointed root causes of the modernization program’s problems, including (1) immature software acquisition capabilities, (2) lack of a complete and enforced system architecture, (3) inadequate cost estimating and cost accounting practices, (4) an ineffective investment management process, and (5) an organizational culture that impaired the acquisition process. We have made over 30 recommendations to address these issues, and FAA has made substantial progress in addressing them. Nonetheless, in our most recent high-risk report, we noted that more remains to be done—and with FAA still expecting to spend billions on new ATC systems, these actions are as critical as ever. 
In March 1997, we reported that FAA’s processes for acquiring software, the most costly and complex component of its ATC systems, were ad hoc, sometimes chaotic, and not repeatable across projects. We also reported that the agency lacked an effective management structure for ensuring software process improvement. As a result, the agency was at great risk of not delivering promised software capabilities on time and within budget. We recommended that FAA establish a Chief Information Officer organizational structure, as prescribed in the Clinger-Cohen Act, and assign responsibility for software acquisition process improvement to this organization. We also recommended several actions intended to help FAA improve its software acquisition capabilities by institutionalizing mature processes. These included developing a comprehensive plan for process improvement, allocating adequate resources to ensure that improvement efforts were implemented, and requiring that projects achieve a minimum level of maturity before being approved. FAA has implemented most of our recommendations. The agency established a Chief Information Officer position that reports directly to the administrator and gave this position responsibility for process improvement. The Chief Information Officer’s process improvement office developed a strategy and led the way in developing an integrated framework for improving maturity in system acquisition, development, and engineering processes. Some of the business organizations within FAA, including the organizations responsible for ATC acquisitions and operations, adopted the framework and provided resources to process improvement efforts. FAA did not, however, implement our recommendation to require that projects achieve a minimum level of maturity before being approved. Officials reported that rather than establish arbitrary thresholds for maturity, FAA intended to evaluate process areas that were most critical or at greatest risk for each project during acquisition management reviews. Recent legislation and an executive order have led to major changes in the way that FAA manages its ATC mission. In April 2000, the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (Air-21) established the position of Chief Operating Officer for the ATC system. In December 2000, executive order 13180 instructed FAA to establish a performance-based organization known as the Air Traffic Organization and to have the Chief Operating Officer lead this organization under the authority of the FAA administrator. This order, amended in June 2002, called for the Air Traffic Organization to enhance the FAA’s primary mission of ensuring the safety, security, and efficiency of the National Airspace System and further improve the delivery of air traffic services to the American public by reorganizing air traffic services and related offices into a performance-based, results-oriented organization. The order noted that as a performance-based organization, the Air Traffic Organization would be able to take better advantage of the unique procurement and personnel authorities currently used by FAA, as well as of the additional management reforms enacted by Congress under Air-21. In addition, the Air Traffic Organization is responsible for developing methods to accelerate ATC modernization, improving aviation safety related to ATC, and establishing strong incentives to agency managers for achieving results. 
In leading the new Air Traffic Organization, the Chief Operating Officer’s responsibilities include establishing and maintaining organizational and individual goals, a 5-year strategic plan including ATC system mission and objectives, and a framework agreement with the Administrator to establish the new organization’s relationships with other FAA organizations. In August 2003, the first Chief Operating Officer joined the agency and initiated a reorganization combining the separate ATC-related organizations and offices into the Air Traffic Organization. An essential aspect of FAA’s ATC modernization program is the quality of the software and systems involved, which is heavily influenced by the quality and maturity of the processes used to acquire, develop, manage, and maintain them. Carnegie Mellon University’s Software Engineering Institute (SEI), recognized for its expertise in software and system processes, has developed the Capability Maturity Model Integration (CMMI) and a CMMI appraisal methodology to evaluate, improve, and manage system and software development and engineering processes. The CMMI model and appraisal methodology provide a logical framework for measuring and improving key processes needed for achieving high-quality software and systems. The model can help an organization set process improvement objectives and priorities and improve processes; the model can also provide guidance for ensuring stable, capable, and mature processes. According to SEI, organizations that implement such process improvements can achieve better project cost and schedule performance and higher quality products. In brief, the CMMI model identifies 25 process areas—clusters of related practices that, when performed collectively, satisfy a set of goals that are considered important for making significant improvements in that area. Table 1 describes these process areas. The CMMI model provides two alternative ways to view these process areas. One way, called continuous representation, focuses on improving capabilities in individual process areas. The second way, called staged representation, groups process areas together and focuses on achieving increased maturity levels by improving the group of process areas. The CMMI appraisal methodology calls for assessing process areas by determining whether the key practices are implemented and whether the overarching goals are satisfied. Under continuous representation, successful implementation of these practices and satisfaction of these goals result in the achievement of successive capability levels in a selected process area. CMMI capability levels range from 0 to 5, with level 0 meaning that the process is either not performed or partially performed; level 1 meaning that the basic process is performed; level 2 meaning that the process is managed; level 3 meaning that the process is defined throughout the organization; level 4 meaning that the process is quantitatively managed; and level 5 meaning that the process is optimized. Figure 3 provides details on CMMI capability levels. The Chairman, House Committee on Government Reform, and the Chairman of that Committee’s Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census requested that we evaluate FAA’s software and system development processes used to manage its ATC modernization. 
Our objectives were (1) to evaluate FAA’s capabilities for developing and acquiring software and systems on its ATC modernization program and (2) to assess the actions FAA has under way to improve these capabilities. To evaluate FAA’s capabilities for developing and acquiring software and systems, we applied the CMMI model (continuous representation) and its related appraisal methodology to four FAA projects. Our appraisers were all SEI-trained software and information systems specialists. In addition, we employed SEI-trained consultants as advisors on our first evaluation to ensure proper application of the model and appraisal methodology. In consultation with FAA officials, we selected four FAA projects with high impact, visibility, and cost, which represented different air traffic domains and reflected different stages of life cycle development. The projects included the Voice Switching and Control System (VSCS), the Integrated Terminal Weather System (ITWS), the En Route Automation Modernization (ERAM) project, and the Airport Surface Detection Equipment–Model X (ASDE-X). The four projects are described in table 2. In conjunction with FAA’s process improvement organization, we identified relevant CMMI process areas for each appraisal. In addition, because system deployment is an important aspect of FAA systems management that is not included in CMMI, we used the deployment, transition, and disposal process area from FAA’s integrated Capability Maturity Model, version 2. For consistency, we merged FAA’s criteria with SEI’s framework and added the standard goals and practices needed to achieve capability level 2. In selected cases, we did not review a certain process area because it was not relevant to the current stage of a project’s life cycle. For example, we did not evaluate supplier agreement management or deployment on VSCS because the system is currently in operation, and these process areas are no longer applicable to this system. Table 3 displays the CMMI process areas that we reviewed for each project. For each process area reviewed, we evaluated project-specific documentation and interviewed project officials to determine whether key practices were implemented and goals were achieved. In accordance with CMMI guidance, we characterized practices as fully implemented, largely implemented, partially implemented, and not implemented, and characterized goals as satisfied or unsatisfied. After combining the practices and goals, the team determined if successive capability levels were achieved. According to the CMMI appraisal method, practices must be largely or fully implemented in order for a goal to be satisfied. Further, all goals must be satisfied in order to achieve a capability level. In order to achieve advanced capability levels, all preceding capability levels must be achieved. For example, a prerequisite for level 2 is the achievement of level 1. As agreed with FAA process improvement officials, we evaluated the projects through capability level 2. Consistent with the CMMI appraisal methodology, we validated our findings by sharing preliminary observations with the project team so that they were able to provide additional documentation or information as warranted. To assess the actions FAA has under way to improve its system and software acquisition and development processes, we evaluated process improvement strategies and plans. 
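The scoring rules just described are mechanical: a goal is satisfied only when every one of its practices is fully or largely implemented, and a capability level is achieved only when all of its goals are satisfied and every lower level has already been achieved. The sketch below expresses this roll-up in code; it is a simplified illustration under our own naming conventions, not part of the SEI appraisal method or of the tools used in our appraisals.

```python
from typing import Dict, List

# Practice characterizations used in the appraisal method.
FULLY = "fully implemented"
LARGELY = "largely implemented"
PARTIALLY = "partially implemented"
NOT_IMPLEMENTED = "not implemented"


def goal_satisfied(practice_ratings: List[str]) -> bool:
    """A goal is satisfied only if every practice is fully or largely implemented."""
    return all(rating in (FULLY, LARGELY) for rating in practice_ratings)


def capability_level(goals_by_level: Dict[int, List[List[str]]]) -> int:
    """Return the highest capability level achieved for one process area.

    goals_by_level maps a level (1, 2, ...) to the goals that must be satisfied
    at that level; each goal is represented by the ratings of its practices.
    A level is achieved only if all of its goals are satisfied and every lower
    level has already been achieved.
    """
    achieved = 0
    for level in sorted(goals_by_level):
        if all(goal_satisfied(goal) for goal in goals_by_level[level]):
            achieved = level
        else:
            break  # once a level is missed, higher levels cannot be achieved
    return achieved


# Hypothetical project: most practices are implemented, but one partially
# implemented level 2 practice (for example, quality assurance of the process)
# leaves a goal unsatisfied, so the project stops at level 1.
example = {
    1: [[FULLY, LARGELY, FULLY]],
    2: [[FULLY, FULLY], [LARGELY, PARTIALLY]],
}
print(capability_level(example))  # prints 1
```

The example at the bottom of the sketch shows why a project can have most of its practices implemented and still fall short of level 2: a single partially implemented practice leaves a goal unsatisfied, and an unsatisfied goal blocks the level.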
We also evaluated the progress the agency has made in expanding its process improvement initiative, both through the maturity of the model and the acceptance of the model by project teams. We also interviewed officials from the offices of the Chief Information Officer and the Chief Operating Officer to determine the effect current changes in the ATC organization could have on the process improvement initiatives. The Department of Transportation and FAA provided oral comments on a draft of this report. These comments are presented in chapter 17. We performed our work from September 2003 through July 2004 in accordance with generally accepted government auditing standards. The purpose of project planning is to establish and maintain plans that define the project activities. This process area involves developing and maintaining a plan, interacting with stakeholders, and obtaining commitment to the plan. As figure 4 shows, three of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. The fourth project would have achieved level 1 if it had performed one more practice (see the overview in table 4 for details). None of the four projects satisfied all criteria for the “managing” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across most of the projects occurred in the areas of monitoring and controlling the project planning process and in ensuring quality assurance of the process. As a result of these weaknesses, FAA is exposed to increased risks that projects will not meet cost, schedule, or performance goals and that projects will not meet mission needs. Looked at another way, of the 96 practices we evaluated in this process area, FAA projects had 88 practices that were fully or largely implemented and 8 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 5 through 12. Specifically, tables 5 and 6 provide results for VSCS; tables 7 and 8 provide results for ERAM; tables 9 and 10 provide results for ITWS; and tables 11 and 12 provide results for ASDE-X. The purpose of project monitoring and control is to provide an understanding of the project’s progress so that appropriate corrective actions can be taken when the project’s performance deviates significantly from the plan. Key activities include monitoring activities, communicating status, taking corrective action, and determining progress. As shown in figure 5, three of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. The fourth project would have achieved level 1 if it had performed one more practice (see the overview in table 13 for details). None of the four projects satisfied all criteria for the “managing” capability level (level 2). While the projects had differing weaknesses that contributed to this result, a common weakness across most of the projects occurred in the area of ensuring quality assurance of the process. As a result of this weakness, FAA is exposed to increased risks that projects will not meet cost, schedule, or performance goals and that projects will not meet mission needs. Looked at another way, of the 80 practices we evaluated in this process area, FAA projects had 74 practices that were fully or largely implemented and 6 practices that were partially or not implemented. 
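The "looked at another way" tallies that appear throughout these results are simple counts of practice characterizations summed across the four projects for a given process area. The sketch below shows how such a tally would be computed; the characterizations in it are hypothetical and are not FAA's actual appraisal data.

```python
from typing import Dict, List, Tuple


def tally(ratings_by_project: Dict[str, List[str]]) -> Tuple[int, int]:
    """Count practices fully or largely implemented versus partially or not implemented."""
    implemented = 0
    shortfall = 0
    for ratings in ratings_by_project.values():
        for rating in ratings:
            if rating in ("fully implemented", "largely implemented"):
                implemented += 1
            else:
                shortfall += 1
    return implemented, shortfall


# Hypothetical characterizations for one process area, five practices per
# project (real appraisals cover more practices per project than this).
sample = {
    "VSCS": ["fully implemented"] * 5,
    "ERAM": ["fully implemented"] * 4 + ["partially implemented"],
    "ITWS": ["largely implemented"] * 5,
    "ASDE-X": ["fully implemented"] * 4 + ["not implemented"],
}
print(tally(sample))  # prints (18, 2)
```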
Additional details on each project’s appraisal results at successive capability levels are provided in tables 14 through 21. Specifically, tables 14 and 15 provide results for VSCS; tables 16 and 17 provide results for ERAM; tables 18 and 19 provide results for ITWS; and tables 20 and 21 provide results for ASDE-X. The purpose of risk management is to identify potential problems before they occur, so that risk-handling activities may be planned and invoked as needed across the life of the product or project to mitigate adverse impacts on achieving objectives. Effective risk management includes early and aggressive identification of risks through the involvement of relevant stakeholders. Early and aggressive detection of risk is important, because it is typically easier, less costly, and less disruptive to make changes and correct work efforts during the earlier phases of the project. As shown in figure 6, three of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. The fourth project would have achieved level 1 if it had performed one more practice (see the overview in table 22 for details). Two of the four FAA projects also satisfied all criteria for the “managed” capability level (level 2) in this process area. While the other projects had differing weaknesses that contributed to this result, common weaknesses across some of the projects occurred in the area of monitoring and controlling the risk management process and in ensuring quality assurance of the process. As a result of these weaknesses, FAA faces increased likelihood that project risks will not be identified and addressed in a timely manner—thereby increasing the likelihood that projects will not meet cost, schedule, or performance goals. Looked at another way, of the 68 practices we evaluated in this key process area, FAA projects had 59 practices that were fully or largely implemented and 9 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 23 through 30. Specifically, tables 23 and 24 provide results for VSCS; tables 25 and 26 provide results for ERAM; tables 27 and 28 provide results for ITWS; and tables 29 and 30 provide results for ASDE-X. The purpose of requirements development is to produce and analyze customer, product, and product-component needs. This process area addresses the needs of relevant stakeholders, including those pertinent to various product life-cycle phases. It also addresses constraints caused by the selection of design solutions. The development of requirements includes elicitation, analysis, validation, and communication of customer and stakeholder needs and expectations. As shown in figure 7, all four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. None of the four projects satisfied all criteria for the “managing” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across multiple projects occurred in the areas of training people and in ensuring quality assurance of the requirements development process, as shown in the overview in table 31. As a result of these weaknesses, FAA is exposed to increased risks that projects will not fulfill mission and user needs. 
Looked at another way, of the 84 practices we evaluated in this key process area, FAA projects had 77 practices that were fully or largely implemented and 7 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 32 through 39. Specifically, tables 32 and 33 provide results for VSCS; tables 34 and 35 provide results for ERAM; tables 36 and 37 provide results for ITWS; and tables 38 and 39 provide results for ASDE-X. The purpose of requirements management is to manage the requirements of the project’s products and product components and to identify inconsistencies between those requirements and the project’s plans and work products. This process area includes managing all technical and nontechnical requirements and any changes to these requirements as they evolve. As shown in figure 8, all four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area, but none satisfied all criteria for achieving a “managed” capability level (level 2). While the projects had differing weaknesses that contributed to this result, a common weakness across most of the projects occurred in the area of ensuring quality assurance of the requirements management process, as shown in the overview in table 40. As a result of these weaknesses, FAA is exposed to increased risks that projects will not fulfill mission and user needs. Looked at another way, of the 60 practices we evaluated in this key process area, FAA projects had 54 practices that were fully or largely implemented and 6 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 41 through 48. Specifically, tables 41 and 42 provide results for VSCS; tables 43 and 44 provide results for ERAM; tables 45 and 46 provide results for ITWS; and tables 47 and 48 provide results for ASDE-X. The purpose of the technical solution process area is to design, develop, and implement products, product components, and product-related life-cycle processes to meet requirements. This process involves evaluating and selecting solutions that potentially satisfy an appropriate set of allocated requirements, developing detailed designs, and implementing the design. As shown in figure 9, three FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. The fourth project would have achieved level 1 if it had performed two more practices (see the overview in table 49 for details). None of the four projects satisfied all criteria for the “managing” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across most of the projects occurred in the area of ensuring quality assurance of the technical solution process. As a result of this weakness, FAA is exposed to increased risks that projects will not meet mission needs. Looked at another way, of the 72 practices we evaluated in this key process area, FAA projects had 62 practices that were fully or largely implemented and 10 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 50 through 57. Specifically, tables 50 and 51 provide results for VSCS; tables 52 and 53 provide results for ERAM; tables 54 and 55 provide results for ITWS; and tables 56 and 57 provide results for ASDE-X. 
The purpose of the product integration process is to assemble the product components, ensure that the integrated product functions properly, and deliver the product. A critical aspect of this process is managing the internal and external interfaces of the products and product components, in one stage or in incremental stages. For this process area, we did not perform an appraisal for the ERAM project, because it was at a stage in which product integration was not applicable. As shown in figure 10, the three remaining projects satisfied all criteria for the “performing” capability level (level 1) in this process area. None of the projects satisfied all criteria for the “managing” capability level (level 2). While the projects had differing weaknesses that contributed to this result, common weaknesses across most of the projects occurred in the areas of monitoring and controlling the product integration process and ensuring quality assurance of the process, as shown in the overview in table 58. As a result of these weaknesses, FAA is exposed to increased risk that product components will not be compatible, resulting in projects that will not meet cost, schedule, or performance goals. Looked at another way, of the 54 practices we evaluated in this process area, FAA projects had 49 practices that were fully or largely implemented and 5 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 59 through 64. Specifically, tables 59 and 60 provide results for VSCS; tables 61 and 62 provide results for ITWS; and tables 63 and 64 provide results for ASDE-X. The purpose of verification is to ensure that selected work products meet their specified requirements. This process area involves preparing for and performing tests and identifying corrective actions. Verification of work products substantially increases the likelihood that the product will meet the customer, product, and product-component requirements. As shown in figure 11, only one of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. As shown in the overview in table 65, key weaknesses in preparing and conducting peer reviews prevented the other three projects from achieving level 1. None of the four projects satisfied all criteria for the “managing” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across most of the projects occurred in the areas of monitoring and controlling the verification process and in ensuring quality assurance of the process. As a result of these weaknesses, FAA is exposed to increased risk that the product will not meet the user and mission requirements, increasing the likelihood that projects will not meet cost, schedule, or performance goals. Looked at another way, of the 68 practices we evaluated in this process area, FAA projects had 51 practices that were fully or largely implemented and 17 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 66 through 73. Specifically, tables 66 and 67 provide results for VSCS; tables 68 and 69 provide results for ERAM; tables 70 and 71 provide results for ITWS; and tables 72 and 73 provide results for ASDE-X. 
The purpose of validation is to demonstrate that a product or product component fulfills its intended use when placed in its intended environment. Validation activities are vital to ensuring that the products are suitable for use in their intended operating environment. As shown in figure 12, all four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. None of the four projects satisfied all criteria for the “managing” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across most of the projects occurred in the areas of monitoring and controlling the validation process and in ensuring quality assurance of the process, as shown in the overview in table 74. As a result of these weaknesses, FAA is exposed to increased risk that the project will not fulfill its intended use, thereby increasing the likelihood that the projects will not meet cost, schedule, or performance goals. Looked at another way, of the 56 practices we evaluated in this process area, FAA projects had 47 practices that were fully or largely implemented and 9 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 75 through 82. Specifically, tables 75 and 76 provide results for VSCS; tables 77 and 78 provide results for ERAM; tables 79 and 80 provide results for ITWS; and tables 81 and 82 provide results for ASDE-X. The purpose of configuration management is to establish and maintain the integrity of work products. This process area includes both the functional processes used to establish and track work product changes and the technical systems used to manage these changes. Through configuration management, accurate status and data are provided to developers, end users, and customers. As shown in figure 13, three of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. The fourth project would have achieved level 1 if it had performed two more practices (see the overview in table 83 for details). Only one of the four projects satisfied all criteria for the “managing” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across some of the projects occurred in the areas of monitoring and controlling the process and in ensuring the quality assurance of the configuration management process, as shown in the overview in table 83. As a result of these weaknesses, FAA is exposed to increased risk that the project teams will not effectively manage their work products, resulting in projects that do not meet cost, schedule, or performance goals. Looked at another way, of the 68 practices we evaluated in this process area, FAA projects had 60 practices that were fully or largely implemented and 8 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 84 through 91. Specifically, tables 84 and 85 provide results for VSCS; tables 86 and 87 provide results for ERAM; tables 88 and 89 provide results for ITWS; and tables 90 and 91 provide results for ASDE-X. The purpose of process and product quality assurance is to provide staff and management with objective insights into processes and associated work products. 
This process area includes the objective evaluation of project processes and products against approved descriptions and standards. Through process and product quality assurance, the project is able to identify and document noncompliance issues and provide appropriate feedback to project members. As shown in figure 14, only one of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. Weaknesses in the objective evaluation of designated performed processes, work products, and services against the applicable process descriptions, standards, and procedures prevented the other three projects from achieving level 1. None of the four projects satisfied all criteria for the “managing” capability level (level 2). Table 92 provides an overview of our appraisal results. As shown in the table, while the four projects had differing weaknesses that contributed to this result, common weaknesses across multiple projects occurred in the areas of establishing a plan, providing resources, training people, providing configuration management, identifying stakeholders, monitoring and controlling the process, ensuring quality assurance, and reviewing the status of the quality assurance process with higher level managers. As a result of these weaknesses, FAA is exposed to increased risk that the projects will not effectively implement key management processes, resulting in projects that will not meet cost, schedule, or performance goals, and that will not meet mission needs. Looked at another way, of the 56 practices we evaluated in this process area, FAA projects had 33 practices that were fully or largely implemented and 23 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 93 through 100. Specifically, tables 93 and 94 provide results for VSCS; tables 95 and 96 provide results for ERAM; tables 97 and 98 provide results for ITWS; and tables 99 and 100 provide results for ASDE-X. The purpose of measurement and analysis is to develop and sustain a measurement capability that is used to support management information needs. This process area includes the specification of measures, data collection and storage, analysis techniques, and the reporting of these values. This process allows users to objectively plan and estimate project activities and identify and resolve potential issues. As shown in figure 15, none of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. Weaknesses in managing and storing measurement data, measurement specifications, and analysis results kept the projects from achieving level 1. Further, none of the four projects satisfied all criteria for the “managing” capability level (level 2). As shown in the overview in table 101, while the four projects had differing weaknesses that contributed to this result, common weaknesses across multiple projects occurred in the areas of establishing an organizational policy, establishing a plan, providing resources, assigning responsibility, training people, providing configuration management, identifying stakeholders, monitoring and controlling the process, ensuring quality assurance, and reviewing the status of the measurement and analysis process with higher level management. As a result of these weaknesses, FAA is exposed to increased risk that the projects will not have adequate estimates of work metrics or a sufficient view into actual performance. 
This increases the likelihood that projects will not meet cost, schedule, or performance goals, and that projects will not meet mission needs. Looked at another way, of the 72 practices we evaluated in this process area, FAA projects had 30 practices that were fully or largely implemented and 42 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 102 through 109. Specifically, tables 102 and 103 provide results for VSCS; tables 104 and 105 provide results for ERAM; tables 106 and 107 provide results for ITWS; and tables 108 and 109 provide results for ASDE-X. The purpose of supplier agreement management is to manage the acquisition of products. This process area involves determining the type of acquisition that will be used for the products acquired; selecting suppliers; establishing, maintaining, and executing agreements; accepting delivery of acquired products; and transitioning acquired products to the project, among other items. For this process area, we did not perform an appraisal for the VSCS or ITWS projects, because these projects were at stages in which supplier agreement management was not applicable. As shown in figure 16, both of the remaining FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. One of the two projects satisfied all criteria for the “managing” capability level (level 2). In not consistently managing this process, FAA is exposed to increased risk that projects will not be performed in accordance with contractual requirements, resulting in projects that will not meet cost, schedule, or performance goals, and systems that will not meet mission needs. Looked at another way, of the 34 practices we evaluated in this process area, FAA projects had 33 practices that were fully or largely implemented and 1 practice that was partially implemented. Table 110 provides an overview of the appraisal results. Additional details on each project’s appraisal results at successive capability levels are provided in tables 111 through 114. Specifically, tables 111 and 112 provide results for ERAM, and tables 113 and 114 provide results for ASDE-X. The purpose of the deployment, transition, and disposal process area is to place a product or service into an operational environment, transfer it to the customer and to the support organization, and deactivate and dispose of the replaced product or dispense with the service. This process area includes the design and coordination of plans and procedures for placement of a product or service into an operational or support environment and bringing it into operational use. It ensures that an effective support capability is in place to manage, maintain, and modify the supplied product or service. It further ensures the successful transfer of the product or service to the customer/stakeholder and the deactivation and disposition of the replaced capability. For this process area, we did not perform an appraisal for the VSCS or ERAM projects, because these projects were at stages in which deployment was not applicable. As shown in figure 17, both of the remaining FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. Neither satisfied all criteria for the “managing” capability level (level 2). 
As shown in the overview in table 115, while the projects had differing weaknesses that contributed to this result, a common weakness across projects occurred in the area of monitoring and controlling the deployment process. As a result of this weakness, FAA is exposed to increased risk that the projects will not be delivered on time, resulting in projects that will not meet cost, schedule, or performance goals. Looked at another way, of the 32 practices we evaluated in this process area, FAA projects had 28 practices that were fully or largely implemented and 4 practices that were partially implemented. Additional details on each project's appraisal results at successive capability levels are provided in tables 116 through 119. Specifically, tables 116 and 117 provide results for ITWS, and tables 118 and 119 provide results for ASDE-X.

Since our 1997 report, the Federal Aviation Administration's (FAA) process improvement initiative has grown tremendously in rigor and scope. In our earlier appraisal, we found that FAA's performance of key processes was ad hoc and sometimes chaotic, whereas current results show that FAA projects are performing most key practices. However, these process improvement activities are not required throughout the air traffic organizations, and the recurring weaknesses we identified in our project-specific evaluations are due in part to the choices these projects were given in deciding whether to and how to adopt process improvement initiatives. Further, because of a recent reorganization, the new Air Traffic Organization's commitment to this process improvement initiative is not certain. As a result, FAA is not consistent in its adoption and management of process improvement efforts, so that individual projects' costs, schedules, and performance remain at risk. Without agencywide adoption of process improvement initiatives, the agency cannot increase the maturity of its organizational capabilities.

Over the past several years, FAA has made considerable progress in improving its processes for acquiring and developing software and systems. Acting on our prior recommendations, in 1999, FAA established a centralized process improvement office that reports directly to the Chief Information Officer. This office led the government in an effort to integrate various standards and models into a single maturity model, called the integrated Capability Maturity Model (iCMM). In fact, FAA's iCMM served as a demonstration for the Software Engineering Institute's effort to integrate various models into its own Capability Maturity Model Integration (CMMI). The Chief Information Officer's process improvement office also developed and sponsored iCMM-related training, and by late 2003, it had trained over 7,000 participants. The training offered ranges from overviews on how to use the model to more focused courses in such specific process areas as quality assurance, configuration management, and project management. The office also guides FAA organizations in using the model and leads appraisal teams in evaluating the process maturity of the projects and organizations that adopted the model. In addition to the Chief Information Officer–sponsored process improvement efforts, several of FAA's business areas, including the business areas with responsibility for air traffic control (ATC) system acquisitions and operations, endorsed and set goals for process improvement activities using the iCMM.
As a result, there has been a continuing growth over the years in the number of individual projects and umbrella organizations that adopted process improvement and the iCMM model. Specifically, the number of projects and organizations (which account for multiple projects) undergoing iCMM appraisals grew from 1 project in 1997, to 28 projects and 3 organizations by 2000, to 39 projects and 11 organizations by 2003. These projects and organizations have demonstrated improvements in process maturity. Under the iCMM model, in addition to achieving capability levels in individual process areas, entities can achieve successive maturity levels by demonstrating capabilities in a core set of process areas. FAA process improvement officials reported that by 2000, 10 projects and one organization had achieved iCMM maturity level 2. To date, 14 projects and three organizations have achieved iCMM maturity level 2, and one project and two organizations have achieved iCMM maturity level 3. Additionally, 13 projects and four organizations achieved capability levels 2 or 3 in one or more process areas.

Moreover, in internal surveys, the programs and organizations pursuing process improvement have consistently reported enhanced productivity, higher quality, increased ability to predict schedules and resources, higher morale, and better communication and teamwork. These findings are reiterated by the Software Engineering Institute in its recent study of the benefits of using the CMMI model for process improvement. According to that study, organizations that implement such process improvements can achieve better project cost and schedule performance and higher quality products. Specifically, of the 12 cases that the Software Engineering Institute assessed, there were nine examples of cost-related benefits, including reductions in the cost to find and fix a defect, and in overall cost savings; eight cases of schedule-related benefits, including decreased time needed to complete tasks and increased predictability in meeting schedules; five cases of measurable improvements in quality, mostly related to reducing defects over time; three cases of improvements in customer satisfaction; and three cases showing positive return on investment from their CMMI-based process improvements.

Leading organizations have found that in order to achieve advanced system management capabilities and to gain the benefits of more mature processes, an organization needs to institutionalize process improvement. Specifically, to be effective, an organization needs senior-level endorsement of its process improvement initiatives and consistency in the adoption and management of process improvement efforts.

In recent years, FAA's ATC-related organizations have encouraged process improvement through the iCMM model. Specifically, FAA's acquisition policy calls for continuous process improvement and endorses the use of the iCMM model. Also, the former air traffic organizations set annual goals for improving maturity using the iCMM model in selected projects and process areas. For example, in 1997, the former ATC acquisition organization set a goal of having 11 selected projects achieve iCMM maturity level 2 by 1999 and maturity level 3 by 2001. While the projects did not meet the 1999 goal, several projects achieved level 2 in 2000, and most made improvements in selected process areas.
However, FAA did not institutionalize the use of the iCMM model throughout the organization and, as a result, individual projects' use and application of the model has been voluntary. Individual project teams could determine whether or not they would implement the model and which process areas to work on. In addition, project teams could decide when, if ever, to seek an appraisal of their progress in implementing the model. Because of this voluntary approach, to date fewer than half of the projects listed in FAA's system architecture have sought appraisals in at least one process area. Specifically, of the 48 systems listed in FAA's system architecture, only 18 have sought appraisals. Some of the mission critical systems that have not sought appraisals include an advanced radar system and an air traffic information processing system.

Another result of this voluntary approach is that individual projects are making uneven progress in core areas. For example, the four projects that we appraised ranged from capability levels 0 to 2 in the risk management process area: in other words, projects varied from performing only part of the basic process, to performing the basic process, to actively managing the process. As another example, all four of the projects we appraised captured some metrics on their performance. However, these metrics varied greatly from project to project in depth, scope, and usefulness. Individual weaknesses in key processes could lead to systems that do not meet the users' needs, exceed estimated costs, or take longer than expected to complete.

While FAA encouraged process improvement in the past, the agency's current commitment to process improvement in its new Air Traffic Organization is not certain. FAA recently moved its air traffic–related organizations into a single, performance-based organization, the Air Traffic Organization, under the direction of a Chief Operating Officer. The Chief Operating Officer is currently reevaluating all policies and processes, and plans to issue new acquisition guidance in coming months. As a result, the Air Traffic Organization does not currently have a policy that requires organizations and project teams to implement process improvement initiatives such as the iCMM. It also does not have a detailed plan—including goals, metrics, and milestones—for implementing these initiatives throughout the organization, nor does it have a mechanism for enforcing compliance with any requirements—such as taking a project's capability levels into consideration before approving new investments. Further, because the Air Traffic Organization's commitment to the iCMM is not yet certain, FAA's centralized process improvement organization is unable to define a strategy for improving and overseeing process improvement efforts in the Air Traffic Organization.

Unless the Chief Operating Officer demonstrates a strong commitment to process improvement and establishes a consistent, institutionalized approach to implementing, enforcing, and evaluating this process improvement, FAA risks taking a major step backwards in its capabilities for acquiring ATC systems and software. That is, FAA may not be able to ensure that critical projects will continue to make progress in improving systems acquisition and development capabilities, and the agency is not likely to proceed to the more advanced capability levels, which focus on organizationwide management of processes.
Further, FAA may miss out on the benefits that process improvement models offer, such as better managed projects and improved product quality. Should this occur, FAA will continue to be vulnerable to project management problems including cost overruns, schedule delays, and performance shortfalls. The Federal Aviation Administration (FAA) has made considerable progress in implementing processes for managing software acquisitions. Key projects are performing most of the practices needed to reach a basic level of capability in process areas including risk management, project planning, project monitoring and control, and configuration management. However, recurring weaknesses in the areas of verification, quality assurance, and measurement and analysis prevented the projects from achieving a basic level of performance in these areas and from effectively managing these and other process areas. These weaknesses could lead to systems that do not meet the users’ needs, exceed estimated costs, or take longer than expected to complete. Further, because of the recurring weaknesses in measurement and analysis, senior executives may not receive the project status information they need to make sound decisions on major project investments. FAA’s process improvement initiative has matured in recent years, but more can be done to institutionalize improvement efforts. The Chief Information Officer’s centralized process improvement organization has developed an integrated Capability Maturity Model (iCMM) and demonstrated improvements in those using the model, but to date the agency has not ensured that projects and organizational units consistently adopt such process improvements. Specifically, the agency lacks a detailed plan—including goals, metrics, and milestones—for implementing these initiatives throughout the new Air Traffic Organization, and a mechanism for enforcing compliance with any requirements—such as taking a project’s capability level into consideration before approving new investments. With the recent move of FAA’s air traffic control–related organizations into a performance-based organization, the agency has an opportunity to reiterate the value of process improvement and to achieve the benefits of more mature processes. In the coming months, it will be critical for this new organization to demonstrate its commitment to process improvement through its policies, plans, goals, oversight, and enforcement mechanisms. Without such endorsement, the progress that FAA has made in recent years could dissipate. Given the importance of software-intensive systems to FAA’s air traffic control modernization program, we recommend that the Secretary of Transportation direct the FAA Administrator to ensure that the following five actions take place: The four projects that we appraised should take action to fully implement the practices that we identified as not implemented or partially implemented. The new Air Traffic Organization should establish a policy requiring organizations and project teams to implement iCMM or equivalent process improvement initiatives and a plan for implementing iCMM or equivalent process improvement initiatives throughout the organization. This plan should specify a core set of process areas for all projects, clear criteria for when appraisals are warranted, and measurable goals and time frames. 
The Chief Information Officer’s process improvement office, in consultation with the Air Traffic Organization, should develop a strategy for overseeing all air traffic projects’ progress to successive levels of maturity; this strategy should specify measurable goals and time frames. To enforce process improvement initiatives, FAA investment decision makers should take a project’s capability level in core process areas into consideration before approving new investments in the project. In its oral comments on a draft of this report, Department of Transportation and FAA officials generally concurred with our recommendations, and they indicated that FAA is pleased with the significant progress that it has achieved in improving the processes used to acquire software and systems. Further, these officials noted that FAA has already started implementing changes to address issues identified in the report. They said that progress is evident in both the improved scores, compared with our prior study, and also in the way FAA functions on a day-to-day basis. For example, these officials explained that FAA is now working better as a team because the organization is using cross-organizational teams that effectively share knowledge and best practices for systems acquisition and management. FAA officials also noted that the constructive exchange of information with us was very helpful to them in achieving progress, and they emphasized their desire to maintain a dialog with us to facilitate continued progress. Agency officials also provided technical corrections, which we have incorporated into this report as appropriate.
Since 1981, the Federal Aviation Administration (FAA) has been working to modernize its aging air traffic control (ATC) system. Individual projects have suffered cost increases, schedule delays, and performance shortfalls of large proportions, leading GAO to designate the program a high-risk information technology initiative in 1995. Because the program remains a high risk initiative, GAO was requested to assess FAA's progress in several information technology management areas. This report, one in a series responding to that request, has two objectives: (1) to evaluate FAA's capabilities for developing and acquiring software and systems on its ATC modernization program and (2) to assess the actions FAA has under way to improve these capabilities. FAA has made progress in improving its capabilities for acquiring software-intensive systems, but some areas still need improvement. GAO had previously reported in 1997 that FAA's processes for acquiring software were ad hoc and sometimes chaotic. Focusing on four mission critical air traffic projects, GAO's current review assessed system and software management practices in numerous key areas such as project planning, risk management, and requirements development. GAO found that these projects were generally performing most of the desired practices: of the 900 individual practices evaluated, 83 percent were largely or fully implemented. The projects were generally strong in several areas such as project planning, requirements management, and identifying technical solutions. However, there were recurring weaknesses in the areas of measurement and analysis, quality assurance, and verification. These weaknesses hinder FAA from consistently and effectively managing its mission critical systems and increase the risk of cost overruns, schedule delays, and performance shortfalls. To improve its software and system management capabilities, FAA has undertaken a rigorous process improvement initiative. In response to earlier GAO recommendations, in 1999, FAA established a centralized process improvement office, which has worked to help FAA organizations and projects to improve processes through the use of a standard model, the integrated Capability Maturity Model. This model, which is a broad model that integrates multiple maturity models, is used to assess the maturity of FAA's software and systems capabilities. The projects that have adopted the model have demonstrated growth in the maturity of their processes, and more and more projects have adopted the model. However, the agency does not require the use of this process improvement method. To date, less than half of FAA's major ATC projects have used this method, and the recurring weaknesses we identified in our project-specific evaluations are due in part to the choices these projects were given in deciding whether to and how to adopt this process improvement initiative. Further, as a result of reorganizing its ATC organizations to a performance-based organization, FAA is reconsidering prior policies, and it is not yet clear that process improvement will continue to be a priority. Without a strong senior-level commitment to process improvement and a consistent, institutionalized approach to implementing and evaluating it, FAA cannot ensure that key projects will continue to improve systems acquisition and development capabilities. 
As a result, FAA will continue to risk the project management problems--including cost overruns, schedule delays, and performance shortfalls--that have plagued past acquisitions.
IRS’s key filing season efforts are processing electronic and paper individual income tax returns and issuing refunds, as well as providing assistance or services to taxpayers. As already noted, processing and assistance were complicated this year by three tax system changes: TETR, the split refund option, and enactment in December 2006 of tax law changes. From January 1 through March 30, 2007, IRS processed 76.8 million returns, about the same number as last year, and issued 68.3 million refunds for $163.4 billion compared to 66.7 million refunds for $154.4 billion at the same time last year. Of all refunds, 69.3 percent were directly deposited into taxpayers’ accounts, up 6.2 percent over the same time last year. Direct deposits are faster and more convenient for taxpayers than mailing paper checks. According to IRS data and officials, performance is comparable to last year. IRS is meeting most of its performance goals, including deposit error rate, which is the percentage of deposits applied in error, such as being posted to the wrong tax year. Groups and organizations we spoke with, including the National Association of Enrolled Agents, the American Institute of Certified Public Accountants, and a large tax preparation company, corroborated IRS’s view that filing season performance is comparable to last year.

IRS uses two systems for storing taxpayer account information—the antiquated Master File legacy system and CADE. The latest release of CADE became operational in early March, 2 months behind schedule because of problems identified during testing. IRS had originally planned to post 33 million taxpayer returns to CADE and the remaining 100 million individual returns on the legacy system. However, as a result of the delay, officials expect to post approximately 17 to 19 million taxpayer returns to CADE. Although this is significantly less than planned, it is almost two and a half times the approximately 7.4 million taxpayer accounts posted last year on CADE. Taxpayers eligible for a refund this year whose returns are posted to CADE will benefit from CADE’s faster processing, receiving their refunds 1 to 5 days faster for direct deposit and 4 to 8 days faster for paper checks than if their returns had been processed on the legacy system. The remaining 14 to 16 million returns that were to have been processed on CADE were instead processed by the legacy system and thus did not receive the benefit of faster refunds. The CADE setback may impact IRS’s ability to deliver the expanded functionality of future versions of CADE, thus delaying the transition to the new processing system (discussed further in the BSM section of this testimony).

The growth rate for electronic filing is up from the same period last year. As of March 30, over 56.9 million (74.1 percent) of all individual income tax returns were filed electronically. This is up 5.8 percent over the same time last year and exceeds the previous year’s growth of 3.3 percent. We previously reported that state mandates for electronic filing of state tax returns also encourage electronic filing of both state and federal tax returns, and last year we suggested that Congress consider mandating electronic filing by paid tax preparers meeting criteria such as a threshold for number of returns filed. Last year, electronic filing of federal returns increased 27 percent for the three states (New York, Connecticut, and Utah) with new 2006 mandates.
This year, state mandates are likely to continue to show a positive effect on federal electronic filing because, with the addition of West Virginia, 13 states now have state mandates. Compared to processing paper returns, electronic filing reduces IRS’s costs by reducing staff devoted to processing. In 2006, IRS used almost 1,700 (36 percent) fewer staff years for processing paper tax returns than in 1999, as shown in figure 1. IRS estimates this saved the agency $78 million in salary, benefits, and overtime in 2006. Electronic filing also improves service to taxpayers. Returns are more accurate because of built-in computer checks and reduced transcription errors (paper returns must be transcribed into IRS’s computers—a process that inevitably introduces errors). Electronic filing also provides faster refunds.

Although electronic filing continues to grow, taxpayers’ use of the Free File program continues to decline. The Free File program, accessible through IRS’s Web site, is an alliance of companies that have an agreement with IRS to provide free online tax preparation and electronic filing on their Web sites for taxpayers below an adjusted gross income ceiling of $52,000 in 2007. About 95 million (70 percent) of all taxpayers are eligible for Free File. Under the agreement, companies are not allowed to offer refund anticipation loans and checks, or other ancillary products, to Free File participants. Although IRS has increased its marketing efforts, the agency has not been successful in increasing Free File use. As of March 17, 2007, IRS processed about 2.6 million Free File returns, which is a decrease of 5.2 percent from the same period last year. While all 19 companies participating in the Free File program allow for TETR requests, only 3 of the 19 companies offer Form 1040EZ-T requests.

We recently reported to this Committee on states’ experience with return preparation and electronic filing on their Web sites. These systems, called I-file, provide taxpayers with another option for preparing and electronically filing their tax returns. To the extent that the I-file systems convert taxpayers from paper to electronic filing, the costs of processing returns are reduced. For the eight states we profiled, I-file benefits and costs were relatively modest. While state I-file systems generated benefits, such as increased electronic filing, the overall benefits were limited by low usage, which ranged from about 1 percent to just over 5 percent of eligible taxpayers. Restrictions on taxpayer eligibility and system features helped keep costs modest. States varied in whether they used contractors to develop and operate the I-file system. For the states we profiled, it is unclear whether benefits were greater than costs, in part because of the low number of taxpayers who converted from paper to electronic filing. IRS’s potential to realize net cost savings from an I-file system depends on the costs of developing the system and the number of taxpayers converted from paper. IRS’s costs to provide a new I-file service could be higher than states’ for several reasons: (1) the federal tax system is more complex, (2) unlike some states that already had transactional Web sites, IRS would need to develop the capability to receive tax returns on its Web site, and (3) developing an I-file system could further stretch IRS’s capability to manage systems development, an area we have designated high risk since 1995.
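To put rough numbers on this tradeoff, the sketch below computes how many paper-to-electronic conversions an I-file system would need before per-return savings covered system costs. The savings figure of about $2.36 per converted return and the paper-filer counts are those cited in the next paragraph; the annual system cost is a placeholder assumption, not an estimate from GAO or IRS.

```python
# Break-even sketch for a federal I-file system. The per-return savings and the
# paper-filer counts come from figures cited in this testimony; the annual
# system cost is a placeholder assumption, not an actual estimate.

SAVINGS_PER_CONVERTED_RETURN = 2.36      # dollars saved per return converted from paper
ASSUMED_ANNUAL_SYSTEM_COST = 20_000_000  # hypothetical development and operating cost

break_even = ASSUMED_ANNUAL_SYSTEM_COST / SAVINGS_PER_CONVERTED_RETURN
print(f"Conversions needed to break even: {break_even:,.0f}")

# Context: roughly 58 million returns were filed on paper, and over 13 million
# of those were prepared on a computer but printed and mailed in.
paper_returns = 58_000_000
computer_prepared_paper = 13_000_000
print(f"Share of all paper filers needed:        {break_even / paper_returns:.1%}")
print(f"Share of computer-prepared paper filers: {break_even / computer_prepared_paper:.1%}")
```

Under this placeholder cost, well over half of the computer-prepared paper filers would have to switch before savings covered costs, which is one reason the low take-up of the similar Free File program, noted below, matters for any estimate.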
Whether IRS achieves a net cost savings depends on the number of individuals converted from paper to electronic filing and on the savings per converted return, which IRS estimates to be $2.36. It is uncertain how many of the 58 million taxpayers who filed on paper would convert. The over 13 million taxpayers who self-prepare their returns on a computer but print them out and mail them to IRS are an attractive target for I-file because they already have access to a computer and may be more willing to try I-file. However, IRS’s Free File program, designed to attract similar taxpayers, had low use in 2006, with only 4 million users (about 3 percent of total taxpayers and 4 percent of eligible taxpayers).

TETR and split refund volume have been less than IRS projected. Almost 69 percent of individuals who filed individual income tax returns by the end of March have requested TETR, although all who paid the excise tax were eligible for the refund. IRS projected that 10 to 30 million individuals who did not have a tax filing obligation could claim TETR. Approximately 410,000 individuals from this group have asked for a TETR refund (2.8 percent of the 14.5 million IRS expected by this time). As of March 24, fewer than 61,000 individual taxpayers chose to split their refunds into different accounts out of the 44.8 million taxpayers who had their refunds directly deposited. This volume compares to the 3.8 million IRS projected for the filing season. IRS delayed processing a small number of returns claiming tax extender provisions until February 3 to complete changes to its tax processing systems.

The number of calls to IRS’s toll-free telephone lines has been less than last year and is significantly less than in 2002 for both automated and live assistance (see table 1). Similar to last year, IRS assistors answered about 40 percent of the total calls, while the rest of the calls were answered by an automated menu of recordings. Taxpayers’ ability to access IRS’s telephone assistors is somewhat less than last year, but IRS is meeting its goals. As shown in table 2, the percentage of taxpayers who attempted to reach an assistor and actually got through and received services—referred to as the level of service—was one percentage point less than the same time period last year. This level of performance is slightly greater than IRS’s fiscal year goal of 82 percent, which is the same as last year’s goal. Average speed of answer, which is the length of time taxpayers wait to get their calls answered, is just over 4 minutes, almost 40 percent longer than last year, but is better than IRS’s annual goal of 4.3 minutes. Taxpayer disconnects, the calls abandoned by taxpayers waiting to speak with an assistor, increased 12.3 percent to about 1.4 million calls compared to the same time period last year. While disconnects initiated by IRS are a smaller percentage of all calls it receives, those disconnects were down from approximately 491,000 at this time last year to 148,000 (a 70 percent decline). Using a statistical sampling process, IRS estimates the accuracy of telephone assistors’ responses to tax law and account questions to be comparable to the same time period last year. IRS officials noted that there was unprecedented hiring for fiscal year 2007, and while every employee working tax law applications completes a requisite certification process, new employees will be less productive than seasoned employees.
IRS has implemented several initiatives, such as targeted monitoring of staff and mini-training sessions, to assist the new hires. IRS officials reported that tax system changes have had minimal impact on telephone operations so far this filing season. TETR-related calls are a small fraction of what IRS projected. Between January 1 and March 10, 2007, IRS expected 7.5 million TETR-related calls, but received about 370,000. This represented 1.8 percent of total calls received by IRS. IRS hired 650 full-time equivalents in fiscal year 2007, with the expectation that those hires would be used to cover anticipated attrition in 2008. Their first assignment was answering TETR telephone calls. They were also trained to handle other accounts calls and paper inventory should the demand for TETR assistance not materialize. IRS anticipated little impact on telephone service from the split refund option and tax provision extenders. For split refunds, IRS anticipated it would receive about 7,000 calls compared to the 70 million total calls it receives each year. IRS did not have projections for tax provision extenders.

Use of IRS’s Web site has increased so far this filing season compared to prior years, except for downloads of forms and publications and tax law questions. From January 1 through February 28, IRS’s Web site was visited more often and the number of searches increased. The number of downloaded forms and publications has decreased 14 percent over the same period compared to last year. According to IRS officials, it is too early in the filing season to determine why downloads have decreased. In terms of new features, IRS added a state deduction calculator this filing season, which IRS wants to use as a new standard for developing other online calculators. Web site assistance is important because it is available to taxpayers 24 hours a day and it is less costly to provide than telephone and walk-in assistance. (Table 3 summarizes IRS Web site use for 2006 and 2007; data are in thousands.)

In addition to the Free File program, IRS’s Web site offers several important features, such as Where’s My Refund, which allows taxpayers to check on the status of their refunds. This year, the feature allows taxpayers to check on the status of split refunds, and tells the taxpayer if one or more of the deposits were returned from the bank because of an incorrect routing or account number. However, for certain requests, the feature is not useful. For example, IRS stopped some refunds related to TETR requests, but Where’s My Refund informed taxpayers that their refunds had been issued. Further, if taxpayers make a mistake calculating the amount of their refund, the feature indicates that IRS corrected the refund amount but does not show the new amount. IRS is considering providing more information about taxpayer accounts on its Web site as part of its strategy to improve taxpayer services at reduced costs.

There is further evidence that IRS’s Web site is performing well, as these examples show. According to the American Customer Satisfaction Index, IRS’s Web site is scoring above other government agencies, nonprofits, and private sector firms for customer satisfaction (74 for IRS versus 72 for all government agencies surveyed and 71 for all Web sites surveyed). An independent weekly study by Keynote, a company that evaluates Web sites, reported that IRS’s Web site has repeatedly ranked in the top 6 out of 40 government agency Web sites evaluated in terms of average download time.
Last year, IRS consistently ranked second for the same time period. Average download time remained about the same for IRS compared to last year, indicating that IRS is not performing worse but that other government agencies are performing better. On the basis of our own searches, we found IRS’s Web site to be readily accessible, easy to navigate, and easy to search.

As of March 17, 2007, approximately 2 million taxpayers used IRS’s 401 walk-in sites, which is comparable to the same period last year. Figure 2 shows the trend in walk-in site use for the entire filing season, including a slight projected decline in 2007. At walk-in sites, staff provide taxpayers with information about their tax accounts, answer a limited scope of tax law questions related, for example, to income and filing status, and provide limited tax return preparation assistance. As of March 10, 6,700 taxpayers have requested TETR on Form 1040EZ-T at walk-in sites, which is 5.3 percent of the 126,000 individuals IRS expected. IRS officials attribute this year’s projected decline in walk-in use to taxpayers’ increased use of tax preparation software and IRS.gov. This decline has allowed IRS to devote 4 percent fewer full-time equivalents compared to last year for walk-in assistance (down from 187 to 179 full-time equivalents).

Volunteer sites, often run by community-based organizations and staffed by volunteers who are trained and certified by IRS, do not offer the range of services provided at walk-in sites. Instead, volunteer sites focus on preparing tax returns primarily for low-income and elderly taxpayers and operate chiefly during the filing season. The number of taxpayers getting return preparation assistance at over 11,000 volunteer sites has increased to approximately 1.3 million, up 8 percent from last year and continuing a trend since 2001. Although no projections have been made for TETR claims, over 33,000 taxpayers have claimed this refund at these locations. We have reported that the shift of taxpayers from walk-in to volunteer sites is important because it has allowed IRS to transfer time-consuming services, such as return preparation, from IRS to other less costly alternatives that can be more convenient for taxpayers.

While IRS is collecting better data on the quality of service at walk-in sites, concerns about the quality of the data and service remain. According to IRS, it is measuring the accuracy of tax law and accounts assistance. IRS has reported a goal for tax law accuracy, and plans to use data collected for 2007 to set an annual goal for accounts accuracy. While IRS provides return assistance for 125,000 taxpayers, it lacks information on the accuracy of that assistance. For volunteer sites, as of March 2, for a small non-statistical sample, IRS reported a 69 percent accuracy rate for return preparation, compared to its goal of 55 percent. Independent from IRS, but using similar methods, TIGTA showed a 60 percent accuracy rate.

TETR is the only one of the three tax changes that created new compliance concerns for IRS (filers could request greater TETR amounts than they are entitled to). The split refund option does not create compliance concerns for IRS since it relates to the accounts into which taxpayers want their refunds deposited rather than to complying with tax provisions. Since the provisions extending the tax laws already existed, IRS anticipates that any compliance concerns for 2006 returns will be the same as for previous years’.
Before the filing season began, IRS developed a plan to audit suspected TETR overclaims before issuing refunds. IRS’s plan for TETR was consistent with good management practices identified in previous GAO reports. IRS’s plan included appointing an executive, developing an implementation plan for TETR that included standard amounts that individuals could request, developing a compliance plan to select TETR requests for audit, and monitoring and evaluating compliance by using real-time data to adjust TETR compliance efforts. For example, each week, IRS reviews the requests for TETR and selects some for audit and revises the criteria for audit selection as necessary. As of March 24, about 211,000 individuals had requested the actual amount of telephone excise tax paid for a total of $98.8 million. IRS selected about 5 percent of these requests for audit, involving about $29 million. IRS has closed four of the individual audits with the taxpayer agreeing to accept the standard amount, and has not completed the remaining individual audits or any of the business audits. About 189,000 businesses had requested TETR for a total of about $74.7 million. IRS selected about 560 for audit, involving about $5.6 million. IRS reassigned about 77 full-time equivalent staff from discretionary audits and earned income tax credit audits to conduct TETR audits. Additionally, Criminal Investigation has devoted 13 full-time equivalent staff to TETR activities in 2007.

Many taxpayers choose to pay others to prepare their tax returns rather than prepare their own returns. Sixty-two percent of all individual tax returns filed for the 2006 filing season were prepared by a paid preparer. In most states, anyone can be a paid preparer regardless of education, training, or licensure. However, there are different types of preparers. Paid preparers who hold professional certificates include certified public accountants (CPAs) and attorneys. Other preparers vary in their backgrounds. Some have extensive training and experience and others do not.

In 2003 we reported to this Committee that while many taxpayers who used paid preparers believed they benefited from doing so, some were poorly served. Last year we reported to this Committee on errors made by commercial chain preparers, including the results of undercover visits to 19 locations. In our visits to 19 outlets of several commercial chain preparers, we found that paid preparers made mistakes in every one of our visits, with tax consequences that were sometimes significant. The errors resulted in unwarranted extra refunds of up to almost $2,000 in five instances, while in two cases they cost the taxpayer over $1,500. Some of the most serious problems involved preparers not reporting business income in 10 of 19 cases; not asking about where a child lived or ignoring our answer to the question and, therefore, claiming an ineligible child for the earned income tax credit in 5 out of the 10 applicable cases; failing to take the most advantageous postsecondary education tax benefit in 3 out of the 9 applicable cases; and failing to itemize deductions at all or failing to claim all available deductions in 7 out of the 9 applicable cases.

At the time, IRS officials responded that, had our undercover investigators been real taxpayers filing tax returns, many of the preparers would have been subject to penalties for such things as negligence and willful or reckless disregard of tax rules, and some cases might have risen to the level of criminal prosecution for willful preparation of a false or fraudulent return.
The taxpayers in these cases would also have been potentially exposed to IRS enforcement action. The limited data did not permit observations about the quality of the work of paid tax preparers in general. Undoubtedly, many paid preparers do their best to provide their clients with tax returns that are both fully compliant with the tax law and cause them to neither overpay nor underpay their federal income taxes.

IRS and the paid preparer community have taken some actions as a result of our work. After we provided the results of our 19 visits to IRS, IRS determined that 4 of these cases warranted a Program Action Case. In a Program Action Case, IRS selects 30 tax returns from a preparer and audits them to look for a pattern of compliance problems. IRS officials told us that these audits would begin in April 2007. Other cases were referred to the office responsible for monitoring earned income tax credit compliance, and we have been told that 10 preparers that we visited will receive visits to check for compliance with the due diligence requirements of that program. IRS also referred the cases to the office that monitors electronic filing compliance. We also presented our findings at all six of IRS’s nationwide tax forums last year, which are large educational conferences for the paid preparer community. In addition, we have been told that some tax preparation chains and preparer organizations have incorporated the results of our work into their educational materials. Finally, we recommended that IRS conduct research to determine the extent to which paid preparers live up to their responsibilities to file accurate and complete tax returns based on information they obtain from their customers. IRS officials have described plans to develop data to use to research paid preparer compliance issues, including whether tax preparers who are noncompliant themselves are more likely to prepare client returns that are noncompliant. To date, this research has not been completed. While this may be useful research, we do not believe such research would determine the extent to which paid preparers live up to their responsibilities.

Recent suits filed by the Justice Department highlight the obligations of paid preparers. The Justice Department filed suits to stop fraudulent return preparation at more than 125 outlets of one preparation chain in four states for allegedly taking part in preparation scams that led to fraudulent returns. Because they help the majority of taxpayers prepare their returns, paid preparers are a critical quality control checkpoint for the tax system. Due diligence by paid preparers has the potential to prevent noncompliance and reduce IRS’s cost and intrusiveness.

Business Systems Modernization (BSM) is critical to supporting IRS’s taxpayer service and enforcement goals and reducing the tax gap. For example, BSM includes projects to allow taxpayers to file and retrieve information electronically and to provide technology solutions to help reduce the backlog of collections cases. Despite progress made in implementing BSM projects and improving modernization management controls and capabilities, significant challenges and serious risks remain, and further program improvements are needed, which IRS is working to address. Over the past year, IRS has made further progress in implementing BSM projects and in meeting cost and schedule commitments, but two key projects experienced significant cost overruns during 2006—CADE and Modernized e-File.
During 2006 and the beginning of 2007, IRS deployed additional releases of the following modernized systems that have delivered benefits to taxpayers and the agency: CADE, Modernized e-File, and Filing and Payment Compliance (a tax collection case analysis support system). Each of the five associated project segments delivered during 2006 was completed on time or within the targeted 10 percent schedule variance threshold, and two of them were also completed within the targeted 10 percent variance threshold for cost. However, one segment of the Modernized e-File project as well as a segment of the CADE project experienced cost increases of 36 percent and 15 percent, respectively. According to IRS, the cost overrun for Modernized e-File was due in part to upgrading infrastructure to support the electronic filing mandate for large corporations and tax-exempt organizations, which was not in the original projections or scope.

IRS has also made significant progress in implementing our prior recommendations and improving its modernization management controls and capabilities, including efforts to institutionalize configuration management procedures and develop an updated modernization vision and strategy and associated 5-year plan to guide information technology investment decisions during fiscal years 2007 through 2011. However, critical controls and capabilities related to requirements development and management and post-implementation reviews of deployed BSM projects have not yet been fully implemented. In addition, more work remains to be done by the agency to fully address our prior recommendation of developing a long-term vision and strategy for completing the BSM program, including establishing time frames for consolidating and retiring legacy systems. IRS recognizes this and intends to conduct further analyses and update its vision and strategy to address the full scope of tax administration functions and provide additional details and refinements on the agency’s plans for legacy system dispositions.

Future BSM project releases continue to face significant risks and issues, which IRS is taking steps to address. IRS has reported that significant challenges and risks confront its future planned system deliveries. For example, delays in deploying the latest release of CADE to support the current filing season have resulted in continued contention for key resources and will likely impact the design and development of the next two important releases, which are planned to be deployed later this year. The potential for schedule delays, coupled with the reported resource constraints and the expanding complexity of the CADE project, increases the risk of scope problems and the deferral of planned functionality to later releases. Maintaining alignment between the planned releases of CADE and the new Accounts Management Services project is also a key area of concern because of the functional interdependencies. The agency recognizes the potential impact of these project risks and issues on its ability to deliver planned functionality within cost and schedule estimates and, to its credit, has developed mitigation strategies to address them. We will, however, continue to monitor the various risks IRS identifies and the agency’s strategies to address them and will report any concerns.
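As a concrete reading of the 10 percent variance thresholds discussed above, variance can be computed as the difference between actual and planned cost (or schedule) divided by the plan. The sketch below applies that calculation to illustrative figures; the planned and actual values are placeholders, not actual CADE or Modernized e-File data.

```python
# Cost and schedule variance check against a 10 percent threshold.
# Planned and actual figures are illustrative placeholders, not BSM project data.

THRESHOLD = 0.10  # the 10 percent variance threshold cited above

def variance(planned, actual):
    """Variance as a fraction of the plan; positive values are overruns."""
    return (actual - planned) / planned

segments = [
    ("Segment A cost ($ millions)", 50.0, 68.0),  # about a 36 percent overrun
    ("Segment B cost ($ millions)", 40.0, 46.0),  # about a 15 percent overrun
    ("Segment C schedule (months)", 12.0, 12.9),  # within the threshold
]

for name, planned, actual in segments:
    v = variance(planned, actual)
    status = "exceeds" if v > THRESHOLD else "within"
    print(f"{name}: {v:+.0%} ({status} the 10 percent threshold)")
```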
IRS has also made further progress in addressing high-priority BSM program improvement initiatives during the past year, including efforts related to institutionalizing the Modernization Vision and Strategy approach and integrating it with IRS’s capital planning and investment control process, hiring and training 25 entry-level programmers to support development of CADE, developing an electronic filing strategy through 2010, establishing requirements development/management processes and guidance (in response to our prior recommendation), and defining governance structures and processes across all projects. IRS’s high-priority improvement initiatives continue to be an effective means of assessing, prioritizing, and incrementally addressing BSM issues and challenges. However, more work remains for the agency to fully address these issues and challenges. In addition, we recently reported that IRS could improve its reporting of progress in meeting BSM project scope (i.e., functionality) expectations by including a quantitative measure in future expenditure plans. This would help to provide Congress with more complete information on the agency’s performance in implementing BSM project releases. IRS recognizes the value of having such a measure and, in response to our recommendation, is in the process of developing it.

Continued compliance research is essential to IRS’s ability to effectively focus its service and compliance efforts, and we have long been a supporter of such research. Well-designed compliance research gives IRS and Congress an important measure of taxpayer compliance, and it allows IRS to better target enforcement resources toward noncompliant taxpayers. Taxpayers benefit as well, because properly targeted audits mean fewer audits of compliant taxpayers and more confidence by all taxpayers that others are paying their fair share.

IRS develops its tax gap estimates by measuring the rate of taxpayer compliance—the degree to which taxpayers complied with their tax obligations fully and on time. That rate is then used, along with other data and assumptions, to estimate the dollar amount of taxes not timely and accurately paid. For instance, IRS most recently estimated a gross tax gap of $345 billion for tax year 2001, with underreporting of income representing over 80 percent of the gap. IRS developed these estimates using compliance data collected through its 2001 National Research Program (NRP) study, which took several years to plan and execute. In that study, IRS reviewed the compliance of a random sample of about 46,000 individual taxpayers and used those results to estimate compliance for the population of all individual taxpayers and identify sources of noncompliance. IRS also used the 2001 NRP results to update its computer models for selecting likely noncompliant tax returns and used that model to select cases beginning with returns filed in 2006. IRS’s fiscal year 2008 budget request states that this improved targeting of audits has increased dollar-per-case yield and reduced “no change” audits of compliant taxpayers. IRS now has a second NRP study underway, this one looking at 5,000 S corporation tax returns filed in 2003 and 2004.

IRS’s fiscal year 2008 budget request includes a proposal for a rolling NRP sample of individual taxpayers and a dedicated cadre of examiners to conduct these research audits. Using a rolling sample, IRS plans to replicate the 2001 NRP study by conducting audits of a smaller sample size.
At the end of 5 years, IRS would have a set of results comparable to the 2001 study and could then continue to update the study annually by sampling the same number of taxpayers, dropping the oldest year from the sample, and adding the newest year’s results. We support this approach. In previous GAO products, we have observed that doing compliance studies once every few years does not give IRS or others information about what is happening in the intervening years, and that a rolling sample should reduce costs by eliminating the need to plan entirely new studies every few years or more and train examiners to carry them out. Compliance research in this way will also give Congress, IRS, and other stakeholders more frequent and more current information about IRS’s progress toward its long-term compliance goals.

Mr. Chairman, this concludes my prepared statement. We would be happy to respond to questions you or other members of the Committee may have at this time.

For further information regarding this testimony, please contact James R. White, Director, Strategic Issues, at 202-512-9910 or whitej@gao.gov, or David A. Powner, Director, Information Technology Management Issues, at 202-512-9296 or pownerd@gao.gov. Contacts for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Joanna Stamatiades, Assistant Director; Amy Dingler; Timothy D. Hopkins; Robyn Howard; Matthew Kalmuk; David L. Lewis; Frederick Lyles; Jennifer McDonald; Signora May; Veronica Mayhand; Paul B. Middleton; Sabine R. Paul; Cheryl Peterson; Neil Pinney; Shellee Soliday; and Tina L. Younger.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Internal Revenue Service's (IRS) tax filing season performance is a key indicator of how well IRS serves taxpayers. This year's filing season was expected to be risky because of tax system changes, including the telephone excise tax refund (TETR), which can be requested by all individuals and entities that paid the excise tax. GAO was asked to describe IRS's service to taxpayers so far this filing season (including the impact of this year's tax system changes). GAO was also asked to provide updates of previous assessments of the performance of paid tax preparers, IRS's efforts to modernize its information systems, and what IRS is doing to better measure taxpayer compliance. GAO compared IRS's filing season performance to prior years' performance and to IRS's goals, and based its analyses of paid preparers, information systems, and compliance research efforts on recent reports.

IRS's interim filing season performance is improved in some areas. The number of individual income tax returns processed to date is comparable to last year, and the number filed electronically is almost 6 percent greater. Taxpayers' ability to reach an IRS telephone assistor was somewhat less than last year, but the accuracy of answers to taxpayers' questions was about the same. Use of IRS's Web site increased, which is important because it is available 24 hours a day and is less costly than some other types of assistance. However, there have been challenges. Taxpayers' use of the Free File program--which provides free tax preparation and electronic filing through IRS's Web site--is 5.2 percent below last year at this time. Also, the Customer Account Data Engine (CADE), a modern tax return processing system, became operational 2 months behind schedule. IRS still expects to post 17 to 19 million taxpayer accounts to CADE, which is about two and a half times as many as last year. Tax system changes have not had a significant effect on filing season performance. For example, IRS has received a fraction of the TETR-related telephone calls it expected to date.

Because paid preparers prepared over 62 percent of all individual income tax returns last year, they are a critical quality control for tax administration by helping to prevent noncompliance. Last year, GAO reported to this Committee about errors made by paid preparers. Some of the most serious errors involved not reporting business income and failing to itemize deductions. GAO's limited work last year did not permit observations about the quality of the work of paid tax preparers in general, and undoubtedly many preparers do their best to prepare tax returns that are compliant with tax laws. In response to GAO's report, IRS has scheduled compliance reviews of some preparers. In addition, recent Justice Department suits to stop fraudulent return preparation at more than 125 outlets of one preparation chain for allegedly taking part in tax preparation scams highlight the importance and obligations of paid preparers.

Despite progress made in implementing Business Systems Modernization projects, including CADE, and improving modernization management controls and capabilities, significant challenges and serious risks remain. Delays in the latest release of CADE resulted in continued contention for key resources and will likely impact future releases. Also, IRS has more to do to fully address GAO's prior recommendations, such as developing a long-term strategy that would include time frames for retiring legacy computer systems.
GAO has long supported IRS's research to better understand taxpayers' compliance. IRS's fiscal year 2008 budget request includes a proposal for annual research instead of larger but intermittent efforts. GAO considers this to be a good approach because it will allow compliance data to be continually refreshed and should reduce costs by eliminating the need to plan new studies every few years.
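One way to picture the rolling sample endorsed here is as a fixed five-year window of research audits: each year the newest tax year's cohort is added and the oldest is dropped, so the pooled data stay current without planning an entirely new study. The sketch below uses a placeholder cohort size; IRS's planned sample sizes are not stated in this testimony.

```python
# Rolling compliance-study sample: keep the most recent five annual cohorts,
# adding the newest year and dropping the oldest. Cohort size is a placeholder.

from collections import deque

WINDOW_YEARS = 5
ANNUAL_COHORT = 9_000  # hypothetical audits per year, not IRS's actual figure

window = deque(maxlen=WINDOW_YEARS)  # the oldest year falls out automatically

for tax_year in range(2007, 2014):
    window.append(tax_year)
    pooled = len(window) * ANNUAL_COHORT
    print(f"After tax year {tax_year}: window covers {list(window)}, {pooled:,} audits pooled")
```

After the fifth year the window holds five cohorts, giving a pooled data set of the kind the testimony describes as comparable to the 2001 study, and each later year refreshes one-fifth of it.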
To ensure the safety, security, and reliability of the nation’s nuclear weapons stockpile, the National Nuclear Security Administration (NNSA) relies on contractors who manage and operate government-owned laboratories, production plants, and a test site. NNSA’s eight enterprise sites each perform a different function, all collectively working toward fulfilling NNSA’s nuclear weapons-related mission. Figure 1 shows the locations of the sites and describes their functions. To provide support and oversight, NNSA locates between about 30 and 110 NNSA staff in a site office at each facility, and also draws on the resources of NNSA staff in headquarters and the Albuquerque complex. According to NNSA officials, this support and oversight requires that some NNSA staff have critical skills comparable to the contractors they support and oversee. For example, NNSA staff may need technical knowledge and expertise to accept and review deliverables from management and operating (M&O) contracts and, when presented with options, be able to determine how best to proceed to meet contract goals, mission, and objectives. They may also need skills related to the safe operation of sensitive defense nuclear facilities, such as expertise in occupational safety and fire safety. For example, according to NNSA officials at the Livermore Site Office, most of the staff in critical skills positions there are focused on ensuring safety at the laboratory’s nuclear facilities.

Maintaining critical skills within its workforce is not a challenge unique to NNSA. Every 2 years, we provide Congress with an update on GAO’s high-risk program, under which GAO designates certain government operations as high risk due to their greater vulnerabilities to fraud, waste, abuse, and mismanagement, or their need for transformation to address economy, efficiency, or effectiveness challenges. In 2001, GAO designated strategic human capital management across the entire federal government as a high-risk area, in part because critical skill gaps could undermine agencies’ abilities to accomplish their missions. We have also reported in the past that NNSA and its predecessor organizations’ record of inadequate management and oversight of contractors has left the government vulnerable to fraud, waste, abuse, and mismanagement. Contract management at the Department of Energy (DOE) has been on GAO’s high-risk list since 1990, the first year our high-risk list was published. Progress has been made, but NNSA and DOE’s Office of Environmental Management remain on our high-risk list.

As of 2011, our most recent update of the high-risk list, significant steps had been taken to address some of the federal government’s strategic human capital challenges. Strategic human capital management was designated a high-risk area 10 years earlier governmentwide and remains on the high-risk list because of a need for all federal agencies to address current and emerging critical skills gaps that are undermining, or could undermine, agencies’ abilities to meet their vital missions. Specifically, across the federal government, we reported that resolving remaining high-risk human capital challenges will require three categories of actions:

Planning. Agencies’ workforce plans must define the root causes of skills gaps, identify effective solutions to skills shortages, and provide the steps necessary to implement solutions.

Implementation. Agencies’ recruitment, hiring, and development strategies must be responsive to changing applicant and workforce needs and expectations and also show the capacity to define and implement corrective measures to narrow skill shortages.
Measurement and evaluation. Agencies need to measure the effects of key initiatives to address critical skills gaps, evaluate the performance of those initiatives, and make appropriate adjustments. NNSA and its M&O contractors have developed and implemented multifaceted strategies to recruit, develop, and retain both the federal and contractor workforces needed to preserve critical capabilities in the enterprise. NNSA focuses on attracting early-career hires with competitive pay and development opportunities, and the agency is reassessing future enterprisewide workforce needs. M&O contractors’ strategies vary from site to site, but each site focuses on maintaining competitive compensation packages. NNSA takes various steps to recruit, develop, and retain a federal workforce with the necessary critical skills. NNSA’s recruitment strategies are focused primarily on students and recent graduates in science and engineering programs. NNSA generally relies on two key programs to develop its critically skilled workforce––one that identifies needs and another that identifies the qualifications necessary to meet them. Its retention efforts focus on competitive pay, flexible schedules, and development opportunities. NNSA is also undertaking a comprehensive reassessment to ascertain future federal workforce requirements. NNSA has several programs targeted toward recruiting students and recent graduates, primarily in science and engineering fields. NNSA began these programs within the past 7 years as a means of succession planning. NNSA’s programs focused on recruiting students include the following: The Student Temporary Employment Program is a summer internship program for high school through graduate students of any discipline. Students participating in this program receive a salary while working at NNSA. The Student Career Experience Program is a program for graduate students in science, engineering, and other fields. This program aims to persuade skilled graduates to pursue careers in NNSA. Participants work for NNSA full-time during school breaks and part-time the rest of the year. These positions can be converted to full-time competitive appointments when participants receive their degrees. The Minority Serving Institutions Program aims to strengthen the diversity of the applicant pool by exposing younger minority students to technical fields and NNSA work early in their educational careers. The program begins working with students in junior high school, continues through college entry, and includes cooperative agreements to enhance science, technology, engineering, and mathematics curricula at all levels at 29 minority-serving institutions. Since the program’s inception in 2007, 167 minority students have participated in hands-on research at NNSA site offices and laboratories. NNSA’s key program for recruitment of recent graduates is its Future Leaders Program. NNSA established the program in 2005 to recruit recent U.S. citizen graduates of bachelor’s and master’s programs, primarily in engineering and science. The Future Leaders Program is a 2-year development program that requires participants to complete classroom and on-the-job training, in addition to developmental assignments outside their home office. NNSA hires about 30 recent graduates into this program each year. Applicants are hired into the program offices where they will be permanently placed and are selected based on each program office’s skills needs.
According to NNSA officials, approximately two-thirds of the 175 program participants hired from 2005 through 2010 have engineering and science backgrounds that enable them to develop the technical critical skills NNSA needs to provide support and oversight of contractors. As their careers advance, some program participants are expected to become more focused on developing deep expertise in a particular technical area, and others will gravitate toward more senior management and leadership positions. NNSA officials told us they consider the program very successful because nearly 90 percent of all those hired into the program since 2005 have remained at NNSA. NNSA relies primarily on two programs to develop a federal workforce with the requisite critical skills––the Federal Technical Capability Program and the Technical Qualification Program (TQP). NNSA employees’ critical skills generally fall into two broad categories: (1) technical skills related to managing the safe operation of nuclear facilities, and (2) technical knowledge and expertise necessary to accept and review contract deliverables. To ensure that it has sufficient numbers of federal employees with critical skills to manage the safe operation of nuclear facilities, NNSA relies on the Federal Technical Capability Program––a DOE-wide effort to define requirements and responsibilities for meeting the department’s commitment to recruiting, developing, and retaining the technically competent workforce necessary to achieve this mission. To implement the goals of the Federal Technical Capability Program at the site level, NNSA senior managers conduct annual workforce analyses and develop staffing plans that identify critical technical capabilities and positions that ensure the safe operation of nuclear facilities. For example, NNSA relies on senior managers to identify the fire safety needs for the National Ignition Facility, a stadium-size research facility at Lawrence Livermore National Laboratory, and to identify how many fire protection engineers are required to meet these needs. To help meet these goals, DOE established the TQP, which sets technical qualification requirements for NNSA positions related to the safe operation of nuclear facilities and tracks federal employees’ progress in meeting these qualifications. More specifically, the TQP documents how NNSA tailors qualification standards for these positions, establishes time and duty limitations for qualification, describes the process for identifying learning activities to achieve competency for specific job duties, and establishes methods for evaluating qualification. NNSA officials told us that only federal employees in positions related to managing the safe operation of nuclear facilities are required to participate in the TQP. However, NNSA managers may also subject employees who accept and review contract deliverables to TQP requirements to help ensure that they have the skills necessary to evaluate the technical criteria of contract deliverables. Beyond the TQP, according to NNSA officials, human capital managers rely on annual human capital needs assessments to inform subsequent recruitment and hiring efforts to ensure that the requisite mix of skills is present in the federal workforce. These assessments consider attrition and other demographic data, succession planning, and education and experience requirements. For example, NNSA officials told us that in 2011 its Office of Human Capital Management surveyed NNSA programs to identify needs for the Future Leaders Program.
As part of this survey, they analyzed attrition in the federal workforce and used the information to assist in decisions about how many engineers to hire across the enterprise through the Future Leaders Program. Recruitment efforts in 2012 will focus on finding replacements for departing engineers. NNSA’s retention strategies focus on offering new staff competitive pay, flexible schedules, and career development opportunities. Competitive pay. According to NNSA officials, NNSA’s retention efforts place a high priority on preserving the agency’s capacity to offer competitive compensation. For example, for relatively new hires, such as those hired through the Future Leaders Program, NNSA can sometimes offer as much as $6,000 in lump-sum hiring bonuses and up to $10,000 in student loan repayment in return for signing a service agreement. In some cases, NNSA is also able to offer retention bonuses of up to 25 percent of annual salary to employees who might otherwise leave federal service. In addition, NNSA has the flexibility to offer particularly desirable applicants higher starting salaries and reward top performers with higher pay. For more senior employees, according to NNSA officials, DOE and NNSA sought, and were granted, authorities by Congress to offer higher pay to staff primarily in certain engineering and science fields. Specifically, to help it retain more experienced competitive service employees with critical skills––that is, employees in regular civil service positions––Congress granted exceptions to normal hiring regulations, including salary caps, under three excepted service authorities. First, under the Department of Energy Organization Act, the Secretary of Energy is granted special excepted service hiring authorities to hire up to 200 highly skilled scientific, engineering, professional, and administrative individuals to upgrade the department’s technical and professional capabilities. NNSA can use this authority in some cases to hire senior-level employees from outside the government or difficult-to-hire administrative staff. According to NNSA officials, there are presently 50 such individuals employed by NNSA. Second, under the National Defense Authorization Act, the Secretary of Energy is also granted special excepted service hiring authorities to hire up to 200 highly skilled individuals––typically scientists, technicians, and engineers with skills related to and necessary for the operation of nuclear facilities. According to NNSA officials, there are currently about 100 such individuals employed by NNSA. Third, under the National Nuclear Security Administration Act, NNSA may hire up to 300 highly qualified scientists, engineers, and other technically skilled workers needed to support the missions of NNSA under similar excepted service hiring authorities. According to NNSA officials, NNSA has used this authority to hire and employ about 280 highly skilled individuals. NNSA officials told us that all of these flexibilities are useful and help NNSA compete with the Nuclear Regulatory Commission and national laboratories. Flexible schedules. NNSA’s retention efforts also include a flexible schedule program that gives employees the opportunity to work a nontraditional schedule or vary their work hours from day to day. For example, employees with school-aged children may opt to work more than 8 hours some days and fewer hours other days in order to accommodate school early release days. Development opportunities.
NNSA offers some employees career development opportunities such as rotational assignments and details. Integral parts of the Future Leaders Program are 30-day local rotational assignments and 60-day headquarters or field assignments away from participants’ home locations. For example, a Future Leaders Program participant based in NNSA’s Washington headquarters who is interested in a program run by Sandia National Laboratories in Albuquerque might be assigned for 60 days to related work at NNSA’s Sandia Site Office or Albuquerque complex. In addition, NNSA has implemented a program called the In-Teach Program, which focuses on knowledge preservation and transfer by providing funding to train highly skilled senior employees to become more adept at transferring knowledge and skills to less experienced, more junior employees. NNSA is currently undertaking a comprehensive reassessment and analysis of the staffing requirements for its federal workforce through 2016 in headquarters and field locations. NNSA officials told us that the reassessment is needed for strategic planning purposes and will improve NNSA’s efforts to ensure that its federal workforce has the skills necessary to carry out its missions, including technical, support, and oversight capabilities. The reassessment includes the following four phases: (1) describing and identifying organizational core competencies and the workforce required for NNSA’s future; (2) analyzing the current workforce and gaps related to future requirements; (3) developing a plan to close the gaps between future requirements and the current workforce; and (4) developing and implementing a workforce management system that is integrated with legacy Department of Energy human capital information technology systems. NNSA officials told us they expect the reassessment and resulting report to be complete in fiscal year 2013. M&O contractors’ recruitment, development, and retention strategies are site-specific. Generally, their recruitment efforts vary by the type of employee needed––particularly, whether the position requires an advanced degree. Their development efforts vary in approach but are also site-specific and face some challenges––particularly in preserving underground nuclear testing skills. Their retention efforts focus on maintaining competitive total compensation packages––salaries and benefits––but their strategies to mitigate attrition vary from site to site. NNSA’s M&O contractors have developed and implemented site-specific strategies to recruit, develop, and retain the workforces needed to preserve critical capabilities throughout the enterprise and accomplish NNSA’s mission. Accordingly, contractors have typically developed site-specific workforce planning systems that enable them to identify the kinds of candidates they need to recruit, develop, and retain in order to align their workforces with projected nuclear weapons-related work and budget resources. Using these workforce planning systems, site managers can anticipate the nuclear weapons-related work NNSA has contracted for, how it will be funded, how many staff are required, and what skills will be needed, and can avoid potential shortages in staff or skills. For example, in the course of their 2- and 5-year planning processes, managers at Sandia National Laboratories use a four-step workforce planning tool, the Workforce Acquisition Project, to anticipate critical skills hiring needs based on the expected lab-wide business outlook and attrition.
This early assessment of critical skills requirements ensures that the contractor has sufficient time to identify and recruit new staff as necessary and to give new staff time––generally 2 to 5 or more years––to develop their skills. M&O contractors’ strategies for recruitment vary according to the kinds of employees they need to hire—in particular, whether the position requires an advanced degree. For example, the weapons laboratories, which include Sandia, Los Alamos, and Lawrence Livermore National Laboratories, typically require highly skilled candidates with advanced degrees to replace physicists, engineers, and other experts who retire or leave for other jobs. M&O contractors at weapons laboratories thus focus their recruitment efforts on students and recent graduates of the nation’s leading graduate schools in science, engineering, and mathematics. Efforts to attract candidates begin with summer internship programs and continue with support for post-doctoral fellowships and direct offers of employment. Officials at Lawrence Livermore National Laboratory told us that, in addition to these efforts to recruit students and recent graduates, they also recruit at the midcareer or higher level at professional meetings in science and technology fields and through cooperative relationships with American universities and industries to broaden the prospective employee pool and enhance the intellectual vitality of the laboratory’s existing workforce. According to M&O contractor officials, the critical skills needs at the enterprise’s production plants, such as the Y-12 National Security Complex and the Pantex Plant, differ from those at the weapons laboratories, and their recruiting strategies reflect these differences. Unlike the weapons laboratories, production plants generally do not require candidates with advanced degrees; rather, candidates typically need a bachelor’s degree or, in the case of manufacturing and skilled craft positions, an associate’s degree or skills in advanced manufacturing techniques. As such, M&O contractors at production plants can generally recruit regionally for the staff they need and have less need to recruit nationally. For example, M&O contractor officials at Y-12 told us that they recruit predominantly bachelor’s-level candidates––mostly engineers––from universities within a 300-mile radius of Oak Ridge, particularly from the University of Tennessee in nearby Knoxville. Production plants are also generally well-established within their communities and focus most of their recruitment efforts for skilled manufacturing positions on the local area. For example, M&O contractor officials at the Pantex Plant told us that they have developed strong ties with local community colleges over the years and typically look for high school graduates and community college students and graduates with some specialized, skilled training or work experience. Nevertheless, according to Pantex officials, they have also taken advantage of opportunities to recruit from outside the local area, such as hiring automotive workers with machine tool experience and highly skilled plant workers from another nuclear security enterprise production facility, the Savannah River Site, in the wake of a reduction in force. M&O contractors told us their strategies for development are often linked to recruitment because appealing development opportunities can encourage candidates to accept job offers.
As with strategies for recruitment, those for development are tailored to the specific needs of each site’s workforce, but many of the M&O contracting officials we spoke with cited continuing educational opportunities and the option to move within the organization as appealing development opportunities. For example, M&O contractor officials at Sandia National Laboratories told us that offering continuous training and the opportunity to move to different jobs within different components of the laboratory was very appealing to entry-level hires. Accordingly, Sandia’s Corporate Learning and Professional Development Programs offer various training opportunities. Sandia officials told us that these opportunities help employees keep skills current, provide additional educational opportunities, and help laboratory management anticipate critical skills needs in the workforce. As part of these programs, employees can also take training offered by Sandia’s technical and compliance training group, which is focused on skills currently in demand at Sandia, or participate in university graduate degree programs, which Sandia will pay for. Lawrence Livermore National Laboratory’s Education Assistance Program provides up to $50,000 in tuition assistance for coursework toward a higher degree. Production plants also offer continuous learning and development opportunities. For example, M&O contractor officials at the Kansas City Plant told us employees are encouraged to pursue higher education in areas where the plant has a skills gap. In such cases, the contractor will pay tuition and, if the employee attends school full-time, continue to pay 70 percent of the employee’s base salary. Kansas City Plant employees may also participate in developmental programs at the entry or midcareer levels that allow participants to undertake three rotational assignments to support their targeted and tailored personal development plans. In addition, the Pantex Plant offers employees support for technical training opportunities with local colleges. The Nevada National Security Site also offers a number of developmental opportunities to its staff, including a voluntary mentoring program for all employees, assistance with career planning, various training and certification programs, and attendance at seminars and conferences. M&O contractor employees also have access to online courses and books as well as CD-based training sessions on a wide variety of topics, including supervision, management, and leadership; computer skills and certifications; communication; and mentoring. M&O contractors told us that, in their development efforts, they rely on knowledge preservation and transfer programs––including recording the performance of high-skill critical tasks, formal classroom training, on-the-job training, and mentoring programs––to preserve critical capabilities in the nuclear security enterprise. Knowledge preservation programs are focused on the physical preservation or recording of critical information and knowledge––typically in paper records, microfilm and microfiche, and various audio and video media. Knowledge transfer programs seek to ensure that experienced laboratory or production plant employees successfully pass on to newer employees the knowledge needed to replicate critical tasks. Knowledge preservation. All M&O contractors at nuclear security enterprise sites have taken steps to record critical knowledge.
These knowledge preservation programs are broadly similar from site to site, whether laboratory or production plant. For example, Los Alamos National Laboratory officials report that their archives house information on weapons designs and experiments dating to the inception of the laboratory. This information is contained in documents and other media such as film, audio and videotape, drawings, and photographs. The information housed in the archives is still relevant and is used by researchers across the enterprise. It may also be used outside the enterprise by, for example, documentary filmmakers and occupational health researchers. More recently, in the 2000s, Los Alamos gathered and developed critical information in the course of the Reliable Replacement Warhead Program—a program that explored the possibility of developing new nuclear weapons designs. Los Alamos engineers and scientists documented all decisions in the Reliable Replacement Warhead design process through written and video documentation. The other weapons laboratories have also invested in electronic records and videos to preserve critical knowledge. According to Lawrence Livermore National Laboratory officials, Livermore maintains an extensive electronic archive of papers and reports, as well as tutorial lectures by experienced weaponeers on key areas of weapons knowledge. Sandia National Laboratories also has its Knowledge Management Streaming Assets Library program, which has recorded about 1,500 hours of classified exit interviews with retiring weaponeers and made them available to current staff. M&O contractors at the weapons production plants report broadly similar efforts to preserve critical knowledge at their sites. For example, the Y-12 National Security Complex has the Knowledge Preservation Program (KPP). Similar to Sandia National Laboratories’ knowledge preservation efforts, the KPP films retiring employees as they do their work and interviews them on how they do it, then archives the videos in an electronically searchable format. As employees approach retirement, a KPP video and interview is part of the retirement checklist. These videos are evaluated for accuracy by an expert before they are entered into the KPP system. Y-12 officials told us that other NNSA sites have created videos or archives for knowledge preservation but they are not as easily accessible. M&O contractors at the Pantex Plant have undertaken similar efforts, including creating and maintaining what Pantex officials call “picture books” on weapons assembly, and interviewing experienced Pantex workers to capture their knowledge in areas such as high explosives and making these interviews available as a training tool. According to M&O contractor officials at the Nevada National Security Site, however, efforts to preserve critical knowledge regarding underground nuclear testing have faced challenges, as they have been limited and sporadic. These efforts have been complicated by two factors: (1) the need to protect vital national security information against unauthorized disclosure led to a practice of not keeping written documentation about the specifics of critical tasks; and (2) significant numbers of employees were laid off in the mid-1990s after U.S. underground nuclear testing ended. 
Until 2007, NNSA maintained a program that undertook substantial efforts to capture and record critical knowledge possessed by these workers, but NNSA and M&O contractor officials said these efforts were not comprehensive or systematic, and funding was discontinued. Knowledge transfer. M&O contractors at the weapons laboratories rely on a range of approaches to transfer knowledge, while there is more similarity among the knowledge transfer programs of M&O contractors at production plants. Specifically, each of the three weapons laboratories uses a combination of classroom training, on-the-job training, and mentoring relationships to transfer critical nuclear weapons design information, but with varying reliance on each of these three components. For example, at one end of the spectrum, Sandia National Laboratories relies most heavily on a classroom-focused curriculum––its highly regarded Weapons Intern Program. According to Sandia officials, the 11-month Weapons Intern Program succeeds in transferring such knowledge and experience through a blended learning environment, consisting of live and multimedia-based classroom instruction, individual and team research projects, hands-on activities, and off-site facility and operations tours and briefings. The live instruction is provided through a large contingent of subject matter experts in the various weapon technology, design, evaluation, production, operations, policy, and management areas. Lawrence Livermore National Laboratory is at the other end of the spectrum, relying mostly on mentoring programs and on-the-job training opportunities to transfer advanced nuclear weapon design skills to new staff. According to Lawrence Livermore officials, their approach to developing critical skills expertise is to embed new employees into work groups directly engaged in important work, with an experienced employee acting as a mentor. As new employees gain skills and experience and demonstrate their readiness, they are assigned tasks of increasing levels of complexity and responsibility. Laboratory officials stated that, in their experience, employees supporting the weapons program must be exposed to years of work in the field to acquire the knowledge and judgment needed to be fully qualified weaponeers. An extensive electronic archive of papers and reports is available, as well as tutorial lectures on key areas of weapons knowledge, but Livermore officials told us there is no substitute for hands-on experience with weapons. Los Alamos National Laboratory’s approach is not as classroom-focused as Sandia’s program, nor is it as dependent on mentoring relationships and on-the-job training as Lawrence Livermore’s. Specifically, Los Alamos officials told us that critical skills are being transferred through a combination of formal training opportunities, mentoring, and archiving programs. For example, the TITANS program, referred to informally as “nuclear design university,” is a 3-year, credential-granting program with 2 years of coursework and 1 year of thesis research and writing under the direction of a mentor. Thesis projects can either be focused on learning new modeling techniques or on mastering the simulation of above-ground experiments. For example, one knowledge transfer technique is to reanalyze old data from actual experiments to teach newer employees to use modern simulation techniques to estimate the results of real testing. The results of the student’s analysis are then compared to actual testing data.
Los Alamos officials told us this practice is a very effective method for examining how well the student has mastered the use of computer simulation techniques—a critical skill when live nuclear testing is not an option. Knowledge transfer at weapons production facilities is focused more on having employees demonstrate that they can replicate specific tasks. For example, M&O contractor officials at the Pantex Plant told us that they are taking aggressive steps to ensure that younger workers can carry on performing some of the same tasks after older workers retire. The centerpiece of the Pantex effort is the Retiree Corps. Through this program, recent retirees are brought back on a part-time basis—for a maximum of 800 hours a year, an average of a little less than 2 days a week—specifically to teach current Pantex employees how to perform their high-skill critical tasks. Retirees host talks and seminars, provide narrative explanations to accompany schematics of detailed procedures and photos, and are recorded or videotaped explaining their tasks. Pantex officials told us they verify the knowledge transfer by requiring the trainee to demonstrate that he or she can replicate the task. Again, however, the M&O contractor at the Nevada National Security Site faces some challenges. The site has an active on-the-job training program and specialized training on specific diagnostic and recording techniques relevant to underground nuclear testing. However, according to M&O contractors, funding for this program has been minimal for several years. In addition, according to Nevada National Security Site M&O contractor officials, it is challenging to preserve some of the critical skills necessary for underground nuclear testing when there is no opportunity to provide any direct experience with such testing. NNSA officials and M&O contractors told us that maintaining competitive total compensation packages—that is, combined salary and benefits—is crucial for achieving their strategies for recruiting, developing, and retaining the workforce with the skills necessary to sustain critical capabilities in the nuclear security enterprise, but that other factors are also useful in both attracting desirable candidates and mitigating attrition. For example, M&O contractor officials at Sandia National Laboratories told us that offering the highest salary is not required to attract top talent, but offering pay comparable to peer institutions is a necessity. Accordingly, NNSA officials work very closely with M&O contractors to ensure that contractor compensation remains comparable to other enterprise laboratories and plants, private laboratories, companies, and other government entities that recruit and try to retain similar talent. M&O contractors undertake compensation studies every year and comprehensive benefits evaluation surveys every 2 years. These compensation studies use data from recognized regional, national, and international salary surveys as needed. Based on these data, M&O contractors may seek permission from NNSA to pay certain employees more by submitting a special request in the Compensation Increase Plan. If the plan is accepted by NNSA, salaries will be increased. In addition to authorizing salary increases to keep M&O contractor employees’ pay competitive, NNSA will also authorize and pay for sign-on and retention bonuses, significant monetary recognition and awards programs, and special compensation packages for critical skills specialties that are especially difficult to recruit and retain.
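To make the mechanics of such a comparability review concrete, the following sketch illustrates one way a site might check its salaries against peer survey data and flag job families that could justify a special Compensation Increase Plan request. The job families, salary figures, and 95 percent threshold are invented for illustration and are not drawn from actual NNSA, contractor, or survey data.

```python
# Hypothetical sketch of a salary comparability check like the annual
# compensation studies described above. All numbers and job families are
# illustrative assumptions, not actual NNSA or M&O contractor data.

site_median_salary = {          # current median salary at the site, by job family
    "fire protection engineer": 98_000,
    "computational scientist": 125_000,
    "machinist": 62_000,
}

peer_survey_median = {          # median from regional/national salary surveys
    "fire protection engineer": 104_000,
    "computational scientist": 138_000,
    "machinist": 60_000,
}

LAG_THRESHOLD = 0.95  # assumed cutoff: flag families paying less than 95% of the peer median

for family, site_pay in site_median_salary.items():
    ratio = site_pay / peer_survey_median[family]
    if ratio < LAG_THRESHOLD:
        print(f"{family}: {ratio:.0%} of peer median; candidate for a Compensation Increase Plan request")
    else:
        print(f"{family}: {ratio:.0%} of peer median; competitive")
```

In practice, such a check would draw on the recognized regional, national, and international surveys described above, and any resulting salary adjustment would still require NNSA approval.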
The biennial benefits evaluation compares the value of M&O contractor workforce benefits to that of 15 peer competitors for the same talent. According to DOE policy, M&O contractors may offer benefits worth up to 105 percent of the value of peer institutions’ benefits. NNSA officials and M&O contractors told us that other factors are useful in both attracting desirable candidates and mitigating attrition. For example, the weapons laboratories in particular can offer scientists and engineers access to state-of-the-art equipment—such as the National Ignition Facility at Lawrence Livermore National Laboratory—and the opportunity to do cutting-edge research that cannot be done outside the enterprise due to national security restrictions. Similarly, for the three production plants located in relatively remote, nonmetropolitan locations—particularly Pantex, Y-12, and the Savannah River Site—attrition rates are lower among candidates with ties to the local area. For example, M&O contractor officials at Y-12 told us that they recruit locally to the extent possible because, historically, employees from nearby communities have been less likely to seek opportunities that would require them to relocate. These officials added that the local community is familiar with Y-12, and that about 35 percent of new applicants are employee referrals. M&O contractors have broadly similar retention initiatives. While M&O officials at all sites in the enterprise told us that competitive total compensation packages—that is, salary and benefits—are ultimately the most important factors in employee retention, sites also typically offer a similar mix of other programs designed to encourage retention, such as work/life balance programs, flexible work schedules, and some form of continuous education and learning programs. In addition, some of the M&O contractors we spoke with told us that, to the extent they are able, they try to accommodate the desires and expectations of more recently hired employees for opportunities for faster advancement, meaningful and challenging assignments, and recognition of high performance. To assess the effectiveness of its strategies for recruiting, developing, and retaining the NNSA staff and M&O contractors needed to preserve critical skills in the nuclear security enterprise, NNSA monitors key human capital metrics, including the length of time to hire employees and attrition. To assess the effectiveness of its M&O contractors’ strategies, NNSA uses M&O contractors’ data to monitor key human capital metrics, but these metrics do not have standardized definitions. To assess the effectiveness of its strategies for recruiting, developing, and retaining the federal workforce with the requisite critical skills to support and oversee M&O contractors, NNSA focuses on monitoring two key metrics—the length of time it takes to hire an employee and its attrition rates—and tracks employees’ progress toward completing the required training and certifications through the TQP. NNSA officials told us the length of time it takes to hire an employee is a useful metric because it is an indicator of the efficiency of their human capital management processes. Attrition rates, especially for employees leaving NNSA for reasons other than retirement, are a valid indicator of the relative attractiveness of NNSA employment.
Increases in the time it takes to hire employees and increases in the attrition rate would indicate a potential problem that would eventually make it more difficult for NNSA to attract and retain the workforce it needs to achieve its mission. Overall responsibility for maintaining a federal workforce with the necessary critical skills to carry out NNSA’s mission resides in NNSA’s Office of Human Capital Management, located at NNSA headquarters, and its site offices are also responsible for closely monitoring changes in their workforces and keeping NNSA headquarters informed of any changes. They also have direct responsibility for making sure that site office employees are maintaining the technical certifications required to perform their duties. NNSA’s Office of Human Capital Management Services, located at the Albuquerque complex, may also assist both headquarters and site office staff in monitoring these issues. To assess the effectiveness of its M&O contractors’ strategies for recruiting, developing, and retaining their workforces, NNSA monitors key human capital metrics using data the contractors collect. M&O contractors assess key human capital metrics, but these metrics do not have standardized definitions. NNSA generally gives M&O contractors the primary responsibility for identifying their workforce needs and taking the necessary steps to ensure they maintain workforces with the skills to meet the responsibilities outlined in their M&O contracts with NNSA. Accordingly, NNSA officials told us that, in 2005, they discontinued a requirement for M&O contractors to report on efforts to recruit and retain staff with critical skills, as well as more formal reporting requirements for workforce and succession planning. More specifically, according to NNSA officials, M&O contractors expect NNSA to instruct them on what they are required to do and what the contract deliverables and timelines are, but expect to be able to determine on their own how to meet their contractual obligations, including how to recruit, develop, and retain staff with the requisite critical skills. Nonetheless, M&O contractors collect data on key human capital metrics for their workforces and provide these data to NNSA directly from their own human resource data systems. All contractors also undertake some level of workforce and succession planning, although there are no formal or specific requirements directing how they do so. According to NNSA officials, these metrics vary from site to site but generally provide the same key information, including acceptance rates for offers of employment, which are benchmarked on a site-specific basis but are typically around 80 percent; attrition rates, both for retirement and non-retirement reasons, which are also benchmarked on a site-specific basis; pay comparability—whether salaries are competitive with peer institutions; benefits comparability—whether benefits are competitive with peer institutions; and the ability to fill a critical skills position within a certain number of days—usually 48 to 90 days. According to NNSA officials, these five metrics are tracked very closely by M&O contractors at all sites, and attrition, employment acceptance rates, and pay and benefits comparability data are systematically collected at regular intervals enterprisewide. NNSA officials told us that if any of these metrics indicated a problem in retention, for example, action would be taken to address it.
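The following sketch illustrates, in simplified form, how the metrics described above might be computed from personnel records once explicit, standardized definitions are chosen (for example, counting an offer as accepted only if the candidate ultimately reports for work). The records, field names, and definitions here are illustrative assumptions and do not reflect NNSA’s or any contractor’s actual data systems.

```python
# Hypothetical sketch: computing the human capital metrics described above
# under explicit, standardized definitions. All records and field names are
# illustrative; NNSA and its M&O contractors do not necessarily store or
# define their data this way.
from datetime import date

offers = [
    # each record: offer date and start date (None if the candidate never reported for work)
    {"offer": date(2011, 3, 1), "start": date(2011, 5, 2)},
    {"offer": date(2011, 6, 15), "start": None},           # accepted on the spot, never reported
    {"offer": date(2011, 9, 1), "start": date(2011, 10, 17)},
]

separations = [
    {"reason": "retirement"},
    {"reason": "resignation"},
]
average_headcount = 100  # average on-board staff for the year

# Offer acceptance rate: count only candidates who ultimately reported for work,
# the stricter of the two interpretations discussed in this report.
acceptance_rate = sum(1 for o in offers if o["start"] is not None) / len(offers)

# Time to fill: days from offer to the employee's first day, for filled positions only.
fill_times = [(o["start"] - o["offer"]).days for o in offers if o["start"] is not None]
avg_days_to_fill = sum(fill_times) / len(fill_times)

# Attrition rates: total separations and non-retirement separations per average headcount.
total_attrition = len(separations) / average_headcount
non_retirement_attrition = (
    sum(1 for s in separations if s["reason"] != "retirement") / average_headcount
)

print(f"acceptance rate: {acceptance_rate:.0%}")
print(f"average days to fill: {avg_days_to_fill:.0f}")
print(f"total attrition: {total_attrition:.1%}, non-retirement: {non_retirement_attrition:.1%}")
```

Making the definitions explicit in this way is what would allow the resulting figures to be compared consistently across sites.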
These metrics were monitored very closely, for example, by NNSA and the M&O contractors at Los Alamos National Laboratory and Lawrence Livermore National Laboratory during their 2006 transition to a new M&O contract with less generous retirement and medical benefits. There were concerns that this change could lead to a spike in attrition among highly skilled staff that could in turn lead to difficulties for the laboratories in meeting deadlines on project deliverables. Similarly, NNSA is now carefully watching the same metrics at Sandia National Laboratories because the M&O contractor substantially cut future retirement benefits, a change that took effect for employees who remained at the lab beyond the end of 2011. If the metrics indicate greater attrition than expected, the laboratory could adjust its recruiting strategies to hire more staff. NNSA also maintains close, cooperative working relationships between its federal and contractor workforces. Much of NNSA’s expertise in M&O contractor human capital issues resides in its Contractor Human Resources Division (CHRD) at its Albuquerque complex. According to NNSA officials, the work of CHRD is both critical and central to how NNSA manages human capital issues with the M&O contractors. CHRD staff are in day-to-day contact with the M&O contractors on a wide range of human capital issues, including those related to recruitment, development, and retention of employees with critical skills. For example, if an M&O contractor is having difficulty recruiting staff with particular critical skills, it can submit a supplementary Compensation Increase Plan to the NNSA site office for authorization to offer candidates higher salaries. When this occurs, NNSA headquarters and the relevant site office largely rely on CHRD to review, analyze, and make recommendations to senior management on whether to accept, amend, or reject such a request. Because most sites do not have full-time human capital subject matter expertise in residence, NNSA site office officials in particular rely heavily on CHRD both for such expertise and to monitor M&O contractors’ human capital performance metrics at all nuclear security enterprise sites. For example, officials at the Sandia Site Office told us that there is no full-time subject matter expert on human capital issues at the site office, so the office relies heavily on a CHRD staff member to inform the office’s oversight of Sandia National Laboratories on this issue. According to NNSA officials, if NNSA had concerns about what a contractor was doing or had doubts that the contractor was going to be able to continue meeting its contractual obligations because of weaknesses in its recruitment, development, and retention strategies for critically skilled workers, NNSA would raise such concerns and require that corrective actions be undertaken. However, as we noted in our February 2011 report, NNSA lacks comprehensive information on the status of its M&O contractor workforce. Specifically, the agency does not have an enterprisewide workforce baseline of critical human capital skills and levels for the M&O contractor workforce to effectively maintain the capabilities needed to achieve its mission. NNSA officials said this is primarily because NNSA relies on its contractors to track these critical skills.
As a result, we recommended that NNSA establish a plan with time frames and milestones for the development of a comprehensive contractor workforce baseline that includes the identification of critical human capital skills, competencies, and levels needed to maintain the nation’s nuclear weapons strategy. NNSA stated that it understood all of our recommendations in that report and believed that it could implement them. NNSA has taken some actions toward this recommendation. As of March 2012, NNSA had completed a draft plan and was incorporating stakeholders’ comments. NNSA officials said that they expect to complete the final contractor workforce baseline plan by May 2012. While contractor efforts may be effective at a specific site, these efforts neither ensure the long-term survival of these skills across the enterprise nor provide NNSA with the information needed to make enterprisewide decisions that have implications for human capital. NNSA officials told us that, as the federal agency responsible for overseeing its M&O contractors, NNSA needs a comprehensive and enterprisewide outlook regarding M&O contractor workforce data, particularly the identification of the critical skills needed to maintain and sustain future capabilities, and needs to verify that strategies are, indeed, in place to meet future requirements. Accordingly, NNSA officials told us that they are developing the Enterprise Modeling Consortium––an initiative to, among other things, develop the skills data and models necessary to help NNSA manage its contractor workforces in a more proactive manner. The consortium is designed to help NNSA undertake more integrated, enterprisewide M&O contractor workforce reporting and analysis and identify the skills and competencies needed by the workforce, as well as the necessary staffing levels, based on the known and projected integrated program requirements needed to implement the Stockpile Stewardship Management Plan and associated budgeted programs for NNSA, DOE, and other federal agencies. NNSA officials told us that NNSA provided $400,000 to the Enterprise Modeling Consortium in fiscal year 2012 to fund further research and development on modeling. However, according to these officials, there is significant work left to do on the Consortium, and they cannot provide an estimate for when the Consortium will be completed. Each M&O contractor collects key human capital performance data; however, we found that there are no specific, enterprisewide definitions of these data. NNSA officials told us that they have not asked M&O contractors to standardize these definitions because they believe their current system is effective. We previously reported that the lack of standard definitions for performance measurement data can significantly hinder agencies’ ability to use such data in planning and reporting. NNSA officials also told us that they believe M&O contractors have effectively used the flexibilities provided in their contracts and have demonstrated that they can identify the specific critical skills needed and take the steps needed to, by and large, sustain them. However, NNSA is now considering developing a more comprehensive enterprisewide system, the Enterprise Modeling Consortium, to track M&O contractor human capital performance metrics and other workforce data, and common definitions of performance metrics may therefore become more important.
Specifically, without common enterprisewide definitions of human capital performance metrics, NNSA may not be able to collect consistent and comparable data across all eight sites in the enterprise. For example, one of the M&O contractors’ key metrics—acceptance rates for offers of employment—may not be consistently measured across the enterprise. Human capital staff at one national laboratory told us they participated in a program they compared to “speed dating,” whereby candidates at a career fair may be interviewed for multiple positions and given offers of employment on the spot. However, job applicants may receive multiple offers of employment in a single day and may accept more than one offer to negotiate for a better salary or to have more time to consider their options. In such a situation, the employment offer to a candidate could be counted as an acceptance even if that candidate never became a laboratory employee. When asked about this scenario, NNSA officials stated that it was their understanding that M&O contractors were counting as accepted offers only those candidates who ultimately reported for work, but acknowledged there was no NNSA standard definition and that they did not know for certain how such offers were counted. Successful human capital management and workforce planning depend on valid and reliable data. These data can help an agency determine performance objectives, goals, and the appropriate number of employees, and can help it develop strategies to address gaps in the number, deployment, and alignment of employees. However, NNSA has not identified or considered the potential inconsistencies in these human capital metrics; therefore, decision makers are relying on information that may not be consistently reported. NNSA and its M&O contractors face challenges in recruiting, retaining, and developing their workforces and are using several tools to address these challenges. The work environments, site locations, and high costs of living at NNSA and M&O contractor sites pose recruiting challenges. NNSA and its M&O contractors also face shortages of qualified candidates, an aging workforce, and variable funding. NNSA and its M&O contractors are taking actions to address their current human capital challenges, where possible. Officials from NNSA site offices and M&O contractor work sites reported that their secure work environment and location make recruitment of advanced science and technology candidates more challenging. Due to the sensitive nature of nuclear weapons work, NNSA and M&O contractor sites must be more secure than most private sector laboratories or commercial plants. To meet this security requirement, laboratories and plants in the enterprise tend to be restrictive environments, isolated from security threats by geography and classification protocols. In addition to these traits, which some candidates view as undesirable, some sites are further constrained by a high cost of living. Restrictive environment. Officials from most M&O contractors reported that the restrictive environment required for nuclear weapons research and maintenance is a disadvantage in recruiting new staff with the potential to become weapons experts. Staff typically need to acquire and maintain high-level clearances and must often work in secure areas that prohibit the use of personal cell phones, personal e-mail, and social media. In particular, they told us younger candidates typically expect to stay continuously connected to their peers via cell phone and social media.
Furthermore, any research completed as classified work can be seen only within the classified community; for researchers who desire broader recognition of their work and opportunities for wider collaboration, academia or private industry may be more attractive. Because of these restrictions, most M&O contractor human resources staff told us that it was more difficult to recruit younger scientists and engineers. Isolation. An isolated location may be desirable for building or maintaining nuclear weapons, but it may not appeal to some desirable candidates with advanced degrees in science, technology, and engineering. For example, Los Alamos National Laboratory officials told us that the laboratory’s relative isolation––nearly 100 miles from Albuquerque, New Mexico––may make it less appealing to some candidates. In addition, the relative lack of other types of employment opportunities nearby may pose challenges for candidates with spouses in careers outside of science, technology, and engineering. Officials at two of the three weapons laboratories told us they focus on recruiting top candidates nationwide to gain a broad range of thought and opinion among their staff. The laboratories track the proportion of job offers accepted but cannot always ascertain the reason a candidate rejects an offer because, according to officials at Lawrence Livermore, candidates may simply state they declined an offer for “personal reasons.” In addition, some of the production plants and the test site are also in isolated locations and face some of the same challenges as the laboratories. However, these sites require fewer candidates with advanced degrees and can generally rely on the local workforce to fill other types of critical skills positions. For example, the Savannah River Site and Pantex are both located far from large cities. However, because of their relative isolation, they are among the biggest employers in their areas, and many local candidates are qualified and eager to accept positions in weapons manufacturing and maintenance. Pantex officials reported that they do not have difficulty finding most workers to perform weapons maintenance, which requires a shorter amount of on-the-job training than weapons design but nonetheless requires a set of critical skills. However, site staff have had to develop strategies to attract candidates to fill those positions that require advanced degrees. Unlike the laboratories, officials at all of the production plants told us that they focus their recruiting efforts for these positions at local and regional colleges and universities. Officials at Y-12, for example, have identified competitive science and engineering programs at universities within 300 miles of their plant in Oak Ridge, Tennessee. Y-12 officials reported that they have better results in both recruiting and retaining critically skilled workers when those workers have personal ties to the area. In contrast, M&O contractor officials from the laboratories told us that they needed to recruit from the top academic programs across the country. High cost and competition. Two enterprise sites—Los Alamos and Lawrence Livermore—are located in areas with high costs of living, which can deter qualified candidates. NNSA and its M&O contractors have flexibility to offer higher compensation for some critical skills, but some candidates are unwilling to live in high-cost areas. For example, housing in Los Alamos is expensive and scarce.
According to Los Alamos National Laboratory staff, some employees commute nearly 100 miles each way from Albuquerque every day, partly due to cost-of-living constraints. Los Alamos human resources managers reported that high housing costs are a concern among current and prospective employees. Lawrence Livermore National Laboratory, located in the San Francisco Bay Area, is also in a high-cost area. NNSA has authorized higher salaries for some critically skilled M&O contractor employees, but delays during the hiring process can give private sector recruiters an advantage with critically skilled candidates. Lawrence Livermore uses the flexibilities it has to negotiate competitive compensation, but a candidate interested in weapons work may be drawn to another site with a lower cost of living, such as Sandia National Laboratories in Albuquerque or one of the production plants. Further complicating NNSA’s recruiting efforts, qualified candidates are also in demand in the private sector, and private sector jobs may offer a work environment that many candidates find more desirable. The same pool of candidates who can excel in engineering, modeling, and simulation tasks is also attractive to high-technology firms. For example, according to M&O contractor officials at Lawrence Livermore National Laboratory, a web-based provider of DVD rentals and streaming media uses computational scientists to predict consumers’ preferences for films, which is the same skill set the weapons laboratories would use for modeling and simulation. However, this company does not have the constraints that a federal contractor has with compensation limits and a restrictive work environment. NNSA and its M&O contractors are making workforce plans but face shortages of qualified, critically skilled candidates and an aging workforce. In addition, uncertainty about future funding makes long-term workforce development initiatives challenging to execute. The laboratories have not yet experienced any critical shortages of critically skilled workers, but they all reported that finding candidates with the appropriate qualifications is a growing recruitment challenge and that a more mobile and aging workforce is a retention challenge. Shortages of qualified candidates. NNSA officials told us that qualified candidates are in short supply and that competition from science and technology-related companies in the private sector poses additional challenges. Candidates for most critical skills positions at national laboratories must meet certain criteria, including (1) an advanced degree (master’s or doctorate) in a scientific, technical, or engineering field; (2) the ability to obtain a high-level security clearance, which requires U.S. citizenship; and (3) an interest in and willingness to learn weapons design work. The requirement for U.S. citizenship in particular is becoming an increasingly difficult criterion to satisfy in the recruitment process. National laboratory officials told us that a large percentage of students graduating from top science, technology, and engineering programs are foreign nationals. M&O contractors can hire foreign nationals to work outside of weapons-related areas, but the citizenship requirement for working on programs supporting U.S. nuclear weapons is not negotiable. In addition, national laboratory recruiting staff noted hurdles in finding candidates with an interest in and willingness to learn weapons design work.
For example, officials at Sandia National Laboratories told us younger candidates with the necessary qualifications are often more interested in fields that contribute to improving the environment. In addition, because of the sensitive nature of weapons work, civilian graduate programs cannot teach weapons-specific skills, so would-be weaponeers may not know whether the work suits them until after they have invested significant time working in the enterprise. Even if candidates accept a position, they do not actually have the authorization to design nuclear weapons; current policy allows them to refurbish components within the existing stockpile, and then only when funding is appropriated for that specific activity. A more mobile workforce. NNSA and M&O contractor officials noted that a general shift from defined benefit retirement systems offering pensions to defined contribution retirement systems has made employees much more mobile and, therefore, harder to retain. A defined contribution retirement system makes employees much more mobile because, once the employee is vested––typically after a few years––their contributions to their retirement accounts are portable and therefore no longer depend on tenure with a single employer. According to NNSA officials, M&O contractors no longer expect newly hired employees to spend their entire careers in the enterprise; rather, they expect them to work for a national laboratory or production plant for an average of 5 to 10 years. Aging workforce. Many of the critically skilled employees currently filling these positions, both at the national laboratories and at other NNSA sites, are at or near retirement age, which adds further uncertainty to the projected human capital needs of the enterprise. NNSA officials told us that they are aware that many critically skilled employees are at or near retirement age, and they are tracking those retirements closely. Human capital staff from NNSA and its M&O contractors told us that it is difficult to anticipate retirement trends, especially during an economic recession. M&O contractor human resources staff said that they have found fewer staff retiring than they would have projected, due to uncertainties about their financial investments. These economic factors may have helped to preserve some critical skills within the enterprise, but officials are concerned that when the economy rebounds, eligible staff may retire at higher-than-projected levels. Such levels of attrition could leave a skills gap that would take years to fill. Knowledge transfer activities in the nuclear security enterprise tend to require multiple years to complete, but contractors have been challenged to plan and maintain these development efforts because funding varies from year to year. NNSA officials typically do not dictate whether or how much funding goes toward knowledge transfer within contractor workforces, except for specific programs at Sandia, because NNSA prefers not to fence funding for particular contractor activities.
Contractors use what NNSA calls science campaigns—which, among other things, fund research to improve the ability to assess warhead performance without nuclear testing and help to maintain the scientific infrastructure of the nuclear weapons laboratories—and life extension programs—which ensure weapons’ readiness and extend the life of existing warheads through design, certification, manufacture, and replacement of components—as a means for knowledge transfer, where more experienced weaponeers can train newer staff on weapons design and maintenance. Both science campaigns and life extension programs require long-term planning to ensure that the necessary resources are available. According to NNSA and M&O contractor officials, funding for science campaigns and life extension programs has varied over the years. M&O contractor officials at both plants and laboratories told us their knowledge transfer plans have been adversely affected in years when funding has been reduced. In recent years, certain life extension programs and science campaigns have been scaled back after plans had been made and contractor resources allocated. According to M&O contractors at the laboratories, reduced funding for life extension programs diminishes their opportunities to give their newer weaponeers hands-on experience. For example, weapons staff at Lawrence Livermore National Laboratory told us that they made knowledge transfer plans based on their approved warhead life extension projects, and that when those projects were sidelined, newer weaponeers were denied significant training opportunities. However, because funding decisions are beyond the M&O contractors’ purview, M&O contractor officials told us there is little they can do to prepare for or mitigate this challenge. NNSA and its M&O contractors reported that they are taking actions to address their human capital challenges where possible. Specifically, NNSA and M&O contractor officials told us they engaged in workforce planning to avoid potential critical skill gaps in the enterprise. According to NNSA officials, NNSA-wide workforce plans are not expected to be completed until 2013, but certain components are already in practice at various sites, such as streamlined hiring and security clearance practices and “pipeline” building for critically skilled employees. Streamlined hiring and security clearance processes. NNSA and its M&O contractors have streamlined human capital processes to attract and hire new critically skilled workers. In the past, federal hiring processes caused long waits, both for candidates awaiting a decision and for human capital officials awaiting security clearances for new hires. M&O contractor staff reported that delays had previously allowed strong candidates to find other opportunities or, if candidates were hired and waiting for a clearance, that they could lose interest in the position before they started. M&O contractor staff told us that finding work for hired-but-uncleared staff to complete was frustrating for both the new staff and their supervisors. NNSA has made reducing cycle time a priority, and officials from several sites reported that they have been able to hire and obtain clearances for employees more quickly in recent years. Building a pipeline of critically skilled employees. Both NNSA and its M&O contractor officials acknowledge that, due to the long period required to develop some critically skilled employees, they need to anticipate their critical skills needs for multiple years in the future. 
All sites have recruiting and development plans to preserve critical skills in their workforce, which they refer to as a pipeline. Sites use pipelines in two ways to avoid critical skills gaps. First, they use training and project assignments to ensure that critical skills are being developed and preserved in newer employees. For example, Lawrence Livermore has assessed its employees’ skill sets and experience, so it knows which employees are currently performing essential operations more than 25 percent of the time––called core employees––and which are being prepared to perform those operations––called pipe employees. Managers can augment a pipe employee’s expertise in an area if they see a shortage of core employees in that skill set. Second, in recruiting activities, human resources staff may maintain information about potential future candidates for weapons programs, either with contacts made in internship, fellowship, and co-op programs or by keeping records of interested candidates who were not hired. For example, Sandia is building a database of potential candidates, so that in the future it is not relying exclusively on that year’s graduating class from the top science and engineering programs. Succession planning can also inform pipeline decisions. M&O contractor officials at some sites said that they have begun to analyze potential skills gaps if a specific retirement or separation were to occur. Those M&O contractors who are undertaking these analyses can rely on managers’ assessments of their employees or software packages designed to facilitate succession planning. M&O contractors told us that this kind of planning is currently used in management or leadership capacities, but in the future it could be applied to other areas, such as critical skills capacities. Each M&O contractor has a unique way of implementing its pipeline, but M&O contractor officials from all sites told us they all realize the need to consider future retirements and mission requirements in their current hiring and development plans. For example, a senior M&O contractor manager at Sandia National Laboratories responsible for building the laboratories’ talent pipeline told us that Sandia is facing unprecedented hiring needs due in part to expected increases in retirements. He expects to experience 33 to 50 percent attrition in the next 4 to 5 years, while the total number of Sandia employees will need to remain about the same. Accordingly, Sandia officials told us they expect to have hired approximately 3,100 new employees in the 3 years ending in 2012—about 800 in 2010, 1,100 in 2011, and 1,200 in 2012. Some of the human capital challenges facing the enterprise are beyond the control of NNSA and its M&O contractors, and in these cases, NNSA has authorized increased compensation to help the sites acquire or retain the personnel they require. The site locations are fixed, and site staff cannot change the number of U.S. citizens completing graduate science and technology programs. Similarly, NNSA and its contractors have no choice but to adapt to the increased mobility of their staff resulting from the shift to defined contribution retirement systems. To mitigate these challenges, NNSA and its contractors continue to offer financial incentives, including competitive starting salaries, to recruit and retain critically skilled employees. The scale of these financial incentives can vary by location and position, but NNSA reported that this strategy has thus far been adequate for recruiting and retaining the talent it needs. 
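To put these figures in context, the arithmetic behind attrition-driven hiring can be sketched in a few lines of Python. The sketch below is illustrative only: the headcount and annual attrition rate are assumptions chosen so that the output is roughly consistent with the Sandia figures cited above; they are not NNSA or contractor planning data.

# Illustrative sketch only: estimates the annual hires needed to hold
# headcount roughly constant under an assumed annual attrition rate.
# The numbers are assumptions loosely consistent with the Sandia figures
# cited above, not NNSA or M&O contractor planning data.

def hires_needed(headcount, annual_attrition_rate, years):
    """Return the list of annual hires required to keep headcount flat."""
    hires = []
    for _ in range(years):
        departures = round(headcount * annual_attrition_rate)
        hires.append(departures)  # backfill every departure one-for-one
    return hires

# Assumptions: roughly 10,000 employees and about 10 percent annual attrition
# (33 to 50 percent over 4 to 5 years implies roughly 8 to 12 percent a year).
plan = hires_needed(headcount=10_000, annual_attrition_rate=0.10, years=3)
print("Annual hires to hold headcount flat:", plan)  # [1000, 1000, 1000]
print("Total hires over 3 years:", sum(plan))        # 3,000 -- close to the roughly 3,100 cited above

Under these assumed inputs, simply backfilling departures implies on the order of 1,000 hires a year, which is broadly in line with the hiring pace Sandia officials described.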
NNSA and its M&O contractors have taken a number of useful steps to sustain critical skills in the enterprise in the face of several challenges. NNSA has begun to implement the recommendation we made in our February 2011 report to establish a plan with time frames and milestones for the development of a comprehensive contractor workforce baseline that includes the identification of critical human capital skills, competencies, and levels needed to maintain the nation’s nuclear weapons strategy. However, while contractor efforts may be effective at a specific site, they do not provide NNSA with the information needed to make enterprisewide decisions that have implications for human capital. Without this information, NNSA’s ability to monitor the effectiveness of its and its M&O contractors’ strategies to recruit, develop, and retain the workforces needed to preserve critical skills may be hindered. In particular, without common enterprisewide definitions of human capital performance metrics, NNSA may not be able to collect consistent and comparable M&O contractor human capital data across all eight sites in the enterprise. Since NNSA is now considering developing a more comprehensive enterprisewide system to track data on critical skills through its Enterprise Modeling Consortium, this may be an opportune time to explore establishing common, uniform definitions for the human capital metrics used in this system. To improve NNSA’s ability to monitor the effectiveness of its strategies––and its M&O contractors’ strategies––to recruit, develop, and retain the workforces needed to preserve critical skills in the enterprise, we recommend that the Administrator of NNSA take the following action: As it develops its Enterprise Modeling Consortium and other enterprisewide systems for tracking M&O contractor human capital performance metrics, NNSA should consider developing standardized definitions across the enterprise, especially across M&O contractors, to ensure that human capital data are gathered using consistent, uniform metric definitions. We provided NNSA with a draft of this report for its review and comment. NNSA provided written comments, which are reproduced in appendix I. NNSA stated that it appreciated GAO’s recognition of the significant challenges NNSA faces in sustaining critical skills in its workforce and the efforts NNSA is taking to identify critical human capital skills, competencies, and levels needed to maintain the nation’s nuclear weapons strategy. In addition, NNSA stated that it agreed with GAO’s recommendation that NNSA should consider developing standardized definitions for human capital metrics across the enterprise to help ensure consistent and comparable data. NNSA also provided additional technical information, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Energy, the Administrator of NNSA, the appropriate congressional committees, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Ned Woodward, Assistant Director; Dr. 
Timothy Persons, Chief Scientist; Don Cowan; Hayley Landes; and Kevin Tarmann made key contributions to this report. Yvonne Jones, Alison O’Neill, Cheryl Peterson, Rebecca Shea, Kiki Theodoropoulos, and Greg Wilmoth provided technical assistance.
NNSA has primary responsibility for ensuring the safety, security, and reliability of the nation’s nuclear weapons stockpile. NNSA carries out these activities at three national labs, four production sites, and one test site—collectively known as the nuclear security enterprise. Contractors operate these sites under management and operations (M&O) contracts. The enterprise workforces often possess certain critical skills that can only be developed through a minimum of 3 years of experience working in a secure, classified environment. Because NNSA could have difficulty maintaining the critically skilled workforces necessary to ensure the safety, security, and reliability of the nation’s nuclear weapons, GAO was asked to examine: (1) strategies NNSA and its M&O contractors use to recruit, develop, and retain critically skilled workforces; (2) how NNSA assesses the effectiveness of these strategies; and (3) challenges in recruiting, retaining, and developing this specialized workforce and efforts to mitigate these challenges. GAO reviewed NNSA’s and its M&O contractors’ human capital documents and interviewed officials. The National Nuclear Security Administration (NNSA) and its M&O contractors have developed and implemented multifaceted strategies to recruit, develop, and retain both the federal and contractor workforces needed to preserve critical skills in the enterprise. NNSA’s recruiting and retention efforts for its federal staff focus on attracting early career hires with competitive pay and development opportunities. Its development efforts generally rely on two key programs to develop its critically skilled workforce––one that identifies needs and another that identifies the qualifications necessary to meet them. For strategic planning purposes, NNSA is also undertaking a comprehensive reassessment and analysis of staffing requirements to ascertain future federal workforce requirements. M&O contractors’ recruitment and retention strategies vary from site to site, but each site focuses on maintaining competitive compensation packages. Their development efforts vary in approach and scope and face some challenges––particularly in preserving underground nuclear testing skills. To assess the effectiveness of its own––and its M&O contractors’––strategies for recruiting, developing, and retaining the workforces needed to preserve critical skills, NNSA monitors key human capital metrics. NNSA focuses on two key metrics in assessing its own strategies—the time it takes to hire a new employee and its attrition rates. To assess the effectiveness of its contractors’ strategies, NNSA monitors key human capital metrics using data that M&O contractors collect, including acceptance rates, attrition rates, comparability of pay and benefits with peer institutions, and the ability to fill a critical skills position within a certain number of days. M&O contractors assess key human capital performance measures, but these metrics do not have standardized definitions. For example, one of the M&O contractors’ key metrics—acceptance rates for offers of employment—may not be consistently measured across the enterprise. Without this information, NNSA’s ability to monitor the effectiveness of its and its M&O contractors’ strategies to recruit, develop, and retain the workforces needed to preserve critical skills may be hindered. 
In particular, without common enterprisewide definitions of human capital performance metrics, NNSA may not be able to collect consistent and comparable data across all eight sites in the enterprise. The enterprise’s work environments and site locations pose recruiting challenges, and NNSA and its M&O contractors face shortages of qualified candidates, among other challenges. For example, staff must often work in secure areas that prohibit the use of personal cell phones, e-mail, and social media, which is a disadvantage in attracting younger skilled candidates. In addition, many sites are geographically isolated and may offer limited career opportunities for candidates’ spouses. Critically skilled positions also require security clearances—and therefore U.S. citizenship—and a large percentage of students graduating from top science, technology, and engineering programs are foreign nationals. The pool of qualified candidates is also attractive to high technology firms in the private sector, which may offer more desirable work environments. NNSA and its M&O contractors are taking actions to address these challenges where possible, including streamlining hiring and security clearance processes and taking actions to proactively identify new scientists and engineers to build a pipeline of critically skilled candidates. GAO recommends that NNSA consider developing standardized definitions for human capital metrics across the enterprise to ensure NNSA and its M&O contractors gather consistent contractor data. NNSA concurred with GAO’s recommendation.
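The acceptance-rate example above illustrates why standardized definitions matter: the same hiring events can produce different metric values depending on how offers and acceptances are counted. The following minimal Python sketch uses hypothetical counting rules and made-up numbers—not any M&O contractor's actual definitions or data—to show how two plausible formulas diverge.

# Illustrative sketch: two plausible but different definitions of
# "acceptance rate" applied to the same hypothetical offer data.
# Counting rules and numbers are assumptions, not any M&O contractor's
# actual metric definitions or data.

offers_extended = 100      # all offers made during the period
offers_accepted = 80       # candidates who said yes
offers_rescinded = 10      # offers later withdrawn (e.g., clearance issues)
accepted_and_started = 63  # accepters who actually reported to work

# Definition A: acceptances divided by all offers extended
rate_a = offers_accepted / offers_extended

# Definition B: candidates who started divided by offers not rescinded
rate_b = accepted_and_started / (offers_extended - offers_rescinded)

print(f"Definition A acceptance rate: {rate_a:.0%}")  # 80%
print(f"Definition B acceptance rate: {rate_b:.0%}")  # 70%

Two sites reporting 80 percent and 70 percent could therefore be describing identical hiring outcomes, which is why a common definition is needed before such numbers can be compared across the enterprise.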
DOD’s program to provide collective protection is managed by the Joint Project Manager for Collective Protection under the Joint Program Executive Office for Chemical and Biological Defense (JPEO). The JPEO has overall responsibility for research, development, acquisition, fielding, and other aspects of support for chemical, biological, radiological, and nuclear defense equipment, as well as medical countermeasures and installation protection in support of the National Military Strategy. The Joint Project Manager for Collective Protection, one of eight project managers in the JPEO, has the mission of developing, procuring, and fielding collective protection equipment that protects U.S. forces from chemical, biological, and radiological contamination. From fiscal year 2002 through fiscal year 2005, DOD’s procurement budget for the overall chemical and biological defense program totaled about $2.4 billion, including about $218 million for collective protection. During fiscal year 2006, the procurement budget for collective protection totaled about $31.4 million. Most of these funds, about $16.2 million, were budgeted for the procurement of expeditionary medical shelters; another $10.4 million was budgeted for installation of collective protection equipment on certain classes of Navy ships; and another $5 million was budgeted to provide collective protection for field hospitals. The Joint Project Manager for Collective Protection has no program to fund the integration of collective protection systems into buildings. Funds for this type of collective protection often come from military service construction or operations and maintenance program funds. Although the Guardian Installation Protection Program under the JPEO was originally designed to provide some funding for collective protection and other installation protection improvements, this program was primarily focused on domestic installations and its funding has been substantially reduced. In making decisions regarding whether to seek funding for collective protection under DOD’s risk management approach, commanders first conduct threat assessments to identify and evaluate potential threats to their facilities and forces, such as terrorist attacks, using intelligence assessments of such factors as capabilities, intentions, and past activities. The intelligence community continuously assesses the chemical and biological warfare threats to U.S. interests around the world, and the individual agencies issue finished intelligence products with those assessments. Under the leadership of the Office of the Director of National Intelligence, the National Intelligence Council coordinates and issues periodic national intelligence assessments reflecting the overall intelligence community’s assessments and judgments on the current and future threat from chemical and biological warfare and other threats. Following the threat assessments, commanders also use vulnerability and criticality assessments as additional inputs to the decision-making process for making investments. Vulnerability assessments are conducted to identify weaknesses that may be exploited by the identified threats and to suggest options that address those weaknesses. For example, a vulnerability assessment might reveal weaknesses in security systems, computer networks, or unprotected water supplies. 
Criticality assessments are conducted to evaluate and prioritize important assets and functions for funding in terms of factors such as mission and significance as a target, helping to reduce the potential for expending resources on lower priority assets. The intelligence community is struggling to adapt to the changing security environment, including reaching agreement on issues such as how best to provide decision makers with a candid recognition of the significant uncertainties in its ability to assess the chemical and biological threat. These problems have challenged the community’s development of assessments—such as the National Intelligence Estimate on chemical warfare, which has not been updated since 2002—to help guide DOD and other government agencies’ risk assessments and investment decisions. Generally, the two primary chemical and biological threats facing DOD installations are from adversarial nations using missiles with chemical or biological warheads and from terrorists using explosive devices or other means to release and spread chemical or biological agents. The missile threat is currently assessed with varying levels of confidence to stem primarily from a handful of countries, and DOD expects this threat to increase in coming years as these countries continue to improve their missile programs. The terrorist threat stems primarily from al Qaeda, and while presently limited regarding chemical and biological weapons, this threat is also expected to increase as al Qaeda continues to try to acquire chemical and biological agents. Despite these assessments, the intelligence community has recently recognized significant uncertainties in the quality and depth of intelligence about those threats. Such uncertainty raises questions about the operational impact that might be sustained during an attack and the actual threat posed by our adversaries, and it is thus critical information for officials making risk management decisions on investments to protect U.S. forces. However, while the intelligence community has been able to work together and issue a new 2006 National Intelligence Estimate assessing and recognizing the uncertainties in the biological warfare threat to help decision makers, it has not been able to issue a revised national intelligence estimate on the chemical warfare threat since 2002. The possibility of attack from nation states using missiles—or, in some cases, artillery or Special Forces—to spread chemical or biological agents is viewed as posing a significant threat to U.S. overseas installations. DOD intelligence assessments indicate that the current threat stems mainly from a handful of countries, and DOD expects this threat to increase. Intelligence estimates assess that several other countries also have chemical and biological warfare capability and the missiles to deliver agents. However, these countries are not assessed as major threats since our relationships with them are not as adversarial as with the primary threat countries. The intelligence community assesses that the primary threat countries have the capability to produce at least some types of chemical or biological agents, although there is considerable uncertainty regarding many important aspects of these countries’ chemical and biological warfare programs. They are also assessed to possess the missiles to deliver them, even though in most cases it is unclear whether they have actually produced, weaponized, or stockpiled any agent. 
Reports also indicate that the missile inventories of these countries are composed primarily of SCUDs or their variants, with ranges of 300 kilometers to 700 kilometers. Figure 1 shows a SCUD B missile with launcher. In addition, the three primary threat countries are assessed to be not only actively pursuing technological improvements to these SCUDs and other ballistic missiles to increase accuracy, range, and survivability but also pursuing the development of new missile systems. For example, intelligence reports indicate that one country is trying to extend the range and accuracy of some of its existing ballistic missiles and is also developing a solid propellant medium range missile with a range of at least 2,000 kilometers. Similarly, intelligence reports indicate that another of the primary threat countries continues to pursue an intercontinental ballistic missile and continues to develop extended range SCUDs and variants for its medium range missiles that will likely enhance its warfighting capabilities and complicate U.S. missile defense systems. Intelligence officials believe that terrorists, primarily al Qaeda, continue to try to acquire chemical and biological agents and therefore pose a threat to overseas DOD installations. While the actual status of al Qaeda’s acquisition and development of chemical and biological agents is unclear and its access to effective delivery methods presently is limited, some intelligence agencies expect this threat to increase. For example, some intelligence reporting projects that over the next decade terrorists are likely to conduct a chemical attack against U.S. interests either at home or overseas. Future delivery methods could include such devices as balloons, crop sprayers, mortars, or unmanned aerial vehicles. During our review, 22 countries overseas were assessed as being at high risk of some type of terrorist attack. DOD expects both adversarial nation states and terrorists to increase their chemical and biological warfare capabilities. However, as acknowledged by intelligence agencies and officials, and highlighted by the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction in its report to the President, the intelligence community has struggled to handle the changing security environment. These struggles include significant uncertainty regarding important aspects of the chemical and biological threat and how to communicate assessments of those threats. These problems can undermine the ability of the intelligence community to develop assessments—such as the National Intelligence Estimate on chemical warfare, produced under the leadership of the Director of National Intelligence—that would help guide DOD and other government agencies’ risk assessments and investment decisions. That estimate has not been updated since 2002. As discussed in the Commission’s report, many of the intelligence community’s assessments on secretive nations like Iran and North Korea rely largely on inherently ambiguous indicators, such as capabilities assessments, indirect reports of intentions, deductions based on denial and deception efforts associated with suspect weapons of mass destruction sites, and ambiguous or limited pieces of “confirmatory” evidence. As a result, significant uncertainty arises regarding important aspects of states’ actual ability to employ chemical and biological warfare agents in ways needed to cause large-scale casualties. 
However, as noted in the Commission’s report, in past years the intelligence community, in an attempt to provide a “consensus” assessment, may not have clearly communicated to decision makers that uncertainty and the dissenting opinions about assessments based on that information. According to intelligence officials, in the wake of the intelligence failures in Iraq, the community is attempting to develop reforms, such as providing assessments that more candidly recognize the uncertainties in the intelligence and dissenting views regarding the meaning of such information, as well as reforms in areas such as the terms and definitions used to describe the severity of the threat. According to these officials, notwithstanding the attempts at reforms, there are continuing difficulties in gaining agreement on such issues, which can delay issuance of assessment information. For example, we were able to obtain the recent 2006 national intelligence estimate on the biological warfare threat. However, we were not able to obtain a recent national intelligence estimate on the chemical warfare threat because it remains in development. The chemical warfare estimate was last updated in 2002. With respect to specific chemical and biological warfare capabilities of individual nation states, we found significant uncertainties regarding the ability of the primary threat countries to use sophisticated dissemination techniques to effectively disperse chemical and biological agents and cause large-scale casualties. Most ballistic missiles currently in their arsenals, such as the SCUD and its variants, are relatively inaccurate, and this inaccuracy increases with the range to the target. Accordingly, techniques such as “air bursting” or “submunition” warhead loads may be used to compensate for this inaccuracy. Air bursting, which is literally the bursting of a warhead filled with chemical or biological agents in the air, can dramatically increase the area of contamination compared to the use of warheads bursting on the ground. Similarly, submunitions—which are small bomblets inside a warhead—also improve agent dissemination by covering an area more evenly than bulk filled munitions. Submunitions also provide the opportunity to deliver agents such as sarin that are not robust enough to survive release subsequent to a ground detonation or supersonic airburst. There is also significant uncertainty regarding terrorists’ ability to acquire and disseminate chemical and biological agents. Unclassified intelligence information states that al Qaeda is interested in acquiring or producing chemical warfare agents such as mustard gas and sarin, but it is unclear if it has actually acquired any chemical or biological agents. However, as we reported in 1999, there are many technical challenges that terrorist groups such as al Qaeda would have to overcome in order to cause mass casualties using sophisticated chemical and biological warfare agents. For example, while terrorists do not need specialized knowledge or dissemination methods to use simple toxic industrial chemicals such as chlorine, they would need a relatively high degree of expertise to successfully cause mass casualties with sophisticated agents, such as VX and anthrax. As such, some intelligence reporting concludes that, given our limited access to the al Qaeda organization and its heightened sense of operational security, the U.S. intelligence community may not be able to confirm that al Qaeda has that capability until it is actually used. 
In addition to the uncertainty of the threat discussed previously, commanders face the difficulty of identifying their vulnerability to that threat and determining how best to protect against it. In judging the vulnerability of his or her command to that threat, the commander determines whether to have collective protection, and if so, what type of protection is most appropriate and what functions need to be protected. At the critical facilities identified by the combatant commanders, we found that collective protection equipment was not widely or consistently available. The reasons for the limited and inconsistent fielding of collective protection appear to be rooted in unclear and inconsistent guidance on its use. For example, while DOD guidance encourages the use of collective protection, it does not prescribe specific criteria to guide strategic decisions on its use. Moreover, guidance provided by the individual military services—excepting the Air Force—is often vague, inconsistent, or both with respect to key issues. Such issues include whether local commanders make the decision to provide or not provide the protection or the services prescribe those decisions, as is done in the Air Force; what type of collective protection is most appropriate; and what functions need to be protected. Similarly, we also found that collective protection equipment shortages and inconsistent guidance affected some major expeditionary warfighting assets, such as infantry units, naval vessels, and medical units. The intelligence uncertainties and vague and inconsistent guidance all combine to make it difficult for commanders to make clear risk management assessments of the need for collective protection and the risks of not providing it. Officials from the four regional combatant commands responsible for overseas operations identified 125 sites in 19 countries as critical to their operations, 97 of which did not have collective protection. Moreover, two-thirds of the critical sites in high threat areas did not receive collective protection. In addition, the department did not have an overall DOD-wide list of sites formally identified as critical despite long-standing requirements to identify and prioritize such sites. As a result, in conjunction with several DOD offices, we developed a definition of the term critical and requested that the four regional combatant commanders identify sites meeting that definition. The 125 sites identified as critical by the combatant commanders are located on 64 large installations and other facilities and include many command and control centers; many intelligence, communications, logistics, and medical facilities; and a number of air bases. These facilities were spread across the Middle East, Europe, Asia, and the Pacific and were largely concentrated in four countries. As shown in table 1, 28 of these sites (22 percent) had collective protection equipment available to allow personnel to continue operations in case of attack. The limited amount of collective protection we found is consistent with the findings of our earlier reports dating back to at least the late 1990s. For example, in 1997, we reported that few defense facilities in Southwest Asia and South Korea had collective protection. While collective protection was limited in all commands, it was also not consistently fielded in high threat areas. 
As shown in table 1, 24 of the 28 sites with collective protection equipment were located in areas assessed to be at high risk of attack by terrorists or within range of missile attack by the primary threat countries. However, these 24 sites represented only about one-third of the 71 critical fixed facilities in high threat areas. For example, 12 of the sites with collective protection were located in one country, which is assessed to have a moderate threat of attack from terrorists but is within range of attack from a nearby hostile nation. The Army identified 4 of its sites in this country as critical to its mission, but only 2 of the sites had collective protection. Additionally, a 2004 DOD security assessment identified 1 of those 2 sites as having major shortcomings in collective protection equipment, which raised questions about the command post’s viability as a warfighting command center. The Air Force provided all 10 of the critical sites on its air bases in this country with collective protection, but critical air bases in another nearby country did not have collective protection despite also being in range of missile attack by the hostile neighbor. Air Force officials told us they view the threat in this country as moderate. Similarly, the Navy provided collective protection to its five critical sites in one country in the Middle East, which is assessed as being at high threat of terrorist attack and within range of missile attack from a nearby hostile country. However, none of the four critical sites on a key air base in another nearby country were provided with collective protection, despite also being assessed at high threat of terrorist attack and being within range of missile attack from the same hostile country. According to Air Force officials, while there is no collective protection currently at the base, they plan to provide such equipment in the future. While it is difficult to precisely specify the ultimate reasons for the limited and inconsistent fielding of collective protection, the quality of guidance on the use of the equipment appears to have been a contributing factor, since the guidance was often unclear and inconsistent. DOD does not provide clear overarching strategic guidance on many key issues that would help commanders make decisions on the use of collective protection. Military services and installation commanders are generally expected to address key issues that include what level of threat justifies the investment in collective protection. DOD guidance generally encourages the use of collective protection and provides information on, among other things, the nature of the chemical and biological threat to installations and forces, the types of equipment available, and the pros and cons of using each, but it does not prescribe criteria to guide the use of collective protection. For example, in determining what level of threat justifies the investment in collective protection, the commander assesses vulnerability from both terrorist attack and missile attack. However, as discussed earlier, intelligence on these threats does not make clear whether terrorists, such as al Qaeda, possess the capability to produce mass casualties through the use of chemical or biological weapons. 
A number of officials told us that they believed the provision of collective protection equipment should be targeted only at installations at high risk of missile attack, given limited DOD resources and the likelihood that terrorist attacks alone could not produce large-scale damage. However, the guidance does not establish criteria differentiating between the two types of attacks, which would help guide decision making. In addition to DOD’s lack of guidance, military service guidance on the use of collective protection, excepting the Air Force, is often vague, inconsistent, or both. For example, the Army, the Navy, and the Marine Corps do not require collective protection to be provided at their critical fixed facilities or other fixed facilities. Rather, these services rely on the discretion of their local installation commanders to determine whether to have the protection, what type of collective protection should be provided, and which functions should be protected. In contrast, Air Force policy requires that, in the absence of guidance from higher commands, Air Force commanders plan to provide collective protection for 30 percent of the personnel on their bases in areas judged by the intelligence community to be at high risk of attack from terrorists or other nonstate actors or from missiles launched by adversarial nations. Consistent with this requirement, the Air Force had the most critical sites with the equipment. Of the 50 critical sites the Air Force operated, 16 had collective protection. Meanwhile, the Army operated 51 critical sites and provided 7 sites with collective protection, while the Navy operated 23 critical sites and provided 5 with collective protection. Once the decision to provide collective protection equipment is made, the services—again excepting the Air Force—lack specific guidance to determine what type of protection is most appropriate and what functions need to be protected. The critical facilities identified in our review used both integrated systems—with overpressure and filtration systems built into existing buildings—and simple portable tent systems. Eighteen of the 28 sites had the overpressure and filtration systems integrated into the construction of the buildings, while 10 sites had portable systems such as tents with liners and filtration systems, which could be erected inside the buildings or set up at locations around the installations. While both can provide protection for groups of various sizes, costs vary significantly depending upon factors such as square footage to be protected and other construction elements. According to officials, the portable tent systems may cost as little as $18,000 depending on the configuration. However, a recent installation of an integrated system at Andrews Air Force Base in Maryland cost about $1.8 million. In addition, local commands must divert existing operations and maintenance funds to pay for the replacement filters and other costs to sustain the integrated collective protection systems over time. According to officials, this creates a significant disincentive to the initial procurement of integrated collective protection equipment. Finally, we also found little clear guidance regarding which functions should be protected. 
Commanders generally do not have guidance to help them determine whether to provide protection for command and control functions, medical treatment facilities, areas for rest and relief, and other base functions, or to cover only parts of these functions. Only the Air Force provided clear guidance on this issue. As discussed above, Air Force regulations state that commanders should plan to provide collective protection for at least 30 percent of base personnel. These regulations also describe requirements for coverage of specific functions, including command and control, medical facilities, and dormitories and dining facilities, and the level of protection required for each. During our discussions at the combatant commands, we noted that the other services often had different views on the costs and benefits of the Air Force requirement. The intelligence uncertainties and vague and inconsistent guidance all contribute to the difficulty commanders face in making clear risk management assessments of the need for collective protection or of the risk of not providing it. In the absence of clear guidance to aid such decisions, the potential for inconsistent and inefficient allocation of DOD resources increases. Similar to the inconsistent availability of collective protection for critical overseas fixed facilities, collective protection equipment shortages and inconsistent requirements also affected some major expeditionary warfighting assets, such as infantry units, naval vessels, and medical units (see table 2). While differing missions and other factors may explain inconsistencies in the use of collective protection, no clear guidance was evident in many cases to explain why forces operating in similar environments were not provided the same level of protection against chemical or biological attack. Despite operating in similar environments in areas such as Iraq and Afghanistan, Army and Marine Corps infantry units had different requirements for collective protection. For example, according to Army officials, the Army requires its light infantry units at the battalion level to provide collective protection equipment (M20/M20A1 Simplified Collective Protection Equipment Shelters), but the unit commander must make the decision to actually request this equipment. Army officials told us that as of August 2006, commanders had requested and received 2,506 of the total Army authorization of 3,558 (70 percent). However, they could not provide details on the units requesting the shelters because their systems do not track non-major end items. In contrast, Marine Corps officials stated that they had no requirement for collective protection and no systems on hand. According to these officials, the current systems that are available are too large and bulky to be carried with their fast-moving infantry units. They prefer to depend on avoidance and decontamination techniques to mitigate any potential chemical or biological threat. However, Marine Corps officials also acknowledged their potential vulnerability and the need for collective protection in documents dating back to at least 2002. Despite the acknowledged need for the systems, concerns were subsequently raised that analyses of the workload for setup, installation, and maintenance, as well as formal techniques and tactics on their use, would be needed before any collective protection systems could be fielded. According to Marine Corps officials, these requirements had not been completed at the time of our review. 
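The coverage rates reported in this section follow directly from the counts cited above. The short Python sketch below reproduces that arithmetic using only figures already reported here; the helper function itself is purely illustrative and is not a DOD or GAO tool.

# Illustrative arithmetic only: reproduces the coverage percentages cited
# in this section from the reported counts. The helper function is a
# hypothetical convenience, not a DOD or GAO tool.

def coverage_pct(on_hand, required):
    """Return on-hand items as a percentage of the stated requirement."""
    return 100.0 * on_hand / required

figures = {
    "Critical sites with collective protection": (28, 125),               # ~22 percent
    "High-threat critical fixed facilities protected": (24, 71),          # ~34 percent
    "Army light infantry shelters requested and received": (2506, 3558),  # ~70 percent
}

for label, (on_hand, required) in figures.items():
    print(f"{label}: {coverage_pct(on_hand, required):.0f}% ({on_hand} of {required})")

Running this arithmetic confirms the figures cited above: roughly 22 percent of critical sites, about one-third of the high-threat fixed facilities, and about 70 percent of the Army shelter authorization.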
Navy guidance has for many years required ships, such as aircraft carriers, destroyers, frigates, and some supply ships, to have prescribed levels of collective protection equipment. However, as shown in table 3, only about 47 percent of the naval vessels required to have collective protection actually have such protection installed. According to Navy officials, many of these ships were built prior to the requirement for collective protection, and funds to retrofit these ships have been limited. Navy guidance requiring collective protection also appears outdated, inconsistent, or both in some areas. For example, according to Navy officials, funding limitations have required them to focus existing resources on those ships operating closer to shore in “littoral” waters, since these ships are more likely to be exposed to chemical or biological agents than ships operating further out in deeper “blue water.” However, the Navy guidance continues to require that aircraft carriers, which generally operate in deep water far from shore, have collective protection installed. Navy officials told us that they believed that the requirement was originally based on the threat of Cold War Soviet naval tactics, and that the guidance had not yet been updated to reflect the current threat environment. We also found inconsistencies in the guidance regarding supply ships, such as station ships (required) and shuttle ships (not required), operating in littoral waters. In addition, we found inconsistencies and shortages of collective protection at medical units, such as small units that travel with their parent infantry units and large hospital systems designed to be set up in rear areas. These problems create operational limitations and increase risks to U.S. forces and capabilities. For example, Army infantry units contain medical support groups, such as battalion aid stations, that deploy with the parent unit into battlefield areas. Army guidance requires these medical units to have a certain number of Chemical and Biological Protective Shelters, which consist essentially of tents with protective linings and overpressure systems attached to the backs of transport vehicles (see fig. 2). In contrast, the Marine Corps had not established any requirements for its medical units to have these systems. According to Marine Corps officials, avoidance and decontamination strategies are their preferred method for handling chemical or biological events while operating on the battlefield. In addition, according to DOD officials, the Marine Corps often moves in small air and sea transports with little room for collective protection equipment, consistent with its traditional strategic mission. As a result, Marine Corps units may use Army medical support in the areas where they are deployed. However, the increasing use of joint operations, in which both services operate in the same geographic area at the same time, may be blurring traditional missions. While the Army requires its medical support units to have collective protection systems, Army figures indicate that only 191 of the 1,035 required systems (18 percent) were on hand as of the end of fiscal year 2005. This situation is similar to the one we described in our 2002 review of Army medical units in South Korea, when only about 20 percent of the required systems were scheduled to be purchased. The JPEO, which procures these systems for the military services, has plans to procure additional systems through fiscal year 2014. 
However, the planned funding for these systems is lagging behind requirements, and the office will not be able to procure all the needed systems by 2014. Officials told us that only about 60 percent of the funding needed has been budgeted and that they need an additional $323 million to fulfill all requirements. Collective protection for larger expeditionary hospital operations is provided by large portable tent systems with liners and pressurized interiors, which may be combined to provide 200 to 300 beds or more. The Army, Navy, and Air Force all have versions of these mobile hospitals (see fig. 3). However, while the Air Force generally met its goal, shortages and other serious problems continue to affect Army and Navy medical facility collective protection. According to Army officials, the Army acquisition goal was to have 23 of these systems on hand, but it was only able to obtain 14 because of funding limitations. Similarly, Navy officials told us that they only had enough tent liners to protect about 460 beds of the approximately 2,220 total bed spaces currently required. Moreover, the collective protection liners used to make the hospital tent systems resistant to chemical and biological attack were not located with the tents, which were prepositioned at various sites around the world. The liners were located at a site in Virginia and would need to be moved to the same locations as the hospital tent systems in order to provide a collective protection capability. According to Navy officials, the Navy is aware of this shortfall and is in the process of redesigning the requirements to provide collective protection for its mobile fleet hospital tent systems. We reported similar shortfalls in collective protection equipment for Army, Navy, and Air Force portable hospital systems in South Korea in our 2002 report. Our current review found that the Air Force generally met its goal for the transportable hospital systems. According to data provided by the Air Force, as of May 31, 2006, it had 156 of 162 (96 percent) required systems on hand. Marine Corps officials told us that the Corps does not establish such large transportable hospital operations and has no systems in stock, instead relying on the Navy to provide for Marine needs in this area. Our prior work and that of several DOD offices have highlighted DOD’s fragmented framework for managing the strategic use of collective protection and other installation protection activities. This fragmentation, combined with the lack of agreed-upon installation priorities guided by the robust application of risk management principles, makes it difficult for the department to ensure that funding resources are allocated efficiently and effectively. More specifically, opportunities to target funds to improve preparedness and protect critical military personnel, facilities, and capabilities from attacks using weapons of mass destruction may be lost. Responsibilities for installation protection activities are spread over a variety of DOD organizations and programs. These programs are designed to address protection from threats ranging from terrorist attacks to industrial accidents; however, their different operating definitions and evolving concepts create gaps and inefficiencies in collective protection program coverage. In a 2004 report, we recommended that DOD designate a single authority with responsibility for unifying and coordinating installation protection policies. 
However, although DOD agreed with that recommendation, it has not yet implemented it. These problems also prevent DOD from reaching agreement regarding departmentwide standards to identify which facilities and infrastructure are critical and to compile an overall list of critical facilities prioritized for receiving funds for protection improvements. DOD policies and resulting management activities that direct the strategic use of collective protection and other installation protection activities are fragmented and disjointed. Responsibilities for key installation protection activities such as (1) policy and oversight, (2) installation threat and vulnerability assessments and risk management decisions on appropriate protections, and (3) funding programs for installation protection improvements are spread across a variety of programs and DOD organizations, as shown in figure 4. No single DOD organization has responsibility for developing unified policy and coordinating these activities. These various DOD organizations bring their own approaches to policy and programs for installation protection, and the different approaches can result in unresolved conflict and inefficient application of resources. For example, responsibilities for installation protection (including collective protection) reside primarily with installation commanders, regional combatant commanders, the military services, and the Joint Staff. At the same time, responsibilities for policy and oversight of installation protection activities, such as the antiterrorism program, are spread among the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict, the Assistant Secretary of Defense for Homeland Defense, and others. Special Operations and Low Intensity Conflict developed worldwide antiterrorism policies and standards, while Homeland Defense is responsible for providing policy and oversight of domestic antiterrorism activities. Responsibilities for making installation threat and vulnerability assessments and risk management decisions on collective protection or other needed improvements are also spread across multiple organizations and levels. For example, local installation commanders have basic responsibility for these activities, but the military services, combatant commanders, and others with responsibilities for missions taking place at the installations are also involved. At the same time, organizations such as the Defense Threat Reduction Agency and the Joint Staff are involved in providing over 20 different types of formal assessments of installation vulnerabilities. For example, the Defense Threat Reduction Agency conducts Joint Staff Integrated Vulnerability Assessments, which examine the vulnerability of large installations with 300 or more personnel to a terrorist attack and the potential for mass casualties and large-scale loss of life. The agency, as well as others, may also conduct “full spectrum vulnerability assessments.” As the name implies, these assessments examine an installation’s vulnerability to a wide range of threats that could interrupt its ability to fulfill its mission, including attacks using chemical or biological agents, attacks against information networks, and attacks against supporting non-DOD infrastructure. Similarly, funding for installation protection improvements also involves a variety of organizations. For example, the combatant commanders have no programs of their own to fund improvements at overseas facilities important to their warfighting needs. 
According to combatant command officials, much of the funding for improvements at the overseas installations comes from the construction or operations and maintenance programs of the military services that operate them. The JPEO Guardian Installation Protection program provided another potential source of funding, but the program has faced a number of problems. The Guardian program was initiated in 2004 to provide improvements to protect critical facilities from chemical, biological, radiological, or nuclear attacks by adversaries ranging from terrorists to nation states. The program was initially provided approximately $1.2 billion in funding for improvements at 185 domestic and 15 overseas sites from fiscal years 2004 through 2009. However, DOD recently cut funding for the program by about $760 million. According to officials, because of the cuts, funding for collective protection and other such improvements was stopped while DOD reviewed the role of the program and its list of projects. Antiterrorism programs also provide some potential funding. Oversight of resources used for overall antiterrorism activities is conducted by the Assistant Secretary for Special Operations and Low Intensity Conflict, while oversight of resources used for domestic antiterrorism activities is conducted by the Office of the Assistant Secretary for Homeland Defense. We and several DOD offices have reported on problems associated with the fragmented installation protection program structure. For example, in August 2004, we reported that the large number of organizations engaged in efforts to improve installation preparedness and the lack of centralized authority and responsibility to integrate and coordinate departmentwide installation preparedness efforts were hindering overall preparedness and DOD’s ability to ensure that its resources were applied efficiently and effectively. Officials at the department, Joint Staff, service, and installation levels told us that the lack of a single focal point to integrate departmentwide installation preparedness efforts among the many involved organizations adversely affected their ability to resolve disagreements and develop needed overarching guidance, concepts of operations, and chemical and biological defense standards. Because of the absence of departmentwide standards, military services and installations faced problems in prioritizing requirements for funding and personnel resources, since such standards provided the basis for calculating requirements. We recommended that DOD designate a single authority with the responsibility to coordinate and integrate worldwide installation preparedness improvement efforts at the department, service, and installation levels. In May 2006, the DOD Inspector General reported that the problems with the fragmented and disjointed program structure were continuing. According to the report, responsibilities for installation protection activities continued to be spread across multiple programs and organizations, with no single DOD organization responsible for unifying and coordinating these activities. Problems such as an inadequate program structure, inadequately coordinated program concepts, and a lack of generally accepted terminology describing concepts and doctrine resulted in confusion and disagreement in attempts to establish policy and assign responsibilities, inefficient application of resources, and the lack of a strategic vision balancing all areas of program responsibility. 
For example, the report found that the lack of clear lines of authority and responsibilities for installation protection activities between the Assistant Secretary for Special Operations and Low Intensity Conflict and the Office of the Assistant Secretary for Homeland Defense was causing confusion and inefficiency. In this regard, coincident with the establishment of the Homeland Defense office in 2003, the Secretary of Defense called for development of a chartering DOD Directive within 45 days to formalize the responsibilities of the new Assistant Secretary and clarify the relationship between Homeland Defense and other offices, such as Special Operations and Low Intensity Conflict. However, according to officials in Homeland Defense, the chartering directive was never formalized because of problems in coordinating with the many DOD offices involved, the continuing evolution of their responsibilities, and the focusing of resources on developing the June 2005 Strategy for Homeland Defense and Civil Support. In June 2006, DOD’s Assistant to the Secretary of Defense for Nuclear, Chemical, and Biological Programs and the Joint Requirements Office also issued a study on installation protection confirming many of the problems identified earlier by us and the DOD Inspector General. This study was called for as a result of the funding cuts in the Guardian Installation Protection Program. The study pointed out that problems with the alignment of antiterrorism, chemical and biological defense, critical infrastructure protection, and other programs create difficulty in providing military installations with capabilities for all-hazard planning, preparedness, response, and recovery. The study also noted that DOD organizations were not developing guidance to sufficiently resolve problems related to inadequate policy, standards, and doctrine in these areas. Moreover, it also reported that despite agreement with our 2004 recommendation calling for designation of a single authority responsible for coordinating and integrating overall installation protection efforts, DOD still had not done so. This study made a series of recommendations designed to integrate and unify installation protection and emergency preparedness programs and concepts. This study also developed a plan to improve installation protection at DOD facilities, recommending that some $560 million be provided for installation protection improvements over 4 years, with priority given to overseas facilities. However, the amount of funding approved by DOD was sufficient only for the lowest levels of improvements and did not include funding for collective protection and chemical and biological detection improvements. At the close of our review in August 2006, DOD announced a new reorganization that will affect some of the organizations involved in installation protection activities. The need for reorganization was identified in the February 2006 Quadrennial Defense Review Report as necessary to respond to the changing security threat by reshaping DOD offices to better support the warfighting combatant commands and respond to the new threat environment. According to DOD officials, the specific policy and organizational changes that will result from the reorganization will develop over the coming months. Program fragmentation can also prevent DOD from reaching agreement in prioritizing facilities for protection funding. 
A long-standing series of directives and instructions, as well as DOD’s June 2005 “Strategy for Homeland Defense and Civil Support,” have recognized the importance of prioritizing installations in light of constrained resources and called on DOD to identify critical infrastructure and to prioritize these assets for funding improvements. Accordingly, early in our review, we requested a list of critical overseas facilities from the Directors for Critical Infrastructure Protection and Combating Terrorism, Office of the Assistant Secretary of Defense for Homeland Defense, as well as from other offices throughout DOD and the military services. However, DOD was unable to provide us with such a list. According to DOD officials, there are a variety of listings of critical facilities and other infrastructure. However, each is compiled from the limited perspective of the military service or other DOD organization responsible for the asset, and not from an overall DOD strategic perspective. According to these officials, gaining agreement on DOD-wide priorities is difficult because of the fragmented organizational structure, as well as policy and program changes following September 11, 2001. According to the May 2006 DOD Inspector General report, a lack of stable funding and centralized prioritization and oversight for critical infrastructure improvements has created problems throughout the combatant commands. According to the report, determining which assets were critical depended on mission requirements that varied with the level of command. Thus, an effort to protect an asset critical to a combatant commander from his or her warfighting perspective could receive a low priority from an installation commander who may be focused on a different, non-warfighting perspective. Similarly, DOD’s June 2006 study of installation protection was directed to create a prioritized list of installations to receive funding for protective measures, but was unable to do so. According to the report, it could not develop the list because of the short time frame allowed for completion of the study and the controversial nature of installation prioritization. In recognition of this problem, we sent a letter to the Secretary of Defense in November 2005 requesting clarification of the situation and DOD actions to correct the problem (see app. II). DOD’s response (see app. III) acknowledged the importance of prioritizing its critical assets and stated that it published DOD Directive 3020.40, Defense Critical Infrastructure Program, in August 2005. This directive called for the development of policy and program guidance for DOD-wide critical infrastructure, including criteria and methodology to identify and prioritize these assets. At the time of our review, this effort was being conducted through the Defense Critical Infrastructure Protection Program under the Office of the Assistant Secretary for Homeland Defense. In addition, this office was also directed to conduct an assessment of all of the activities that contribute to the department’s ability to achieve mission assurance to identify program gaps and other problems that could interfere with mission assurance. According to program officials, the framework for prioritizing DOD’s critical infrastructure was expected to be published in August 2006 but has not yet been formally adopted. It is unclear when the assessment of program gaps will be completed. 
It is also unclear to what extent the Assistant Secretary for Homeland Defense will address aspects of critical infrastructure protection related to overseas facilities identified as critical to warfighting missions. As we and others have observed for several years, notwithstanding the emergence of adversaries that can use chemical and biological weapons, the fielding of collective protection equipment at both critical overseas fixed facilities and major expeditionary warfighting assets remains limited and inconsistent. Assessing the need and priority for such equipment is difficult because of the significant uncertainties in the intelligence about the nature of the chemical and biological threat. While the intelligence community recognizes the need to assess and communicate these uncertainties about the chemical warfare threat, this information has not been available to the agencies that need it. Specifically, the intelligence community, under the leadership of the Director of National Intelligence, has not been able to complete an up-to-date National Intelligence Estimate on chemical warfare in part due to changing assessment and communication policies, as well as issues surrounding the basis or evidence for the assessments. In our view, an updated chemical warfare National Intelligence Estimate is needed to provide a critical input and basis for decisions on investments in chemical warfare defenses, including collective protection. Uncertainty about the threat can lead to resources being invested in assets where they may not be needed. Conversely, not providing collective protection where it may be needed can place military personnel and operations at increased risk. In addition, allowing the current fragmented and disjointed framework for managing installation protection policies to continue without agreed-upon priorities for funding or clear requirements and service guidance on the appropriate use of collective protection further increases the likelihood that limited DOD resources will be used inefficiently and ineffectively. DOD's ongoing reorganization provides a good opportunity to review the policy and programmatic gaps and inconsistencies, gain the agreement of the competing organizations needed to integrate the policies and operating concepts, and address the long-standing need for an overarching authority in this area. In light of the need for the most current intelligence estimates to help guide the government's—including DOD's—risk assessments and investment decisions, we are recommending that the Director of National Intelligence identify the impediments interfering with his ability to update the chemical warfare National Intelligence Estimate and take the necessary steps to bring the report to issuance. To ensure that the problems in the overall installation protection and collective protection policies and programs do not continue to place military personnel and operations at increased risk and undercut the efficiency and effectiveness of DOD resource allocations, we are recommending that the Secretary of Defense—as part of the ongoing reorganization—take the following four actions to ensure better coordination and integration of these activities and clearer guidance on key operating concepts. 
To ensure better coordination and integration of the overall installation protection activities, we are recommending that the Secretary of Defense designate a single integrating authority with the responsibility to coordinate and integrate worldwide installation preparedness policies and operating concepts and assign this single authority the responsibility to oversee efforts to gain DOD-wide agreement on criteria for identifying critical facilities and to develop a system for prioritizing critical facilities and infrastructure for funding protection improvements. To help ensure clear and consistent guidance in the chemical and biological collective protection program, we are recommending that the Secretary of Defense direct the Joint Staff and military services to develop clear and consistent criteria to guide overarching strategic decisions on the use of collective protection at DOD facilities, including issues such as whether decisions on the need for collective protection should be prescribed or left to commanders' discretion, the use of integrated overpressure and filtration systems versus portable structures, and what mission functions must be protected, and direct the Joint Staff and military services to review their current policies and, where appropriate, develop consistent requirements on when collective protection is required for medical units and naval, ground, and air forces. In written comments on a classified version of our draft report, DOD and the Director of National Intelligence both generally agreed with all five of our recommendations. Their unclassified comments on the classified version are reprinted in appendices IV and V. DOD also provided technical comments, which we incorporated as appropriate. Regarding our first recommendation that the Director of National Intelligence identify the impediments interfering with his ability to update the chemical warfare National Intelligence Estimate and take the necessary steps to bring the report to issuance, the Director's office stated that the National Intelligence Council began the process of developing that estimate several months ago and expects the update to be published in early 2007. In this regard, DOD also called for the Director of National Intelligence to prepare an integrated, worldwide chemical, biological, radiological, nuclear, and high-yield explosive threat assessment. DOD stated that current assessments are fragmented and not consistent across geographic areas. We agree that better coordinated and integrated threat assessments, consistent across geographic regions, could help improve DOD's decisions regarding investments in the security needs of U.S. forces worldwide. We encourage DOD to make this recommendation directly to the Director of National Intelligence. DOD also concurred with our second recommendation that the Secretary of Defense designate a single integrating authority with the responsibility to coordinate and integrate worldwide installation preparedness policies and operating concepts. DOD acknowledged that, as currently practiced, installation preparedness is not a formal program within the department. DOD also noted that while it agreed with our recommendation, it believed that the combatant commanders should be responsible for their respective areas of responsibility and determine collective protection requirements based on operational needs. 
We agree that the combatant commanders should have flexibility to recognize special operational needs in the fielding of collective protection in their areas of responsibility. However, as our report clearly points out, such determinations should take place within an agreed-upon, coordinated, and integrated framework of DOD-wide installation preparedness policies and operating concepts. DOD partially concurred with our third recommendation, that the integrating authority discussed in our second recommendation also be given responsibility to oversee efforts to gain DOD-wide agreement on criteria for identifying critical facilities and for developing an overall prioritized list of critical facilities and infrastructure for funding protection improvements. The department agreed with our recommendation to assign oversight responsibility to a single integrating authority; however, it suggested that rather than develop an overall prioritized list, DOD should develop a "system" to prioritize the critical facilities for funding protective improvements. DOD stated that this "system" to prioritize facilities does not have to be a list "from 1 to n," but instead may be tiers or bands of assets based on the strategic impact if that asset were lost or degraded, using the all-hazards approach to vulnerability assessments. We agree that the identification of prioritized tiers or types/bands of assets could satisfy DOD's needs in this area, if done appropriately. However, we believe the danger with this approach is the identification of tiers or types of assets so broad as to invite continued disagreement and gridlock, leaving the situation essentially unchanged. Nonetheless, to provide the department with flexibility to implement this key action as a system, we adjusted our recommendation to reflect DOD's suggestion. DOD concurred without comment with our fourth and fifth recommendations that the Secretary of Defense direct the Joint Staff and military services to develop clear and consistent criteria to guide overarching strategic decisions on the use of collective protection; and that those offices review their current policies and develop consistent requirements on the use of collective protection at medical units and naval, ground, and air forces. As we agreed with your office, we plan no further distribution of this report until 30 days from the date of this letter. We will then send copies of this report to the Secretary of Defense, the Director of National Intelligence, and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or dagostinod@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. To examine the current assessments of chemical and biological threats to Department of Defense facilities located overseas, we interviewed intelligence officials from a variety of national and DOD intelligence organizations, and reviewed briefings and other intelligence products generated by these organizations. 
Specifically, we met with officials from the Central Intelligence Agency, Defense Intelligence Agency, and National Ground Intelligence Center and DOD intelligence officials from each of the four regional combatant commands with critical overseas facilities located in their areas of operations. During our meetings, we obtained detailed briefings and other intelligence products, which described the nature and likelihood of a chemical or biological attack on U.S. troops and installations, as well as other documents that described the capabilities of terrorist organizations and adversarial nation states. Although we could not independently verify the reliability of the information, we obtained explanations of the basis for the assessments from intelligence analysts and other officials. We also requested access to and briefings on the most recent national intelligence estimates for both chemical and biological threats from the Office of the Director of National Intelligence. Although the office provided us with the latest intelligence estimate on biological warfare, we were unable to obtain the latest national intelligence estimate on chemical warfare. At the close of our review in August 2006, the estimate remained in draft status and we were unable to schedule a briefing with officials to discuss its contents. To determine the levels of collective protection provided to critical overseas facilities, we worked with several DOD offices, first to develop the criteria needed to determine which DOD sites were considered critical and, second, to identify the type and amount of any collective protection equipment at each site. During the time of our review, DOD had not developed an overall agreed-upon methodology and listing of facilities considered to be critical. As a result, we were required to develop our own criteria for the purposes of this review. To develop these criteria, we reviewed existing DOD regulations and discussed potential options with officials from a variety of DOD offices, including the Defense Critical Infrastructure Program, the Joint Staff Office for Antiterrorism and Homeland Defense, the Joint Requirements Office, the Joint Program Manager for Collective Protection, and the Guardian Installation Protection Program office. The criteria called for DOD to identify those sites that must remain operational to complete its mission during a chemical or biological event, such as command and control nodes, rest and relief areas, emergency medical locations, and intelligence sites, and where there would be no capability to transfer the function or capability to an alternate location. The Joint Staff then assisted us by forwarding our criteria to the regional combatant commanders for the U.S. Central, European, Pacific, and Southern Commands, and requesting that they identify their critical facilities and the type and amount of any collective protection equipment currently located at those sites. Our method of quantifying the critical sites counted the number of individual buildings identified as critical sites on DOD installations, when identified separately by DOD officials. Following receipt of the responses from the combatant commands, we verified the accuracy of those lists with officials from each command. 
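The site tally described above is a simple aggregation. Below is a minimal illustrative sketch, not GAO's actual tooling, of how critical-site responses from the combatant commands could be combined to compute the share of critical sites without collective protection; the command names reflect the report, but the counts used here are hypothetical placeholders.

```python
# Illustrative sketch only: tallies hypothetical critical-site responses by combatant
# command and computes the share of critical sites lacking collective protection.
# Command names come from the report; the counts below are made-up placeholders.
from dataclasses import dataclass

@dataclass
class CommandResponse:
    command: str            # regional combatant command that submitted the response
    critical_sites: int     # individual buildings identified as critical sites
    sites_with_colpro: int  # critical sites reporting collective protection equipment

responses = [
    CommandResponse("U.S. Central Command", 40, 9),    # hypothetical counts
    CommandResponse("U.S. European Command", 35, 8),   # hypothetical counts
    CommandResponse("U.S. Pacific Command", 30, 6),    # hypothetical counts
    CommandResponse("U.S. Southern Command", 15, 2),   # hypothetical counts
]

total_sites = sum(r.critical_sites for r in responses)
unprotected = sum(r.critical_sites - r.sites_with_colpro for r in responses)

print(f"Critical sites identified: {total_sites}")
print(f"Share without collective protection: {unprotected / total_sites:.0%}")
```

Applied to the commands' actual responses, a tally of this kind yields the type of summary statistic cited in this report, such as the share of critical sites lacking collective protection.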
To determine the levels of collective protection provided to major expeditionary warfighting assets, such as ground forces, naval vessels, and aircraft, we reviewed DOD's Annual Report on Chemical and Biological Defense Programs and interviewed contractors and officials from each service component, the Tank and Automotive Command, and the Joint Program Executive Office for Chemical and Biological Defense to obtain detailed listings of the type and amount of collective protection equipment currently fielded by each service component. Once we obtained these listings, we verified the information with officials from each service and the Joint Program Executive Office. Based on these efforts and our discussions with department and military service officials, we believe that the information we obtained is sufficiently reliable for the purposes of this report. To examine DOD's framework for managing overall installation protection activities and for prioritizing critical installations for funding, we reviewed applicable regulations, policies, and prior GAO and DOD reports and interviewed officials from a variety of DOD offices responsible for program management and oversight. Specifically, we met with officials from the following offices:

Office of the Assistant Secretary of Defense for Homeland Defense
Office of the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict
Office of the Assistant to the Secretary of Defense for Nuclear and Chemical and Biological Defense Programs
Joint Program Executive Office for Chemical and Biological Defense
Joint Requirements Office for Chemical, Biological, Radiological and Nuclear Defense
Joint Staff, Anti-Terrorism/Homeland Defense
Office of the Inspector General
Regional combatant commands (Central Command, European Command, Pacific Command, and Southern Command)
Military service components (Army, Navy, Air Force, and Marine Corps)
Defense Threat Reduction Agency
U.S. Army Chemical School

We conducted our review from September 2005 through August 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, William Cawood, Assistant Director (retired); Robert Repasky, Assistant Director; Lorelei St. James, Assistant Director; Shawn Arbogast; Angela Bourciquot; Grace Coleman; Jason Jackson; John Nelson; Rebecca Shea; Karen Thornton; and Cheryl Weissman also made key contributions to this report.

Defense Management: Additional Actions Needed to Enhance DOD's Risk-Based Approach for Making Resource Decisions. GAO-06-13. Washington, D.C.: November 15, 2005.
Combating Terrorism: DOD Efforts to Improve Installation Preparedness Can Be Enhanced with Clarified Responsibilities and Comprehensive Planning. GAO-04-855. Washington, D.C.: August 12, 2004.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999.
For the military to operate in environments contaminated by chemical and biological warfare agents, the Department of Defense (DOD) has developed collective protection equipment to provide a protected environment for group activities. GAO previously reported persistent problems in providing collective protection for U.S. forces in high threat areas overseas. In this report, GAO examined (1) current intelligence assessments of chemical and biological threats, (2) the extent to which DOD has provided collective protection at critical overseas facilities and major expeditionary warfighting assets, and (3) DOD's framework for managing installation protection policies and prioritizing critical installations for funding. In conducting this review, GAO developed criteria to identify critical sites in the absence of a DOD priority listing of such sites in overseas high threat areas—areas at high risk of terrorist or missile attack. The intelligence community is struggling with the changing security environment and with communicating the uncertainties in the quality of chemical and biological threat information. Generally, the two key chemical and biological threats facing DOD forces are from hostile nations using missiles or from terrorist groups (e.g., Al Qaeda) using devices to release chemical or biological agents. DOD expects these threats to grow. The intelligence community has recognized the need to communicate more candidly about the uncertainties in intelligence regarding the type and amount of agents, the number of missiles likely armed with chemical and biological warheads, and the method of dissemination. Communicating these uncertainties helps in understanding the actual threat posed by our adversaries and in making risk management decisions on investments. However, while the intelligence community, under the Director of National Intelligence, has issued a new 2006 intelligence estimate regarding the uncertainties in the biological warfare threat, it has not issued an update on the chemical warfare threat since 2002 due to evolving assessment and communication policies. Despite the growing threat, collective protection at both critical overseas facilities and some major expeditionary warfighting assets (e.g., infantry units, naval vessels, and medical units) is limited and inconsistent. Nearly 80 percent of overseas sites identified as critical by combatant commanders, based on criteria GAO provided them, did not have collective protection equipment—including about two-thirds of the critical sites in high threat areas. At the same time, GAO found problems such as often vague and inconsistent guidance on the use of collective protection. DOD guidance encourages the use of collective protection but does not prescribe specific standards to guide strategic decisions on its use. Military service guidance, except that of the Air Force, was also vague and inconsistent on key issues such as (1) whether decisions on the need for the equipment should be left to local commanders' discretion, (2) when the various types of collective protection are most appropriate, and (3) what functions need to be protected. Thus, commanders have difficulty determining the need for collective protection. DOD's framework for managing collective protection and other related installation protection policies and activities is fragmented, which affects DOD's ability to ensure that collective protection resources are allocated efficiently and effectively. 
Prior GAO and DOD reports have highlighted continuing problems with fragmented policies and operating concepts among the many and varied programs and organizations involved. These problems result in unresolved conflict about issues, such as which critical facilities should receive priority for funding improvements, and make it difficult for DOD to balance competing warfighting and other needs and ensure that funding resources are prudently allocated. Previously, GAO and others have recommended DOD designate a single authority to integrate and coordinate installation protection policies and activities, and DOD agreed. However, despite a new ongoing reorganization, it has not yet done so.
PPACA established certain conditions governing participation in the CO-OP program. Specifically, PPACA defines a CO-OP as a health insurance issuer organized under state law as a nonprofit, member corporation of which the activities substantially consist of the issuance of qualified health plans in the individual and small group markets in the state where the CO-OP is licensed to issue such plans. PPACA prohibits organizations that were health insurance issuers on July 16, 2009, or sponsored by a state or local government, from participating in the CO-OP program. PPACA also requires that (1) governance of a CO-OP be subject to a majority vote of its members; (2) the governing documents of a CO-OP incorporate ethics and conflict of interest standards protecting against insurance industry involvement and interference; and (3) the operation of a CO-OP have a strong consumer focus, including timeliness, responsiveness, and accountability to its members. Consistent with PPACA, CMS established two types of CO-OP program loans: start-up loans and solvency loans. Start-up loans cover approved start-up costs including salaries and wages, fringe benefits, consultant costs, equipment, supplies, staff travel, and certain indirect costs. Disbursements were made according to a schedule established in the loan agreement between CMS and the loan recipient, and were contingent upon the loan recipient’s achievement of program milestones. Milestones included obtaining health insurance licensure and submitting timely reporting information in the required format. Each disbursement for a start-up loan must be repaid within 5 years of the disbursement date. Solvency loans assist CO-OPs in meeting states’ solvency and reserve requirements. CO-OPs may request disbursements of solvency loans “as needed” to meet these requirements and obligations under their loan agreement with CMS. Reasons for a CO-OP’s need for additional solvency disbursements could include enrollment growth or higher than anticipated claims from members. CO-OP requests are subject to CMS review of necessity and sufficiency. Each disbursement of a solvency loan must be repaid within 15 years of the disbursement date. PPACA appropriated $6 billion for the CO-OP program; however, a series of subsequent laws reduced the appropriation by about 80 percent and limited program participation. Specifically, in 2011, two separate appropriations acts rescinded $2.6 billion of the original CO-OP appropriation. Additionally, in January 2013, the American Taxpayer Relief Act of 2012 rescinded $2.3 billion in unobligated CO-OP program appropriations, and as a result, about $1.1 billion of the original appropriation was available for the costs associated with the $2.4 billion in loans awarded and program administration. The American Taxpayer Relief Act of 2012 transferred any remaining appropriations to a contingency fund for CMS to provide assistance and oversight to CO-OP loan awardees, which meant that no additional CO-OPs could be funded through the CO-OP program. The participation of CO-OPs in states’ health insurance exchanges has varied since their establishment: For 2014, 22 CO-OPs offered health plans on the health insurance exchanges of 22 states. One CO-OP participated in both the Iowa and the Nebraska exchanges, and two CO-OPs offered health plans on the exchange in Oregon. The CO-OP for Ohio offered plans off the exchange, but did not participate in the state’s exchange. For 2015, 22 CO-OPs offered health plans on the exchanges of 23 states. 
While the Ohio CO-OP participated in the exchange for Ohio for the first time, the CO-OP that offered plans on both the Iowa and the Nebraska exchanges withdrew from participation. In addition, the CO-OPs in Maine and Massachusetts both expanded to the New Hampshire exchange, and the CO-OP from Montana expanded to the Idaho exchange. For 2016, 11 CO-OPs continued to offer health plans on the exchanges of 13 states as of January 4, 2016. The CO-OPs that offered health plans in Arizona, Colorado, Kentucky, Louisiana, Michigan, Nevada, New York, South Carolina, Tennessee, and Utah, and one of the CO-OPs that offered health plans in Oregon, ceased operations on or before January 1, 2016. (See fig. 1.) CMS awarded the 11 CO-OPs that continued to operate as of January 4, 2016, about $1.2 billion in combined start-up and solvency loans, and awarded about the same amount to the 12 CO-OPs that ceased operations. For the 11 CO-OPs that continued to operate, CMS disbursed, as of November 2015, about $897 million (74 percent) of the CO-OP program loans awarded. Specifically, it disbursed 100 percent of the loans awarded to 2 CO-OPs, and from 57 percent to 91 percent of the loans awarded to the other 9 CO-OPs. This range primarily reflects differences in the percentage of solvency loan awards disbursed to each CO-OP, as disbursements of the start-up loan awards totaled nearly 100 percent. Disbursements of solvency loan awards to the 9 CO-OPs that received less than 100 percent of their awards ranged from 49 percent to 89 percent. For the 12 CO-OPs that ceased operations, CMS had disbursed 100 percent of the loan awards to 8 CO-OPs, while the percentage disbursed to the other 4 CO-OPs ranged from 84 percent to 98 percent. (See fig. 2.) CMS and state regulators have different, but complementary, roles for the CO-OP program. As the agency that administers the CO-OP program, CMS is responsible for interpreting statutory requirements and issuing regulations regarding CO-OP program eligibility, standards, and loan terms; soliciting and approving loan applications of qualified applicants; determining loan award amounts and negotiating the related loan agreements; establishing and updating CO-OP program policy, procedures, and guidance; approving the disbursement of loan funds to CO-OPs; and monitoring CO-OP financial controls and compliance with applicable statutory requirements and related regulations, loan agreements, and CO-OP program policy and guidance. While CMS has oversight responsibilities for the CO-OP program, state regulators have primary oversight authority of the CO-OPs as health insurance issuers. This authority includes issuing and revoking licenses to offer health plans, monitoring issuers' financial solvency and market conduct, and reviewing and approving premium rates and policy and contract forms. CMS requires CO-OPs to report to the agency any requirements from, and meetings with, state regulators regarding their oversight. In addition, according to a CMS official, the agency has coordinated oversight activities with state regulators when appropriate. PPACA established rules governing how issuers, including CO-OPs, may set premium rates. For example, while issuers may not consider gender or health status in setting premiums, issuers may consider family size, age, and tobacco use. Also, issuers may vary premiums based on areas of residence. 
States have the authority to use counties, Metropolitan Statistical Areas, zip codes, or any combination of the three in establishing geographic locations across which premiums may vary, known as rating areas. The number of rating areas per state varies, ranging from a low of 1 to a high of 67. Most states have 10 or fewer rating areas. PPACA also requires that coverage sold include certain categories of benefits at standardized levels of coverage specified by metal level—bronze, silver, gold, and platinum. Each metal level corresponds to an actuarial value—the proportion of allowable charges that a health plan, as opposed to the consumer, is expected to pay on average. Health plans within a metal level have the same actuarial value, while plans from different metal levels have different actuarial values and pay a higher or lower proportion of allowable charges. For example, a gold health plan is more generous overall than a bronze health plan. Actuarial values for health plans under PPACA range from 60 to 90 percent by metal level as follows: bronze (60 percent), silver (70 percent), gold (80 percent), and platinum (90 percent). Issuers may also offer "catastrophic" health plans to individuals under 30 and individuals exempt from the individual mandate. Catastrophic plans have actuarial values that are less than what is required to meet any of the other metal levels. Although these plans are required to cover three primary care visits and preventive services at no cost, they generally do not cover costs for other health care services until a high deductible is met. Some PPACA provisions, such as those that prohibit issuers from considering gender and health status in setting premiums and from denying coverage based on health status, reduced issuers' ability to mitigate the risk of high-cost enrollees. To limit the increased risk that issuers could face, PPACA also established three risk mitigation programs: a permanent "risk adjustment" program and two temporary programs, "reinsurance" and "risk corridors." Each of these programs uses a different mechanism intended to both improve the functioning of the health insurance markets and stabilize the premiums that issuers charge for health coverage. For example, the risk adjustment program transfers funds from issuers with lower risk enrollees to those with higher risk enrollees, and the risk corridor program transfers funds from issuers with high profits to those with high losses. Since CMS began awarding CO-OP loans, its oversight has evolved from monitoring the establishment of the CO-OPs to monitoring their performance and sustainability. CMS also refined its monitoring activities by formalizing a framework for responding to issues at specific CO-OPs, and it continues to adjust its monitoring as some CO-OPs have ceased operations. CMS's initial activities to monitor the CO-OPs, starting when it began awarding CO-OP loans in early 2012, tracked their progress in becoming health insurance issuers (for example, establishing provider networks, arranging appropriate office space, and filling key management positions) and their compliance with program requirements (for example, establishing governance subject to a majority vote of their members and incorporating ethics and conflict-of-interest standards). During this initial period, CMS established two core monitoring activities to be conducted by a CMS account manager—a primary point of contact at CMS who is responsible for the day-to-day monitoring of individual CO-OPs. 
These two core activities were routine teleconferences and standard reporting. Routine teleconferences with CO-OPs. The account manager participated in routine teleconferences with key stakeholders from each CO-OP. Key CO-OP stakeholders could have, for example, included the chief executive officer, chief financial officer, chief operating officer, or chief information officer. CMS policy initially required that these meetings take place on at least a bi-weekly basis. According to CMS officials, the frequency of these meetings varied across CO-OPs depending on the progress demonstrated by the CO-OP. Items discussed during these meetings could have, for example, included the CO-OP's implementation of its business plan or progress in achieving the milestones of its disbursement schedule, as well as any challenges, issues, concerns, and questions the CO-OP had. CMS account managers maintained documentation of these teleconferences electronically. Standard reporting. CMS required each CO-OP to submit standard reports that provide financial and other performance-related information. (See table 1.) CMS account managers tracked the timely submission and completeness of each report. Reports submitted by the CO-OP were maintained electronically for CMS officials to review, as needed. In addition, CMS hired an independent auditor to review each CO-OP's compliance with its loan agreement; key federal and state requirements, such as those related to governance of the CO-OP, the use of loan funding, and types of investments; and the documentation that supported financial reporting. CMS officials stated that these reviews were completed in 2013 and 2014. According to officials, CMS used the information obtained from these initial monitoring activities to assess loan recipients' progress in establishing start-up health insurance issuers and compliance with CO-OP program requirements. From the time loans were granted through November 2014, if there was a problem that presented a significant risk to a recipient's viability or a pattern of noncompliance with program requirements, CMS required an improvement plan. CMS policy states that an improvement plan could include (1) a corrective action plan to resolve noncompliance with program requirements or the terms and conditions of a loan agreement; (2) an enhanced oversight plan requiring stronger and more frequent CMS review of operations and financial status; (3) technical assistance to help improve performance, meet program requirements, or fulfill terms and conditions of the loan agreement; or (4) withholding of loan disbursements until milestones were achieved. According to CMS officials, the agency required improvement plans for five different CO-OPs during this time period. Officials stated that these plans generally focused on issues with meeting start-up milestones, including the CO-OP's capability to obtain licensure or comply with program requirements when establishing contractual relationships with providers or vendors for necessary services, such as information technology. As CO-OPs began enrolling members, CMS supplemented its initial monitoring activities with additional tools to evaluate CO-OP performance and sustainability. CMS also formalized a framework for responding to financial or operational issues identified at specific CO-OPs and enhanced its reporting requirements to support the newly developed tools. CMS officials told us that they expect to monitor CO-OPs that have ceased operations to the extent possible. 
CMS developed two tools that analyze enrollment data, financial data, and other information collected from the CO-OPs: Direct analysis. CMS officials developed a tool to analyze various aspects of performance, including enrollment, net income, premium revenues, claims and administrative expenses, and financial information related to risk mitigation programs and reserves. According to CMS officials, they conduct this analysis on a quarterly basis and compare the information with CO-OP projections and—when possible—with industry benchmarks. According to CMS officials, if direct analysis indicates that an individual CO-OP deviates appreciably from projections or otherwise signals a potential difficulty, then CMS officials perform additional review and analyses. CMS officials also noted that the direct analysis may, at times, be focused on particular areas of concern. For example, during 2015, CMS looked closely at the CO-OPs' expectations related to risk mitigation programs: CMS officials monitored the extent to which each CO-OP's financial projections relied on estimated payments from risk mitigation programs. CMS officials told us that because of these analyses, they were able to identify CO-OPs that would likely face increased financial difficulties when the agency announced on October 1, 2015, that issuers eligible for payments through the risk corridor program would likely receive only a portion—12.6 percent—of the total amounts they claimed. CMS officials told us that they worked with these CO-OPs to address concerns associated with these payments. Risk assessment. CMS also developed a tool to assess risk based on data collected through its established monitoring activities. CMS officials told us that they use this tool on a quarterly basis to assess risk across seven factors: 1. Long-term sustainability. CMS assesses risk based on whether a CO-OP expects to break even financially by 2017 and, if so, the extent to which the CO-OP expects to repay start-up loans while maintaining required reserve levels. CMS officials told us that although some viable CO-OPs might not expect to break even by 2017, they selected this date, in part, to provide a common basis for developing a risk score, because the first repayments of CO-OP loans are due in 2017. 2. Working capital. CMS assesses risk based on whether a CO-OP expects to generate net revenues from premiums, risk mitigation programs, or other funding sufficient to cover operating expenses over the next 12 months and, if not, the extent to which the CO-OP plans to rely on the disbursement of any remaining solvency loan funds. 3. Profitability. CMS assesses risk based on whether the CO-OP's performance is consistent with the projections in its business plan. This risk category does not measure current profitability. 4. Compliance with state requirements. CMS assesses risk based on whether a state department of insurance determined that a CO-OP was non-compliant with state requirements and, if so, the extent to which remedial action has been implemented. CMS also considers whether the CO-OP has had a history of non-compliance and the severity of any regulatory action taken by a department of insurance. 5. Compliance with CO-OP program requirements. CMS assesses risk based on whether the agency has determined that a CO-OP was non-compliant with CO-OP program loan terms and provisions and, if so, the extent to which the CO-OP has been responsive to CMS officials' requests. 
CMS also considers whether the CO-OP experienced any legal compliance issues that would affect participation in the program. 6. CO-OP management. CMS assesses risk based on whether the agency identified conflicts of interest with CO-OP management and performance concerns including high turnover, fraud, or a lack of appropriate internal controls. 7. CO-OP infrastructure issues. CMS assesses risk based on whether the agency identified concerns involving the CO-OP’s key operating systems—including claims, enrollment and billing, customer service, and utilization management. For quantitative factors included in the risk assessment, CMS officials told us they compare individual CO-OP data to benchmarks and assign a risk level (high, medium-high, medium, and low) based on the extent of deviation from the benchmarks. For qualitative factors, CMS officials told us they assign CO-OPs a risk level based on responses to a standard set of questions completed by account managers. To help ensure the most current data are available to be used in the direct analysis and risk assessment tools, CMS enhanced certain reporting requirements associated with the core monitoring activities it previously established. While the agency continues to require routine teleconferences with CO-OPs and standard reporting, CMS enhanced its initial reporting requirements to include submission of enrollment and selected financial data on a monthly basis rather than on a quarterly basis. CMS also now requires CO-OPs to provide certain financial projections quarterly rather than annually. To respond to issues identified at individual CO-OPs using the direct analysis and risk assessment tools, as well as its other monitoring activities, in November 2014, CMS formally established a framework, known as an escalation plan, for evaluating and responding to concerns. The identification of an issue at a CO-OP is the first of four steps described in the written guidance for establishing and implementing the escalation plan. (See fig. 3.) Issue identification. CMS initiates the escalation plan when the agency identifies an issue of potential concern at a CO-OP. Identification may be based on information obtained through a variety of sources, including internal channels (e.g., the core monitoring activities, direct analysis, and risk assessments described above) and external channels (e.g., communication with state regulators). Issue assessment. A CMS account manager conducts a preliminary assessment of the severity, urgency, and nature of the identified issue. Using a standard set of questions, the account manager assesses the issue in light of five sets of considerations: (1) whether the issue was self- reported by the CO-OP and the frequency with which the CO-OP experienced the same or other issues, (2) the potential impact on the CO-OP’s state licensure and exchange participation, (3) the potential impact on the CO-OP’s approved business plan, (4) the potential impact on the CO-OP’s compliance with program requirements, and (5) the potential impact on the CO-OP’s members and markets where it participates. Answers to questions about these considerations result in a score that indicates whether the issue’s severity and urgency is of minor, moderate, elevated, or greatest concern. The account manager then refers the preliminary assessment for review and approval by other CMS officials, including a team that has responsibility for evaluating CO-OP program integrity. Enforcement action. 
CMS determines an enforcement action based on the final assessment of the issue as being of minor, moderate, elevated, or greatest concern. Enforcement actions generally require a corresponding response from the CO-OP to resolve the issue. If the CO-OP's response to an enforcement action does not result in an acceptable resolution to an issue, the agency may elevate the assessment to a higher level and require additional responses from the CO-OP. Minor. CMS communicates with CO-OP officials to resolve the issue and prevent a recurrence. Examples of issues that might be assessed as minor—if no other issues were identified—would be challenges in submitting a required report or a divergence of less than 20 percent between the CO-OP's actual enrollment and its most recently projected enrollment. Moderate. CMS sends a formal written notice of the issue, known as a warning letter, to CO-OPs that have an issue assessed as a moderate concern. In response, CO-OP officials are required to submit evidence of the development and implementation of a plan to resolve the issue. As of November 9, 2015, CMS had issued warning letters to 11 CO-OPs, of which 7 continue to operate. According to CMS officials, issues for which CMS issued warning letters included the execution of a contract that is core to the CO-OP's business activity (e.g., a contract for a top executive) without the requisite prior CMS approval and the submission of incomplete data for one of the risk mitigation programs. Elevated. CMS sends CO-OPs a formal written notice that a corrective action plan is required, an enhanced oversight plan will be implemented, or both. According to CMS officials, they generally require the CO-OP to develop a corrective action plan when they determine that the CO-OP can take action to address the issue and that the action and its effect can be documented; the corrective action plan is subject to CMS approval and monitoring. CMS officials implement an enhanced oversight plan when the issue is urgent or has the potential to become more severe. In response to an enhanced oversight plan, a CO-OP may be required to submit additional reports or may be subjected to additional audits. As of November 9, 2015, CMS had required corrective action plans or implemented enhanced oversight plans (or both) for 15 CO-OPs, of which 8 continue to operate in 2016. Issues for which these were required include CO-OPs failing to comply with state laws and experiencing high enrollment and significant losses. CMS noted that some of the corrective action plans and enhanced oversight plans were the result of unresolved issues that required stronger enforcement actions. Greatest. CMS sends CO-OPs a formal written notice, and if a corrective action plan and/or enhanced oversight plan cannot resolve the issue, CMS may consider terminating the CO-OP from the program or taking other enforcement measures, such as withholding loan disbursements. As of November 9, 2015, CMS officials had identified an issue of greatest concern at two CO-OPs. For one CO-OP, it required a corrective action plan, and for the other CO-OP, it issued a termination letter. CMS officials noted that these two CO-OPs had issues involving serious and pervasive management problems or financial losses substantial enough to question the CO-OP's sustainability. Both CO-OPs ceased operations on or before January 1, 2016. Resolution. 
CMS monitors the CO-OP’s progress for resolving an identified issue through status calls, additional reporting requirements, or other actions as appropriate. For some issues determined to be of elevated or greatest concern, CMS may conduct an on-site visit. If CMS determines that an issue has been resolved, CMS returns to a more routine level of monitoring, mindful of the history that the CO-OP had with the issue. If the problem is not resolved, or if the process of investigating an issue reveals other issues, CMS can re-assess the issue and take further actions, and it has done so with several CO-OPs. As already noted, CMS may ultimately determine that a satisfactory resolution is not likely and therefore pursue the option to terminate its loan agreement with the CO-OP. As of November 1, 2015, CMS had issued one termination letter following use of the escalation plan. Escalation Plan Case Study: Louisiana Health Cooperative, Inc. CMS officials learned in December 2014, through routine communication with the CO-OP and the Louisiana Department of Insurance (LDI), that LDI was preparing to notify the CO-OP that it had been found in a condition that would render continuance of its business hazardous to policyholders, creditors, or others. CMS had previously noted certain risks with the CO-OP’s finances. CMS assessed the issue as an elevated concern and issued a letter in January 2015 requiring the CO-OP to provide information and a corrective action plan. The CO-OP responded in February 2015, citing problems with its third-party administrator—an entity with which the CO-OP had contracted to process claims—and describing its corrective action plan. CMS determined that the plan was not sufficient and issued a letter in March 2015 requesting revisions. The CO-OP submitted a revised corrective action plan, which CMS officials also found insufficient. Meanwhile, in response to LDI, the CO-OP submitted updated enrollment and financial data, which led CMS to question whether enrollment was sufficient for financial stability. CMS issued another letter in April 2015, asking for information and a corrective action plan to address these issues and stating that CMS would conduct a site visit. During that visit, CMS officials observed a number of serious and pervasive deficiencies. In response, CMS reassessed the issue as one of greatest concern and issued a letter in June 2015, summarizing its findings and stating that a complete and quick resolution was necessary to avoid termination of the loan agreement; the letter included specific milestones and dates. The CO-OP’s board met in July and decided to cease operations by the end of 2015. According to CMS officials, the agency continues to monitor and oversee the CO-OP as the CO-OP and LDI work to cease operations with as few negative consequences as possible. In addition to developing the tools to evaluate performance and sustainability and the escalation plan, CMS formed a committee that, according to CMS officials, is to look at the CO-OP program as a whole— beyond individual issues or CO-OPs. The committee is to identify and address risks to, and concerns about, the program and make recommendations to address any risks or concerns identified. CMS officials told us that the committee consists of officials from across the agency with actuarial, health insurance, financial, legal, and health insurance exchange experience and expertise. 
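To make the escalation plan's progression from assessment to enforcement concrete, the sketch below maps an issue-assessment score to the four concern levels and the general response associated with each. The concern levels and responses are drawn from the report; the numeric scale and thresholds are illustrative assumptions, not CMS's actual scoring rules.

```python
# Illustrative sketch only: maps a preliminary issue-assessment score to the escalation
# plan's four concern levels and the general enforcement response described in the report.
# The levels and responses come from the report; the 0-4 scale and thresholds are assumed.

ENFORCEMENT_BY_LEVEL = {
    "minor":    "Communicate with CO-OP officials to resolve the issue and prevent recurrence",
    "moderate": "Issue a warning letter; the CO-OP must show a plan to resolve the issue",
    "elevated": "Require a corrective action plan and/or implement an enhanced oversight plan",
    "greatest": "Consider termination from the program or other measures, such as withholding disbursements",
}

def concern_level(score: int) -> str:
    """Translate an assessment score into a concern level (thresholds are assumed)."""
    if score < 5:
        return "minor"
    if score < 10:
        return "moderate"
    if score < 15:
        return "elevated"
    return "greatest"

# Hypothetical answers across the five consideration areas, each scored 0 (no concern)
# to 4 (severe concern); this scale is assumed, not CMS's.
answers = {
    "self_reported_and_frequency": 1,
    "licensure_and_exchange_impact": 3,
    "business_plan_impact": 2,
    "program_compliance_impact": 2,
    "member_and_market_impact": 3,
}

score = sum(answers.values())
level = concern_level(score)
print(f"Score {score} -> {level}: {ENFORCEMENT_BY_LEVEL[level]}")
```

In practice, as the report notes, the account manager's preliminary assessment is reviewed and approved by other CMS officials before an enforcement action is taken.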
CMS is also using an independent auditor to conduct another review of CO-OPs, focusing on compliance and financial management. A preliminary audit phase was conducted to determine whether each CO-OP had established and documented controls and processes for five key areas, in accordance with the NAIC Market Conduct Examination Standards: (1) claims, (2) policyholder service, (3) complaint handling, (4) provider credentialing, and (5) marketing and sales. Based on the results of the preliminary phase, the auditor is to perform one of two types of reviews—a general review or a focused review—at each CO-OP; a more focused review is to be performed at CO-OPs that did not appear to have initially met the NAIC Market Conduct Examination Standards. CMS officials told us that the preliminary phase was completed in June 2015, and that the second phase is on-going and is expected to be completed by the middle of 2016 for the 11 CO-OPs that continued to operate as of January 4, 2016. CMS officials told us that prior to the start of the 2016 open enrollment period, they assessed the CO-OPs with particular attention to their sustainability through 2016. According to CMS officials, they worked with CO-OPs and states’ departments of insurance to address concerns relating to CO-OP sustainability. The goal of these efforts was to provide some assurance that CO-OPs with serious financial or operational difficulties (or both) took timely and effective action to address those difficulties or made plans to cease operations before the 2016 open enrollment period, which began on November 1, 2015. In addition, CMS officials told us that, to the extent possible, they plan to monitor CO-OPs that have ceased operations. When a CO-OP closes, the state’s department of insurance takes the lead responsibility in winding down operations. CMS officials told us that their goal is to work with the CO-OPs and their states’ departments of insurance to bring operations to an end in a way that minimizes negative effects on members, as well as to recover program loan funding to the extent possible. Our analysis showed that in most of the 20 states where CO-OPs offered health plans on the exchange during both the 2014 and 2015 open enrollment periods, the state-wide average monthly premium for a 30-year-old individual to purchase a CO-OP silver health plan was lower for 2015 than for the previous year. Specifically, there were 14 states where the state-wide average monthly premium for silver plans offered by CO-OPs decreased, with decreases ranging from $1.47 per month in Kentucky to $180.44 per month in Arizona. In 9 of these states, the decrease in the state-wide average premium was more than $30 per month. Of the 6 states where the state-wide average premium for silver plans offered by CO-OPs increased, the increases did not exceed $20 per month. As table 2 shows, the pattern of changes in average premiums for CO-OPs that continued to operate as of January 4, 2016, is similar to the pattern of change for CO-OPs that have ceased operations. Of the 11 states where CO-OPs no longer operate, 5 had decreases in the CO-OP’s average monthly premium of more than $30, while the other 6 had increases or decreases less than $30. In the 10 states where CO-OPs continued to operate as of January 4, 2016, 4 had decreases in the CO-OP’s average monthly premium of more than $30, while the other 6 had increases or decreases of less than $30. 
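As a concrete illustration of the comparison just described, the sketch below computes a state-wide average monthly premium for a CO-OP's silver plans in 2014 and 2015 and the resulting change. The rating areas and premium amounts are invented placeholders rather than the report's data, and whether the underlying analysis weights rating areas (for example, by enrollment) is not specified here, so the unweighted average is itself an assumption; the report's actual figures appear in table 2 and the appendixes.

```python
# Illustrative sketch only: computes state-wide average monthly silver-plan premiums for
# a hypothetical CO-OP in 2014 and 2015 and the year-over-year change. All premium values
# below are invented placeholders; the report's figures are in table 2 and the appendixes.
from statistics import mean

# Hypothetical monthly premiums for a 30-year-old individual, by rating area and year.
silver_premiums = {
    2014: {"Rating area 1": 285.00, "Rating area 2": 301.00, "Rating area 3": 279.00},
    2015: {"Rating area 1": 262.00, "Rating area 2": 270.00, "Rating area 3": 255.00},
}

avg_2014 = mean(silver_premiums[2014].values())
avg_2015 = mean(silver_premiums[2015].values())
change = avg_2015 - avg_2014

print(f"2014 state-wide average: ${avg_2014:.2f}")
print(f"2015 state-wide average: ${avg_2015:.2f}")
print(f"Change: {'decrease' if change < 0 else 'increase'} of ${abs(change):.2f} per month")
```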
For 2016, the state-wide average premiums for silver health plans increased from 2015 in 8 of 10 states where CO-OPs continue to operate. (See appendixes II through XIV for more details on the range of premiums in 2014, 2015, and 2016 for silver health plans in the states where CO-OPs continued to operate as of January 4, 2016.)

In the 23 states where CO-OPs offered health plans on the states' health insurance exchanges in 2015, our analysis showed that the average monthly premiums for CO-OP health plans in all tiers were lower than the average monthly premiums for other health plans for 30-year-old individuals in most rating areas. CO-OPs offered bronze, silver, and gold tier health plans in 94 percent of the rating areas where they offered plans; they offered catastrophic and platinum tier health plans in fewer rating areas. For all five tiers, the average premiums for CO-OP health plans were lower than the average premiums for other health plans in more than 75 percent of rating areas where both a CO-OP and at least one other issuer offered health plans. (See fig. 4.) As shown in figure 4, the average monthly premiums for CO-OP health plans in all tiers were lower than for other issuers in a higher percentage of rating areas in 2015 than in 2014. Moreover, the number of rating areas where a CO-OP and at least one other issuer offered health plans, and the number of rating areas where the average monthly CO-OP premium was lower than the average monthly premium from other issuers, both increased from 2014 to 2015. As shown in figure 5, we found this same pattern of premiums when we restricted our analysis to the states where CO-OPs continued to operate as of January 4, 2016.

Although average CO-OP premiums for 30-year-old individuals were lower than those of other insurers in most rating areas, the percentage of rating areas where we found this difference varied substantially across states for silver health plans. In 10 states, the average monthly premium for CO-OP silver plans was lower than for other silver plans in 100 percent of the states' rating areas. Of these 10 states, CO-OPs continued to operate in 7 as of January 4, 2016. In two states where the CO-OPs did not offer silver plans in each rating area, but continued to operate, the average premiums for CO-OPs were lower than for other issuers in all of the rating areas where the CO-OPs offered silver health plans. For five states, the average premium for CO-OP silver health plans was equal to or higher than for other silver plans in 50 percent of the rating areas or more. The percentage of rating areas where the average premium for CO-OP silver plans was equal to or higher than for other silver plans tended to be higher in the 11 states where CO-OPs no longer operate than in those where CO-OPs continued to operate as of January 4, 2016. (See fig. 6 and appendixes II through XIV for more details on how the CO-OPs were priced in relation to other health plans in each of the states where CO-OPs continued to operate as of January 4, 2016.)

The 22 CO-OPs that participated in the 2015 open enrollment period together reported, as of June 30, 2015, enrollment of over 1 million—more than double the total enrollment reported at the same time the previous year. Specifically, the 22 CO-OPs gained 610,420 net new members, with all but one CO-OP experiencing an increase in enrollment.
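The enrollment comparisons that follow set reported enrollment against each CO-OP's own projections and against a 25,000-member benchmark CMS cites. The sketch below shows one way to make those comparisons; the CO-OP names and figures are invented for illustration and are not actual program data.

```python
# Hypothetical enrollment as of June 30, 2015, versus each CO-OP's projection;
# names and numbers are illustrative only.
enrollment = {"CO-OP A": 38_000, "CO-OP B": 12_500, "CO-OP C": 27_000}
projected  = {"CO-OP A": 30_000, "CO-OP B": 18_000, "CO-OP C": 25_000}

total_actual = sum(enrollment.values())
total_projected = sum(projected.values())
print(f"Combined enrollment vs. projections: "
      f"{(total_actual / total_projected - 1) * 100:+.1f}%")

BENCHMARK = 25_000  # enrollment level CMS cites as helping cover fixed costs
for co_op, members in sorted(enrollment.items()):
    print(f"{co_op}: met projection: {members >= projected[co_op]}, "
          f"reached benchmark: {members >= BENCHMARK}")
```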
The 11 CO-OPs that continued to operate as of January 4, 2016, reported enrollment of about 391,855 in 2015—representing about 38 percent of the combined CO-OP enrollment. Increases in enrollment for these 11 CO-OPs ranged from 11,139 to 56,889. The 3 CO-OPs that reported the largest enrollment as of June 30, 2015, are among those CO-OPs that no longer operate. (See table 3.)

Overall, our analysis showed that CO-OPs' combined enrollment for 2015 exceeded their projections by more than 6 percent, but half of the CO-OPs did not meet or exceed their individual projections. As figure 7 shows, of the 11 CO-OPs that have ceased operations, 6 did not meet their individual enrollment projections, while 5 exceeded their projections. Further, of the 11 CO-OPs that continued to operate as of January 4, 2016, 6 exceeded their 2015 enrollment projections by June 30, 2015. (See fig. 8.) Our analysis, however, also found that 4 CO-OPs had not yet reached a program benchmark of enrolling at least 25,000 members. According to CMS officials, exceeding this benchmark can be important for CO-OPs, because that number of enrollees should better allow a health insurance issuer to cover its fixed costs. CMS officials told us that they are monitoring the CO-OPs' enrollment with attention to this benchmark.

We provided a draft of this report to HHS for comment. In its written comments, which appear in appendix XV, HHS stated its commitment to CO-OP beneficiaries and taxpayers in managing the CO-OP program, noted the achievements of the CO-OP program to date, and described developments in the department's oversight activities. In addition, HHS stated its goal to help facilitate the acquisition of additional capital or the development of other business relationships that could assist those CO-OPs that continue to operate in achieving their goals and described its efforts to support them. HHS also provided technical comments, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XVI.

The Centers for Medicare & Medicaid Services awarded consumer operated and oriented plan (CO-OP) program loans totaling more than $2.4 billion, of which about $358 million was awarded for start-up loans and about $2.1 billion was awarded for solvency loans. Table 4 provides the total amounts awarded to each of the 23 CO-OPs established with funds disbursed under the CO-OP program loans. As of January 4, 2016, 11 CO-OPs continued to operate, while 12 CO-OPs had ceased operations.

The state-wide average monthly premium for the consumer operated and oriented plan's (CO-OP) silver health plans for 30-year-old individuals in Connecticut decreased from 2014 to 2015, but increased from 2015 to 2016. Specifically, the average decrease from 2014 to 2015 was about $33, and the average increase from 2015 to 2016 was about $32. (See table 5.)
For 2015, the CO-OP in Connecticut offered catastrophic, bronze, silver, and gold health plans in each of the state's eight rating areas, but did not offer a platinum health plan. Figure 9 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for health plans offered by the CO-OP in Connecticut were generally among the most expensive premiums for catastrophic health plans. For gold health plans, the CO-OP's premiums were among the least expensive or in the middle. The CO-OP's premiums for bronze and silver health plans were among the least expensive premiums in some rating areas, while ranging from the middle to among the most expensive premiums in others.

The consumer operated and oriented plan (CO-OP) from Montana offered health plans on the Idaho health insurance exchange for the first time in 2015. The state-wide average monthly premium for CO-OP silver health plans for 30-year-old individuals increased from 2015 to 2016. Specifically, the average increase was about $57. (See table 6.) For 2015, the CO-OP in Idaho offered catastrophic, bronze, silver, and gold health plans in each of the state's seven rating areas, but offered platinum health plans in only three. Figure 10 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for health plans offered by the CO-OP in Idaho were generally in the middle, with premiums in some rating areas ranging from the least expensive to the middle.

The state-wide average monthly premium for the consumer operated and oriented plan's (CO-OP) silver health plans for 30-year-old individuals in Illinois decreased from 2014 to 2015, but increased from 2015 to 2016. Specifically, the average decrease from 2014 to 2015 was about $80, and the average increase from 2015 to 2016 was about $61. (See table 7.) For 2015, the CO-OP in Illinois offered bronze, silver, and gold health plans in each of the state's 13 rating areas. The CO-OP offered platinum health plans in three rating areas, but did not offer any catastrophic health plans. Figure 11 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for health plans offered by the CO-OP in Illinois tended to be among the least expensive or in the middle.

The state-wide average monthly premium for the consumer operated and oriented plan's (CO-OP) silver health plans for 30-year-old individuals in Maine increased from 2014 to 2015, but decreased slightly from 2015 to 2016. Specifically, the average increase from 2014 to 2015 was about $8, and the average decrease from 2015 to 2016 was about $1. (See table 8.) For 2015, the CO-OP in Maine offered catastrophic, bronze, silver, and gold health plans in each of the state's four rating areas, but did not offer a platinum health plan. Figure 12 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for catastrophic, silver, and bronze health plans offered by the CO-OP in Maine were among the most expensive in some rating areas, the least expensive in some, and in the middle in others. Premiums for gold health plans were among the least expensive.
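The figures referenced in these state appendixes report where CO-OP premiums fall after rank-ordering all plans in a rating area. The sketch below shows one plausible way to compute such a percentile placement; the premiums are hypothetical, and the exact percentile convention GAO used is not specified here.

```python
from bisect import bisect_left

def percentile_rank(premium, all_premiums):
    """Percentile placement of one premium among all plans in a rating area,
    after rank-ordering the premiums from least to most expensive."""
    ordered = sorted(all_premiums)
    position = bisect_left(ordered, premium)
    return 100 * position / (len(ordered) - 1) if len(ordered) > 1 else 0.0

# Hypothetical monthly premiums for 30-year-olds in one rating area and tier.
rating_area_premiums = [212.40, 225.10, 238.75, 251.00, 263.30, 289.90]
co_op_premium = 225.10
print(f"The CO-OP plan falls at roughly the "
      f"{percentile_rank(co_op_premium, rating_area_premiums):.0f}th percentile")
```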
The state-wide average monthly premium for the consumer operated and oriented plan's (CO-OP) silver health plans for 30-year-old individuals in Maryland decreased from 2014 to 2015, but increased from 2015 to 2016. Specifically, the average decrease from 2014 to 2015 was about $33, and the average increase from 2015 to 2016 was about $18. (See table 9.) For 2015, the CO-OP in Maryland offered bronze, silver, gold, and platinum health plans in each of the state's four rating areas, but did not offer catastrophic health plans. Figure 13 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for health plans offered by the CO-OP in Maryland were generally in the middle.

The state-wide average monthly premium for the consumer operated and oriented plan's (CO-OP) silver health plans for 30-year-old individuals in Massachusetts decreased from 2014 to 2015, but increased from 2015 to 2016. Specifically, the average decrease from 2014 to 2015 was about $19, and the average increase from 2015 to 2016 was about $7. (See table 10.) For 2015, the CO-OP in Massachusetts offered plans in all tiers in five of the state's seven rating areas. Figure 14 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for health plans offered by the CO-OP in Massachusetts were among the least expensive across all tiers and rating areas.

The state-wide average monthly premium for the consumer operated and oriented plan's (CO-OP) silver health plans for 30-year-old individuals in Montana decreased from 2014 to 2015, but increased from 2015 to 2016. Specifically, the average decrease from 2014 to 2015 was about $17, and the average increase from 2015 to 2016 was about $75. (See table 11.) For 2015, the CO-OP in Montana offered plans in all tiers in each of the state's four rating areas. Figure 15 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for catastrophic health plans offered by the CO-OP in Montana were generally in the middle. The CO-OP's premiums were among the least expensive premiums or in the middle for silver, gold, and platinum plans. CO-OP premiums for bronze plans ranged from among the least to the most expensive.

The consumer operated and oriented plans (CO-OPs) from Maine and Massachusetts both offered health plans on the New Hampshire health insurance exchange for the first time in 2015. The state-wide average premium for CO-OP silver health plans for 30-year-old individuals increased from 2015 to 2016. Specifically, the average increase was about $33. (See table 12.) For 2015, CO-OPs in New Hampshire offered health plans in all tiers except for platinum in the state's single rating area. Figure 16 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in the state's single rating area. The premiums for health plans offered by the two CO-OPs in New Hampshire varied widely. CO-OP premiums for bronze, silver, and gold health plans ranged from the least to the most expensive. Premiums for catastrophic plans ranged from the middle to the most expensive.
The state-wide average monthly premium for the consumer operated and oriented plan's (CO-OP) silver health plans for 30-year-old individuals in New Jersey decreased from 2014 to 2015, but increased from 2015 to 2016. Specifically, the average decrease from 2014 to 2015 was about $71, and the average increase from 2015 to 2016 was about $54. (See table 13.) For 2015, the CO-OP in New Jersey offered a health plan in all tiers in the state's single rating area. Figure 17 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in the state's single rating area. The premiums for the health plans offered by the CO-OP in New Jersey were among the less expensive premiums for bronze and silver health plans and in the middle for catastrophic plans. CO-OP premiums for gold and platinum health plans ranged from among the least to the most expensive.

Appendix X: Premiums for the Consumer Operated and Oriented Plan Relative to Premiums for Other Health Plans in New Jersey

Rating area 1 includes Atlantic, Bergen, Burlington, Camden, Cape May, Cumberland, Essex, Gloucester, Hudson, Hunterdon, Mercer, Middlesex, Monmouth, Morris, Ocean, Passaic, Salem, Somerset, Sussex, Union, and Warren counties.

The state-wide average monthly premium for the consumer operated and oriented plan's (CO-OP) silver health plans for 30-year-old individuals in New Mexico decreased from 2014 to 2015 and decreased again from 2015 to 2016. Specifically, the average decrease from 2014 to 2015 was about $9, and the average decrease from 2015 to 2016 was about $7. (See table 14.) For 2015, the CO-OP in New Mexico offered catastrophic, bronze, silver, and gold health plans in each of the state's five rating areas, but did not offer a platinum health plan. Figure 18 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for silver and gold health plans offered by the CO-OP in New Mexico varied widely, ranging from among the least to the most expensive premiums. CO-OP premiums were often among the less expensive premiums for bronze health plans, and were generally in the middle for catastrophic plans.

Appendix XI: Premiums for the Consumer Operated and Oriented Plan Relative to Premiums for Other Health Plans in New Mexico

Rating area 1 includes Bernalillo, Sandoval, Torrance, and Valencia counties. Rating area 2 includes San Juan County. Rating area 3 includes Doña Ana County. Rating area 4 includes Santa Fe County. Rating area 5 includes Catron, Chaves, Cibola, Colfax, Curry, DeBaca, Eddy, Grant, Guadalupe, Harding, Hidalgo, Lea, Lincoln, Los Alamos, Luna, McKinley, Mora, Otero, Quay, Rio Arriba, Roosevelt, San Miguel, Sierra, Socorro, Taos, and Union counties.

The consumer operated and oriented plan (CO-OP) in Ohio offered health plans on the state's exchange for the first time in 2015. The state-wide average monthly premium for CO-OP silver health plans for 30-year-old individuals increased from 2015 to 2016. Specifically, the average increase from 2015 to 2016 was about $43. (See table 15.)
For 2015, the CO-OP in Ohio offered catastrophic, bronze, silver, and gold health plans in each of the state's 17 rating areas, but did not offer a platinum health plan. Figure 19 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for health plans offered by the CO-OP in Ohio were often in the middle or among the most expensive premiums.

Appendix XII: Premiums for the Consumer Operated and Oriented Plan Relative to Premiums for Other Health Plans in Ohio

Rating area 1 includes Defiance, Fulton, Henry, Lucas, Williams, and Wood counties. Rating area 2 includes Allen, Auglaize, Hancock, Hardin, Mercer, Paulding, Putnam, and Van Wert counties. Rating area 3 includes Champaign, Clark, Darke, Greene, Miami, Montgomery, Preble, and Shelby counties. Rating area 4 includes Butler, Hamilton, and Warren counties. Rating area 5 includes Adams, Brown, Clermont, Clinton, and Highland counties. Rating area 6 includes Erie, Huron, Ottawa, Sandusky, Seneca, and Wyandot counties. Rating area 7 includes Crawford and Richland counties. Rating area 8 includes Marion and Morrow counties. Rating area 9 includes Delaware, Fairfield, Fayette, Franklin, Knox, Licking, Logan, Madison, Pickaway, and Union counties. Rating area 10 includes Gallia, Jackson, Lawrence, Pike, Ross, Scioto, and Vinton counties. Rating area 11 includes Ashtabula, Cuyahoga, Geauga, Lake, and Lorain counties. Rating area 12 includes Ashland, Medina, Portage, and Summit counties. Rating area 13 includes Columbiana, Mahoning, and Trumbull counties. Rating area 14 includes Holmes and Wayne counties. Rating area 15 includes Carroll and Stark counties. Rating area 16 includes Belmont, Coshocton, Guernsey, Harrison, Jefferson, Monroe, Morgan, Muskingum, Noble, Perry, and Tuscarawas counties. Rating area 17 includes Athens, Hocking, Meigs, and Washington counties.

The state-wide average monthly premium for the two consumer operated and oriented plans' (CO-OPs') silver health plans for 30-year-old individuals in Oregon increased from 2014 to 2015 and, for the one CO-OP that continued to operate in 2016, increased again from 2015 to 2016. Specifically, the average increase from 2014 to 2015 was about $1, and the average increase from 2015 to 2016 was about $54. (See table 16.) For 2015, the CO-OP in Oregon that continued to operate as of January 4, 2016, offered catastrophic, bronze, silver, and gold health plans in each of the state's seven rating areas, but offered no platinum health plans. Figure 20 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for bronze and silver health plans offered by the CO-OP varied widely, ranging from among the least to the most expensive premiums. The premiums for gold health plans tended to be in the middle or among the most expensive premiums, except in rating area 1.

Appendix XIII: Premiums for the Consumer Operated and Oriented Plans Relative to Premiums for Other Health Plans in Oregon

Rating area 1 includes Clackamas, Multnomah, Washington, and Yamhill counties. Rating area 2 includes Benton, Lane, and Linn counties. Rating area 3 includes Marion and Polk counties.
Rating area 4 includes Deschutes, Klamath, and Lake counties. Rating area 5 includes Columbia, Coos, Curry, Lincoln, and Tillamook counties. Rating area 6 includes Crook, Gilliam, Grant, Harney, Hood River, Jefferson, Malheur, Morrow, Sherman, Umatilla, Union, Wallowa, Wasco, and Wheeler counties. Rating area 7 includes Douglas, Jackson, and Josephine counties.

The state-wide average monthly premium for the consumer operated and oriented plan's (CO-OP) silver health plans for 30-year-old individuals in Wisconsin increased from 2014 to 2015 and increased again from 2015 to 2016. Specifically, the average increase from 2014 to 2015 was about $19, and the average increase from 2015 to 2016 was about $25. (See table 17.) For 2015, the CO-OP in Wisconsin offered catastrophic, bronze, silver, and gold health plans in 6 of the state's 16 rating areas, but did not offer a platinum health plan. Figure 21 shows the percentile range in which CO-OP monthly premiums for 30-year-old individuals fell after rank-ordering all plans in each rating area. The premiums for catastrophic, silver, and gold health plans offered by the CO-OP in Wisconsin varied widely, ranging from among the least to the most expensive. The premiums for bronze health plans tended to be among the least expensive premiums.

Appendix XIV: Premiums for the Consumer Operated and Oriented Plan Relative to Premiums for Other Health Plans in Wisconsin

Rating area 1 includes Milwaukee County. Rating area 2 includes Dane County. Rating area 3 includes Polk, Pierce, and St. Croix counties. Rating area 4 includes Chippewa, Dunn, Eau Claire, and Pepin counties. Rating area 5 includes Ashland, Bayfield, Burnett, Douglas, Sawyer, and Washburn counties. Rating area 6 includes Buffalo, Jackson, La Crosse, Monroe, and Trempealeau counties. Rating area 7 includes Crawford, Grant, Iowa, Lafayette, and Vernon counties. Rating area 8 includes Clark, Price, Rusk, and Taylor counties. Rating area 9 includes Racine and Kenosha counties. Rating area 10 includes Lincoln, Marathon, Portage, and Rusk counties. Rating area 11 includes Calumet, Dodge, Fond du Lac, Sheboygan, and Winnebago counties. Rating area 12 includes Ozaukee, Washington, and Waukesha counties. Rating area 13 includes Florence, Forest, Iron, Langlade, Oneida, and Vilas counties. Rating area 14 includes Columbia, Green, Jefferson, Rock, and Walworth counties. Rating area 15 includes Adams, Green Lake, Juneau, Marquette, Richland, and Sauk counties. Rating area 16 includes Brown, Door, Kewaunee, Manitowoc, Menominee, Oconto, and Shawano counties.

John E. Dicken, (202) 512-7114 or dickenj@gao.gov. In addition to the contact named above, Robert Copeland, Assistant Director; Kristen Joan Anderson; Sandra George; Giselle Hicks; Aaron Holling; and Drew Long made key contributions to this report.
The Patient Protection and Affordable Care Act established the CO-OP program and provided loans that helped create 23 CO-OPs to offer qualified health plans to individuals and small employers. While the program seeks to increase competition and improve accountability to members, questions have arisen about their long-term sustainability and their effects on health insurance markets, particularly as 12 CO-OPs ceased operations on or before January 1, 2016. In April 2015, GAO issued its first report examining the status of CO-OP premiums, enrollment, and program loans in 2014 ( GAO-15-304 ). As one CO-OP ceased operations in early 2015, GAO was asked to review the CO-OP program again. This report examines (1) how CMS monitors the CO-OPs' performance and sustainability; (2) how CO-OP premiums changed from 2014 to 2015, and in 2015, how they compared to premiums for other health plans; and (3) how CO-OP enrollment changed from 2014 to 2015, and in 2015, how it compared to projections. GAO analyzed 2014 and 2015 premium and enrollment data from CMS, states, and the National Association of Insurance Commissioners; and reviewed applicable regulations, policies, procedures, and documentation of CMS monitoring activities. GAO also interviewed CMS officials. In commenting on a draft of this report, the Department of Health and Human Services stated its commitment to CO-OP beneficiaries and taxpayers, and provided technical comments, which GAO incorporated as appropriate. The Centers for Medicare & Medicaid Services' (CMS) monitoring of the consumer governed, nonprofit health insurance issuers—known as consumer operated and oriented plans (CO-OPs)—evolved as the CO-OP program matured, and as 12 of the 23 CO-OPs ceased operations on or before January 1, 2016. CMS's initial monitoring activities, starting when it began to award CO-OP program loans in early 2012, focused on the CO-OPs' progress as start-up issuers and their compliance with program requirements. Since then, CMS refined and expanded its monitoring to evaluate CO-OP performance and sustainability. CMS officials use enrollment and financial data to identify CO-OPs for which actual performance differed substantially from what was expected. CMS officials also perform routine assessments of each CO-OP's risk in various areas, such as working capital and management. To evaluate and respond to financial or operational issues identified at CO-OPs, CMS formalized a framework that it called an escalation plan. Under this plan, CMS may require that a CO-OP take corrective actions or the agency may implement an enhanced oversight plan based on its evaluation of the issue. As of November 2015, CMS used its escalation plan to evaluate and respond to issues at 18 CO-OPs, including 9 of the CO-OPs that have ceased operations. CMS officials told GAO that they plan to work with states' departments of insurance to continue monitoring CO-OPs that have ceased operations to the extent possible in order to minimize any negative impact on members and, if possible, recover loans made through the program. GAO found that in 14 of the 20 states where CO-OPs offered health plans during both 2014 and 2015, the average CO-OP premiums for 30-year-old individuals purchasing silver health plans—the most commonly selected plan—were lower in 2015 than the average premiums for such plans in 2014. 
In the 23 states where CO-OPs offered health plans during 2015, the average premiums for all CO-OP health plans were lower than those for other issuers in more than 75 percent of rating areas—geographical areas established by states and used, in part, by issuers to set premium rates. Across the 23 states, average silver health plan premiums were lower for CO-OPs than other issuers in 31 percent to 100 percent of rating areas. In addition, GAO found that the combined enrollment for the 22 CO-OPs that offered health plans in 2015 was over 1 million as of June 30, 2015, more than double the enrollment of a year earlier. More than half of these members were in CO-OPs that ceased operations. GAO also found that the combined enrollment for all 22 CO-OPs in 2015 exceeded their projections for 2015 by more than 6 percent. Of the 11 CO-OPs that have ceased operations, 6 did not meet their individual enrollment projections for 2015. Among the 11 CO-OPs that continue to operate in 2016, 4 CO-OPs had not yet reached a program benchmark of enrolling at least 25,000 members. CMS officials told GAO that exceeding this benchmark represents a level of enrollment that should better allow an issuer to cover its fixed costs; CMS officials told GAO that they are monitoring the CO-OPs' enrollment with attention to this benchmark.
DOD has provided subsidized child care to military members and civilian military employees for decades. Today, DOD-subsidized child care is widely considered to be a high-quality model for the nation. A recent DOD report said that the Military Child Care Act of 1989, which created DOD's current child care structure and was enacted in response to concerns at the time about quality and availability of services, focused on assuring high-quality services and expanding access through subsidies. DOD-subsidized care assists military families in balancing the competing demands of family life, accomplishing the DOD mission, and improving the financial health of military families. However, DOD-subsidized child care is not guaranteed to all who need it, and the availability of such care depends on demand and the services' budgetary resources. DOD's goal is to meet 80 percent of the demand for child care.

Table 1 shows the primary DOD-subsidized child care programs that are available to families in all four services, although a few other service-specific programs exist. Most military families who receive child care assistance do so by using CDCs or other forms of on-installation care. Several additional subsidized child care programs have been adopted DOD-wide, such as programs specifically for injured servicemembers' families, and respite and hourly care—both of which are intended to offer sporadic, rather than regularly scheduled, care. In addition, the services offer several service-specific subsidized child care programs. For example, Army Child Care in Your Neighborhood, available only at specified installations, aims to increase the availability of eligible community-based child care providers, and the Air Force's Extended Duty Care offers child care during nontraditional hours to support servicemembers working extended or additional shifts to support the military mission. The services acknowledge that some families may also use youth development programs—programs outside of School Age Care, such as recreation programs—as child care, although these are not required to meet DOD's standards for child care and they are not intended to be used as such.

The Office of the Secretary of Defense (OSD) establishes eligibility criteria for subsidized child care and provides oversight and guidance to the services, which each administer their own child care programs. For example, OSD defines the following groups as eligible for military child care programs: active duty military personnel, DOD civilian personnel, reservists on active duty or during inactive duty training, and DOD contractors (DOD Instruction 6060.2(4.3); reservists include members of the Reserves and the National Guard). In fiscal year 2010, there were approximately 1 million servicemembers with 1.8 million children ages 13 and under, according to our analysis of data from the Defense Manpower Data Center. According to DOD, its child care system is serving about 200,000 children from birth to age 12, and NACCRRA records indicate that in fiscal year 2010, about 25,000 of these children were served in subsidized off-installation care. OSD specifies that first priority be given to active duty military and DOD civilian personnel who are either single parents or whose spouse is employed on a full-time basis outside the home or is a military member on active duty. However, OSD officials told us that they are in the process of revising this policy. The revision under consideration broadens the range of those in first priority status, adding surviving spouses of servicemembers who died while on active duty, among other groups.
OSD also sets standards for provider eligibility for DOD's off-installation child care subsidies. DOD requires that providers under Military Child Care in Your Neighborhood, intended for longer-term care periods, be nationally accredited, to help ensure they are comparable in quality to DOD's Child Development Programs, such as CDCs. According to DOD officials, child care can be considered accredited under a number of different national accreditation and state child care quality programs, which help ensure that child care providers meet quality standards. Operation Military Child Care is intended for families of deployed servicemembers, and DOD requires that, at a minimum, providers be licensed and inspected annually.

OSD sets allowable ranges for the fees that families pay for on-installation child care at CDCs, within which the services must set their fees. In contrast to private providers, who generally set fees based on a child's age, OSD sets fee ranges based on total family income. OSD sets two fee ranges—one for standard-cost areas, and one for high-cost areas, or areas with high market rates for child care. Installations in high-cost areas must pay higher salaries to retain qualified child care staff, and are allowed to charge higher fees to help cover these additional personnel costs. The services have some flexibility in how they set their fees for on-installation child care within the ranges set by OSD. For on-installation care, a family's cost is the fee that the service or installation sets minus any fee reductions, such as discounts for multiple children in care. Other factors affect family-level costs, such as family size and the number of hours that children are in care.

The subsidies services offer providers for off-installation child care are intended to provide benefits comparable to those that families would receive for on-installation care. As with fees for on-installation care, however, the services have the ability to determine the extent to which they subsidize the cost of off-installation care. The services contract with NACCRRA to administer these subsidies, which NACCRRA pays directly to DOD-approved child care providers. A family's cost for off-installation care is the portion of their provider's fee not covered by the subsidy.

As a result of services' policies, the per-child monthly cost of on-installation care at a CDC for families within the same income category varied by as much as $230 in school year 2010, depending on their service and installation (see fig. 1). However, the per-child monthly costs for most families in the same income category varied within a smaller range. For example, the per-child monthly cost for on-installation care for a family with an annual income of $50,000 could have ranged from $335 to $518 in school year 2010; however, for families in this income category at most military installations with CDCs, the per-child monthly cost was within the OSD standard fee range of $335 to $413. The services have different policies for setting fees within the ranges set by OSD. In school year 2010, the Air Force, Army, and Marine Corps all allowed installation commanders to set fees for on-installation child care, based on factors such as local market rates for child care. Most installations set their fees within the OSD standard fee ranges.
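To make the fee arithmetic described above concrete, the sketch below computes a family's per-child monthly cost for on-installation care as the installation's fee, set within the OSD range for the family's income category, minus any fee reductions. The income categories, dollar ranges, and discount shown are hypothetical stand-ins, not OSD's actual school-year figures.

```python
# Hypothetical income-category fee ranges (per child, per month); the real OSD
# ranges differ by school year and by standard- versus high-cost area.
FEE_RANGES = {
    "Category I": (200.0, 265.0),
    "Category II": (280.0, 335.0),
    "Category III": (335.0, 413.0),  # e.g., a family earning roughly $50,000
}

def on_installation_cost(category, installation_fee, fee_reduction=0.0):
    """Per-child monthly cost: the installation's fee minus any fee reductions
    (for example, a multiple-child discount)."""
    low, high = FEE_RANGES[category]
    if not low <= installation_fee <= high:
        raise ValueError("installation fee falls outside the OSD range")
    return installation_fee - fee_reduction

# A $380 installation fee with a $38 multiple-child discount -> $342 per month.
print(on_installation_cost("Category III", 380.0, fee_reduction=38.0))
```

The range check in the sketch mirrors the requirement that services set fees within OSD's ranges; installations in high-cost areas, discussed next, draw on a separate, higher range.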
Installations in high-cost areas, however, could set their fees using the OSD high-cost fee range, or by increasing the standard fee by the percentage of their cost-of-living allowance, which sometimes resulted in fees that were above the OSD high-cost range. In contrast to the other services, the Navy set one fee per income category for all of their standard-cost installations and another fee per category for high-cost installations. In addition, the services have the discretion, within DOD-prescribed policy, to set their own policies regarding the fee reductions they offer to families, which generally apply only to on-installation care. For example, services may offer fee reductions for families with multiple children in care, families with a deployed servicemember, families with injured servicemembers, and families experiencing financial hardship.

In school year 2010, families using off-installation care in the Air Force and Navy, which capped the monthly amount of subsidy a family could receive at $200 per child, had higher average monthly child care costs than did families in the Army and Marine Corps, which did not have fixed subsidy caps. Across all services, on average, military families paid about $556 per month for DOD-subsidized off-installation care (see table 2). However, the average monthly costs for Air Force and Navy families were $787 and $734, respectively, compared to $501 and $556, respectively, for Army and Marine Corps families. In addition to fixed subsidy caps, several other factors affected families' costs for off-installation care, including fee rates charged by private providers. Air Force and Navy families had higher average costs by other measures as well. For instance, families in these services using off-installation care paid more, on average, than the estimated amount they would have paid for on-installation care (by 11 percent and 16 percent, respectively). In addition, on average, Navy families' costs for off-installation care were 12 percent of their family income, while the average Army family's costs were 8 percent of their family income. Air Force and Navy families also paid a higher percentage of their private providers' fees (the fee before being reduced by the subsidy), on average, than Army or Marine Corps families.

Families' costs for off-installation care are affected not only by subsidy caps, but also by the fees services charge for on-installation care. Generally, the subsidy amount is the fee charged by the private provider minus the estimated amount that a family would have paid for on-installation care at a CDC. For example, if an off-installation provider charges $1,000 per month, and a family would have paid $600 per month on installation, the subsidy amount would be $400 (if there is no subsidy cap), and the family would pay the same amount they would have paid on installation: $600. However, in school year 2010, the Air Force and Navy set subsidy caps, or limits, on the per-child subsidy for off-installation care in order to offer benefits to more families. As a result, some families in these services paid more for off-installation care than they would have paid on installation. In the example above, if the family was in the Air Force or Navy, which both had fixed subsidy caps of $200 per month in school year 2010, the family would have paid $800 per month (the provider rate of $1,000 minus the subsidy of $200), which is $200 more than they would have paid on installation. The Army also used subsidy caps in school year 2010; a simplified sketch of this subsidy arithmetic follows below.
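The sketch below restates the worked example above in code. It is an illustrative simplification of the services' subsidy rules, not an official formula; the subsidy_cap and subsidy_minimum parameters stand in for the service-specific cap and minimum-subsidy policies discussed in this section.

```python
def off_installation_cost(provider_fee, on_installation_fee,
                          subsidy_cap=None, subsidy_minimum=0.0):
    """Family's monthly cost for off-installation care: the provider's fee
    minus the subsidy. The subsidy is the gap between the provider's fee and
    the estimated on-installation fee, raised to any service-set minimum and
    limited by any service-set cap."""
    subsidy = max(provider_fee - on_installation_fee, subsidy_minimum)
    if subsidy_cap is not None:
        subsidy = min(subsidy, subsidy_cap)
    return provider_fee - subsidy

# The worked example above: a $1,000 provider fee and a $600 on-installation fee.
print(off_installation_cost(1000, 600))                     # no cap          -> 600
print(off_installation_cost(1000, 600, subsidy_cap=200))    # $200 cap        -> 800
print(off_installation_cost(550, 600, subsidy_minimum=10))  # minimum subsidy -> 540
```

Under this arithmetic, a cap pushes a family's cost above the on-installation level whenever the provider's fee exceeds the estimated on-installation fee by more than the cap.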
However, in contrast with the Air Force and Navy, the Army's intention was that a family's subsidy would only be capped if they used private providers who charged rates above what the Army considered reasonable for high-quality care in their local market. The Army caps varied from $153 to $2,576, depending on a number of family factors, such as total family income. The vast majority of these caps were above $200, and over half were above $500. Thus, Army families were likely less affected by their service's caps than were Navy and Air Force families, as suggested by the lower average costs of Army families in our sample.

In addition to fixed caps, other factors also affect families' costs for off-installation care, including minimum subsidies, family factors, and provider rates. Three of the four services offered minimum subsidies in school year 2010, which families received even if their off-installation providers charged less than they would have paid on installation (see table 3). Army officials said that they offered minimum subsidies to encourage families living off installation to participate in DOD-subsidized child care programs. Other factors that affect off-installation costs include family income, which is a factor in the services' subsidy calculations, and other family factors such as the number and ages of children in care and the amount of time that they are in care, which may affect how much a family pays in fees to a given provider. The fee rates charged by private providers, which are influenced by child care supply and demand as well as the geographic location of the local community, affect costs for some, but not all families. In the absence of subsidy caps, provider rates do not affect costs, since the family's subsidy covers the full difference between their provider rate and estimated on-base fee. For families in services with subsidy caps, however, the subsidy may not cover the full difference, in which case families with higher provider rates will have higher costs.

Recent and planned changes to OSD and the Army's fee policies will continue to reduce variation in the amount families in the same income category pay for on-installation care. In school year 2011, OSD revised the fee ranges for the first time since school year 2005 to account for inflation and increases in servicemembers' incomes and to achieve a more equitable distribution of fees for military families. Specifically, OSD divided the top income category into four categories, increased the maximum income for each category, and increased both the minimum and maximum fees for all categories except Category I (see table 4). Under the new fee structure, OSD set a single fee per income category for high-cost installations, which all high-cost installations are now required to charge.

The impact of these revisions on military families' costs for on-installation care varied depending on families' service, installation, and income. In general, however, the new OSD fee policy reduced the variation in the per-child monthly cost for families in the same income category using on-installation care among and within services. For example, while in school year 2010 the per-child monthly cost for on-installation care for a family with an annual income of $50,000 could have varied from $335 to $518 (see fig. 1), in school year 2011 the per-child monthly cost for families with the same income could have varied from $358 to $478 (see app. II, fig. 4 for the full range of fees charged by the services in school year 2011).
Costs for families with this income at most installations were within the OSD standard fee range of $395 to $456 (see table 4). OSD officials said that they are working with the services to transition in the next 3 to 5 years toward a DOD-wide fee policy like the one currently used by the Navy, with one fee per income category for standard-cost installations, and another fee per category for high-cost installations. According to these officials, the new fee policy implemented in school year 2011 is the first step in this transition. OSD’s changes to its fee policy and additional changes the services have made to fee policies for off-installation care affect costs for some families using off-installation care, but the extent of these effects is largely unknown. The new OSD fee ranges affect costs for families using off- installation care, as well as on-installation care, since the on-installation fee rates are used to calculate subsidies for off-installation care. In addition, all of the services have recently made changes to their off- installation fee policies that make these policies more consistent in some, but not all, respects. See table 5 for a summary of these changes. Subsidy minimums are one area where the services’ subsidy policies for off-installation care are not consistent, since Navy and Air Force families receive a minimum of $10 per child per month, all Marine Corps families eligible for full-time care receive $250 per child per month, and Army families no longer receive a minimum subsidy. Given that the average costs for families in our sample in services with $200 subsidy caps were higher than those for other families, the average costs of Marine Corps families using off-installation care are likely to rise with the implementation of the Marine Corps’ $250 subsidy cap. Some Marine Corps families, however, will see a decrease in their costs if they previously received a subsidy of less than $250 per child per month. In general, the effects of these subsidy policy changes will vary by family. In particular, the effects will vary for families in services with subsidy caps, because costs for these families are driven partly by private provider fees, which vary considerably regardless of the services’ policies. Although DOD provides information about subsidized child care programs through a number of sources, DOD officials and military parents cited limited awareness of these programs as a key barrier to their use. DOD uses many outreach methods, such as deployment briefings and other events, brochures and ads, e-mails to servicemembers, staff assigned to provide child care information, and Internet avenues, including websites and social media. Figure 2 shows common sources of DOD child care information. DOD officials stated that while military families living on or near an installation likely know about child care available through CDCs and Family Child Care, they may not know about other on-installation child care programs, such as respite care. Furthermore, both those living on or near an installation and those living far from an installation may not know about off-installation programs. For example, many servicemembers we spoke with did not know about DOD-subsidized off-installation child care. In addition, those who knew about the DOD-subsidized off-installation programs had not always learned about them when they needed child care. 
For example, two servicemembers we spoke with said that they had learned about off-installation programs, but only through the community- based provider they were using without benefit of DOD subsidies. One of these servicemembers said that he had used a community-based provider for a number of months before the provider told him about the DOD subsidy program, and this was only when he notified the provider that he no longer planned to use its services because he could not afford the child care fees charged. Also, families that learn about DOD- subsidized off-installation child care programs may not be aware of the eligibility requirements of the programs. For example, officials at the National Military Families Association told us that many military families that heard about DOD off-installation subsidized child care believed that the programs are needs-based, and thus assumed that they were not eligible for them because their income was too high, even though the programs are not limited to low-income families. Similarly, two military spouses we spoke with incorrectly assumed that only those with low incomes were eligible for such subsidies. DOD faces a number of challenges to educating military families about DOD-subsidized child care, particularly off-installation care. These challenges include the large quantity of information servicemembers receive during briefings, the timing of information provided, fewer opportunities off installation to educate servicemembers about DOD- subsidized programs for those geographically isolated from an installation, and fragmented child care application procedures. DOD and the services have taken a number of steps to address these challenges. Quantity of information received. DOD officials and an organization representing military families told us that information on DOD- subsidized child care programs is frequently provided at pre- deployment or other briefings, but because these briefings are often lengthy and cover multiple topics, servicemembers often do not retain information about child care. In 2007, we reported that DOD had a similar concern regarding briefings provided at mobilization sites and demobilization sites, which DOD considers to be primary educational tools. DOD officials said then that these briefings are often so full of critical information that it is difficult for reservists to absorb all of the details of its health care insurance program. In fact, several servicemembers we met with noted that they tune out or become overwhelmed by long briefings and therefore do not retain much of the information, such as on child care. DOD officials recognize this concern and said that they try to ensure that military families learn about DOD-subsidized child care by providing them with many opportunities, beyond briefings, to learn about these programs. For example, because DOD considers command unit leadership to be key to ensuring readiness, including supporting spouses and families, DOD has taken steps to help units provide child care information to their servicemembers and families through unit contacts. Officials from all four services told us that they provide information on these programs to these contacts, including the unit commander and other unit staff designated to provide this type of information to families. Two services—the Army and Marine Corps—created professional positions within military units that are responsible for supporting family readiness by providing assistance to families, such as child care information. 
These positions also support formal volunteer organizations tasked with communicating information and providing education and support to military families. Although the Air Force and Navy have not designated such professionals within units, they have family services professionals outside of units who are assigned to serve servicemembers in units. For example, the Air Force has community readiness consultants and family child care coordinators assigned to serve specific units and provide the same type of support to families as do Army and Marine Corps unit-based professionals. Both Air Force and Navy also support similar volunteer organizations and efforts, such as Navy’s Family Ombudsman Program. Many servicemembers we spoke with stated that these family readiness professionals and volunteer organizations are helpful in learning about DOD-subsidized child care programs, while others noted that the level of assistance varies. Each of the four services has taken additional steps to increase outreach for these programs. For example, the Marine Corps prepared a fact sheet on DOD-subsidized off-installation child care programs, which includes answers to frequently asked questions including updated information on program policy changes. The Navy hired an outreach coordinator for installation and community-based child care programs. Also, the Army and Marine Corps provided resource and referral staff a “script” to ensure that staff members provide consistent, specific information on DOD-subsidized off- installation child care programs. The Air Force developed a marketing strategy including materials such as pamphlets and an implementation guide for program staff, intended to provide information on family child care programs to servicemembers. Timing of information provided. Officials from DOD and groups representing military families told us that information on DOD- subsidized child care is more likely to be absorbed if it is provided when military families need it. Some servicemembers we spoke with mentioned that they ignored information on DOD-subsidized child care programs when they did not need child care, but were interested in such information when they later needed this service, such as when they became parents and had to return to work. One servicemember said briefings targeted to parents-to-be or those with children of similar ages would help overcome the problem of not getting information at the right time. For example, without targeted briefings about child care, not all parents-to-be learn about the need for getting on waiting lists for on-installation care. On one installation we visited, not all of the military mothers we spoke with had been advised that they needed to get on the CDC waiting list, which was about 9 months, as soon as they learned they were pregnant. Those who were not alerted said that they had to take leave to care for their infants until they could find child care. Services recognize the need for targeted information on child care and have implemented education programs, such as those for expectant and new military parents, and mandatory physical training for postpartum servicemembers, both of which offer the opportunity to educate participants about the need to get on CDC wait lists. Services target child care information to military families in other ways, as well. For example, the Air Force offers a sponsorship program aimed at facilitating permanent change of station moves. 
Under this program, Air Force servicemembers trained as sponsors welcome and assist colleagues and their families who are new to an installation by providing information on local services, which can include information on DOD-subsidized child care options. One Air Force servicemember we spoke with had a sponsor that had helped her find housing and child care when she moved to a new installation. Other examples include the Navy and Air Force’s new programs to market DOD- subsidized child care programs to military families, such as to reservists who have children and have recently deployed. Fewer opportunities off installation to educate servicemembers. DOD estimates that two-thirds of those stationed in the United States do not live on an installation and many of these families live long distances from an installation. However, because DOD child care programs have traditionally been focused on installations, more information about DOD-subsidized child care, including daily exposure to sources of child care information, is available to those living on or near an installation. For example, installation outreach can include a walk-in information and referral center and ongoing child care publicity, such as on marquees on the installation promoting child care programs. Also, military families that are geographically isolated from installations are likely isolated from military peers that DOD officials and several parents we spoke with cited as a source of child care information. As a result, families of servicemembers who do not live or work on an installation, such as recruiters and Guard and Reserve members, may be less aware of DOD-subsidized child care programs, including those that become available when they deploy. For example, results from a 2010 Army Guard survey showed that many Army Guard members were unaware of DOD-subsidized child care, while most military families living on or near an installation are likely knowledgeable about the availability of on-installation care. Limited exposure to on-installation information about DOD-subsidized child care may affect Guard and Reserve families to a greater extent than active duty families that live remote from an installation. These military families may often identify with the civilian, rather than the military world, and thus may be less likely to look to the military as a source of support. One reservist told us that reservists generally assume that if they do not live near a military installation, military services will not be available. Thus, many reservists might not even think to ask about DOD-subsidized child care when they are activated. DOD and NACCRRA have both taken steps to address the need to provide more opportunities to educate servicemembers about DOD- subsidized off-installation child care. DOD implemented the Joint Family Support Assistance Program to supplement and coordinate family services provided by the services, including child care, target military families geographically dispersed from a military installation, and collaborate with community organizations to enhance the availability of high-quality family services. The services have also taken such steps. For example, the Air Force implemented an Air Force Reserve web page on the Family Members programs, which includes information on child care available to reservists. 
Marine Corps officials stated that in light of the subsidy cap they implemented in school year 2011 that will make subsidies available to more Marine Corps families, they have taken additional steps to contact reservists to educate them about off-installation program information, including program changes and how to obtain access to programs. In addition to hiring an outreach coordinator for installation and community-based child care programs, the Navy is developing marketing and communication strategies and webinar training specific to the Navy Reserves in order to better educate reservists about DOD-subsidized child care programs. The Army makes phone calls to families of deployed reservists to ask what services they need, including child care, and provides information and contacts for DOD-offered services. Also, because Reserve and Guard members may turn to civilian sources of assistance, NACCRRA officials stated that they asked their members to inform those who identify themselves as military families about DOD off-installation child care subsidy programs. In addition to learning about DOD-subsidized child care, obtaining information about applying for this care has also been a challenge families face, because servicemembers must apply for on-installation child care at different places than for off-installation child care. Also, for off-installation programs there are a number of eligibility requirements for the military family and standards for the community-based provider that differ depending upon the program. Generally, in order to apply for on- installation child care, including CDCs and Family Child Care, parents must contact the on-installation child care resource and referral office. However, if on-installation care programs have waiting lists and the family needs child care immediately, they must contact another entity, generally NACCRRA, if they choose to pursue DOD-subsidized off-installation care. For the most part, this process is separate from the installation’s resource and referral office and the installation generally does not follow up on each family’s success in finding off-installation care. Although NACCRRA assists families in finding an eligible community-based provider, if available, and applying for DOD-subsidized care, some servicemembers’ spouses and Guard officials found applying for the programs difficult. For example, several servicemembers’ spouses stated that the website did not provide clear steps on how to apply for DOD-subsidized child care. Further, several regional Guard officials stated that applying for off- installation DOD-subsidized child care is complex because requirements vary among programs, making it difficult to determine the programs for which a military family may be eligible. Also, they noted that providers that meet DOD’s requirements, which vary by program, are often not available, especially for programs with more stringent standards and in areas far from military installations. Additionally, several military parents we spoke with said that because there is no one place to find child care options available to them, particularly for off-installation child care, they had to research these options themselves in order to find alternatives to on-installation care. DOD is developing a central system intended to enable eligible military families worldwide, regardless of their service branch, to request military Child and Youth Program services that meet individual child and family needs. 
DOD officials told us that the system is aimed at helping educate military families about DOD-subsidized child care by identifying the programs for which a family is eligible based on information the family enters into the system. Such a system may help alleviate the problem of unclear information on how to apply for programs and difficulties determining eligibility. DOD intends to market the system DOD-wide to servicemembers once it is fully implemented. The agency is in the process of contracting for the development of a marketing plan, which will include assessing marketing needs and a strategy to market the system to users, in coordination with the services, among other things. DOD plans to pilot the system in the spring of 2012 and to begin full implementation of the system in the late summer or fall of 2012. In response to limited availability of on-installation child care and eligible off-installation providers, DOD and the services are increasing capacity at on-installation facilities and in the community as part of their commitment to family readiness. According to DOD officials, the increased demand, due to high deployments and increased operational tempo, puts pressure on the services, the Army and Marine Corps in particular. DOD officials stated that CDC waiting lists are common and that Family Child Care home capacity is not increasing. As a result, many military families may not be able to obtain on-installation care when they need it. This is particularly an issue for families with children under the age of 3, especially infants. To meet this demand, DOD is increasing on-installation child care capacity by constructing new CDCs that it expects will result in meeting 80 percent of the estimated demand for military child care by 2012. DOD anticipates that construction projects approved in fiscal years 2008 through 2010 will add over 21,000 additional child care spaces. Military families that cannot obtain on-installation care due to wait lists and those that are geographically isolated from an installation may be eligible for DOD-subsidized off-installation care, but community-based providers who meet DOD’s quality standards are in short supply. Figure 3 shows the circumstances under which servicemembers may be eligible for DOD-subsidized off-installation care. Providers under Operation Military Child Care must be, at a minimum, licensed and annually inspected by their states, but according to NACCRRA not all states have requirements to regularly inspect licensed child care providers. Standards for providers under Military Child Care in Your Neighborhood are even more stringent. Under this program DOD requires eligible providers to be nationally accredited, but relatively few child care providers in the United States are accredited. According to a NACCRRA review, in 2008 only about 10 percent of child care centers and 1 percent of family child care homes in the United States were nationally accredited. Additionally, the percentage of child care centers with national accreditation varied from 2 to 47 percent among states. DOD is taking steps to increase the number of community-based providers eligible for DOD subsidies. In the near term, Army Child Care in Your Neighborhood and Army School-Age Programs in Your Neighborhood subsidize nonaccredited providers who are participating in an Army quality improvement program.
To help increase the number of providers participating in these and other off-installation programs, the Army established a full-time position to coordinate and manage community-based child care at selected installations, such as Joint Base Lewis-McChord in Washington. In addition, DOD is making more community-based providers available to military families who qualify for Military Child Care in Your Neighborhood by allowing the services to waive the accreditation requirement if it is determined that no accredited provider is available to the applicant. Further, according to NACCRRA, it obtained agreements from all but three states to inspect licensed providers annually on a case-by-case basis so that military parents using these providers can receive subsidized care. Even with these efforts, DOD officials told us that some military families potentially eligible for DOD-subsidized child care assistance, including many who pay for child care, do not use subsidized care because they are not willing to switch to an eligible provider, or because they have no providers nearby who meet DOD’s standards. In the long term, DOD is piloting a 13-state initiative working with other federal agencies and state officials to increase the quality of child care programs by improving state-level child care oversight and licensing practices. The pilot’s intended purpose is to increase the number of providers who meet DOD quality standards. DOD selected these states, in part, because they have large military populations. The flexibility DOD has given the services to set their own child care fee policies, including subsidy caps for off-installation care, allows the services to adjust fees and subsidy amounts to meet their budgetary needs and respond to local cost-of-living variations. However, this flexibility has also contributed to differences in the out-of-pocket costs paid by families with similar incomes, both among and within services. DOD’s efforts to reduce its on-installation fee ranges for families with similar incomes and its plan to require all services to charge one fee per income category in a few years may help provide increased financial consistency for families moving among installations. It is difficult to determine the degree to which policy changes, such as those related to subsidy caps, will affect costs for off-installation care for individual families, because these costs are driven to some extent by private provider fees, which vary considerably regardless of the services’ policies. However, it is likely that families in the three services with subsidy caps will, on average, have higher costs than families in the Army, which does not have a cap. Thus, DOD and the services face a policy trade-off in determining the extent to which they will shoulder child care costs for military families who cannot obtain on-installation care. For instance, differences among the services in families’ costs for off-installation care could be minimized if all services offered subsidies that made up the full difference between a family’s private provider rate and what they would have paid for on-installation care, with no subsidy caps. However, such a policy change would require increased spending on child care for most of the services, likely requiring them to divert budgetary resources from other family programs to provide these higher subsidy amounts. In addition, eliminating the caps could require greater oversight from the services to ensure that providers did not raise their fee rates in response to subsidy increases.
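The arithmetic behind this trade-off can be sketched in a few lines. The sketch below is illustrative only and is not DOD's or any service's actual fee calculator; the provider fee and on-installation fee are invented figures, the function name is ours, and the $200 cap simply mirrors the Air Force and Navy caps discussed elsewhere in this report.

    def out_of_pocket(provider_fee, on_installation_fee, subsidy_cap=None):
        """Estimate a family's monthly out-of-pocket cost for off-installation care.

        The subsidy is intended to cover the gap between the private provider's
        fee and what the family would have paid on installation; a cap, if any,
        limits how much of that gap is covered. Illustrative only.
        """
        gap = max(provider_fee - on_installation_fee, 0)
        subsidy = gap if subsidy_cap is None else min(gap, subsidy_cap)
        return provider_fee - subsidy

    # Example: $900/month provider fee, $450/month on-installation equivalent.
    print(out_of_pocket(900, 450))        # no cap: family pays $450, the same as on installation
    print(out_of_pocket(900, 450, 200))   # $200 cap: family pays $700, $250 more than on installation

In this toy example, removing the cap shifts the entire $250 difference from the family to the service, which is the budgetary pressure the preceding paragraph describes.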
On the other hand, some services use subsidy caps to avoid having to limit the number of families who can benefit from subsidies for off-installation care, so that all eligible families can have at least some of their child care costs covered. As DOD and the services move forward with their new fee policies, balancing these competing priorities will be critical to supporting military families as they serve their country, while also using resources carefully in an austere fiscal environment. DOD has taken some important steps to make military families aware of DOD-subsidized child care programs, particularly off-installation subsidies, and to make eligible child care available. Additional steps that DOD is taking, such as waiving accreditation requirements and piloting a 13-state initiative designed to help states increase the quality of private providers, are important in helping families living off installation obtain safe, reliable child care. As DOD increases the number of eligible child care providers for off-installation programs and moves toward centralizing access to DOD-subsidized child care programs through its planned agencywide system for requesting both on- and off-installation care, outreach will need to keep pace, with particular attention to families who live off installation. DOD anticipates that its proposed marketing plan will help better ensure that servicemembers are aware of this system once it is fully implemented. Such a plan also provides an opportunity for the services to identify and share the communication strategies with the most potential. How well DOD executes both its marketing plan and communication strategies will be crucial to ensuring that all families learn about DOD-subsidized child care, including families that are geographically isolated from installations and have few opportunities to benefit from ongoing contact with installation resources, such as walk-in information centers, on-installation child care publicity, and contact with other military families who could tell them about military child care options. Otherwise, the barriers families face in learning about and accessing DOD-subsidized child care programs will likely persist. We provided a draft of this report to DOD for review and comment. DOD provided written comments, which are reproduced in appendix IV of this report, and technical comments, which we incorporated as appropriate. In its comments, DOD said that, in general, the report correctly addresses the issues of providing fee assistance to military members and assisting with access to child care. DOD also provided clarification about the following issues: variations in on-installation child care costs, fee caps, changes in policies to reduce out-of-pocket costs, and family support assistance. DOD also suggested caution in drawing conclusions based on our random sample. Regarding variations in per-child, on-installation child care costs, DOD stated that in practice the fee variations are smaller than those we reported due to the limited number of programs utilizing the high-cost fee option. We believe that the report is very specific about not only the actual fee ranges but also the percentage of installations that are charging these fees. As stated in the report, we found that 64 percent of military installations with CDCs charged fees within OSD’s standard fee range in school year 2010, which means that over a third of the installations charged fees within OSD’s high-cost range.
The high-cost exceptions to this were a few installations—a total of six and, again, specifically noted in the report—that had very high cost-of-living allowances and charged fees that were above OSD’s high-cost range. We recognize that in school year 2011 the number of military installations with CDCs that charged fees within OSD’s high-cost range declined to about 11 percent, and we noted this in our final report in response to DOD’s comment. DOD commented that eliminating the caps that three of the services have placed on off-installation care fee assistance may not require increased spending on child care for those services. DOD provided an Army analysis that concludes that the average amount of child care fee assistance the Army paid per child in recent fiscal years was less than the capped rate currently paid by the other services. However, the Army’s analysis does not take into consideration the effect of removing subsidy caps for those services that have such caps. Based on our analysis, over 60 percent of Air Force and Navy families receiving subsidies for off-installation care were affected by these services’ subsidy caps in school year 2010. Thus, if the Air Force and Navy were to eliminate their subsidy caps, they would pay higher subsidy amounts to over 60 percent of families using this type of care, which would increase their total spending on child care. DOD stated that, in addition to the recent and planned changes to DOD’s and the Army’s fee policies that will likely reduce the differences among the services, the Air Force also implemented fee policy changes reducing out-of-pocket expenses for families. The Air Force change eliminates additional fees if a child is in care more than 10 hours per day. However, our analysis of the child care fees charged by the services did not include any additional fees they may have charged beyond the base fees for regularly scheduled care, such as fees for care beyond 10 hours per day. Thus, eliminating those additional fees would not change the fee differences among the services that we include in this report. DOD also commented that in regard to the family support assistance provided by professional positions, it believes that the Air Force and Navy provide the same level of service as the Army and Marine Corps, but in a different manner. We did not assess the level of family support assistance provided by each service. Instead, we reported the various ways these services provide such assistance through professionals within and outside the units and through the services’ support of volunteer organizations. That said, we recognize the importance of the Navy’s Family Ombudsman Program and have added it as an example of a volunteer effort in the report. In addition to these clarifications, DOD also commented that the size of the GAO random probability sample of 338 families relative to the total number of families accessing the system of care indicates the need for caution in drawing conclusions about accessibility of information and child care options. We agree that this sample does not represent the total number of families accessing DOD’s child care system. The sample’s analysis, which is outlined in appendix I, provides findings related to families’ off-installation child care costs at the family and child level, in conjunction with other variables such as family income, private provider fees, and the estimated fee the family would have paid for on-installation care.
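The share of families affected by a cap can be estimated from exactly these sampled variables. The sketch below is our own illustration, not GAO's analysis code; the fee figures are invented, and a family counts as affected when the uncapped subsidy, that is, the gap between its provider fee and its estimated on-installation fee, exceeds the cap.

    def share_affected_by_cap(families, cap=200):
        """Estimate the share of sampled families whose subsidy would exceed `cap`.

        Each family is (provider_fee, estimated_on_installation_fee) per month.
        A family is 'affected' when the uncapped subsidy (the gap between the
        two fees) is larger than the cap. Unweighted and illustrative only.
        """
        affected = sum(
            1 for provider_fee, on_fee in families
            if provider_fee - on_fee > cap
        )
        return affected / len(families)

    # Example with invented monthly fees for five families.
    sample = [(900, 450), (620, 500), (750, 400), (510, 480), (980, 520)]
    print(share_affected_by_cap(sample, cap=200))   # 0.6 in this toy example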
We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. Our review focused on the following questions: (1) What are the out-of-pocket child care costs paid by military families who use Department of Defense (DOD) subsidized child care? (2) What are the barriers, if any, to obtaining DOD-subsidized care, and what has DOD done in response? To determine the out-of-pocket costs for families using DOD-subsidized off-installation care, we analyzed these costs for a random probability sample of 338 families from all four services in school year 2010. Specifically, we collected and analyzed data from family files maintained by the contractor that administers DOD’s off-installation child care subsidies, the National Association of Child Care Resource and Referral Agencies (NACCRRA). Our sample included families that participated in regularly scheduled child care through Operation Military Child Care and Military Child Care in Your Neighborhood, the two DOD-wide subsidy programs for off-installation care, as well as smaller service-specific programs: Army Child Care in Your Neighborhood, Army School-Age Program in Your Neighborhood, the Army’s Warriors in Transition program, and the Marine Corps’ San Diego Quality Improvement Program. This analysis allowed us to generalize our findings to all families receiving DOD subsidies for off-installation child care in school year 2010. NACCRRA maintains an electronic database with information that allowed us to identify all the military families it served in school year 2010. We used this database to generate our sampling frame of families. However, since this database does not include the information needed to calculate families’ out-of-pocket costs, we collected our data variables from the information contained in paper files NACCRRA maintains for each family. To create our sampling frame, NACCRRA used the database to generate a list of all families and children who received DOD subsidies for off-installation care in school year 2010. In selecting our sample, we stratified families by service (Air Force, Army, Marine Corps, and Navy) and by component, either Active Duty or National Guard and Reserves (combined group). No prior data existed on the out-of-pocket child care costs of military families receiving subsidies that would allow us to calculate the variance in these costs, which we needed to determine our sample size. Thus, we collected a presample of 157 files (at least 36 from each service), calculated the out-of-pocket costs for those families, and then calculated the variance in costs for each service. We used the variance to determine the number of family files we would need to collect for the final sample so that the margin of error for each stratum would be no more than plus or minus $90. To obtain the family files for both the presample and the final sample, we provided NACCRRA with the family identification numbers for the files we had randomly selected.
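The general form of this sample-size calculation can be sketched as follows. This is our own illustration using the conventional normal-approximation formula with a finite population correction; the standard deviation and population figures are placeholders rather than the presample results, and GAO's exact computation may have differed.

    import math

    def stratum_sample_size(std_dev, population, margin=90, z=1.96):
        """Sample size needed so the margin of error is at most +/- `margin` dollars.

        Uses n0 = (z * sigma / E)^2, then applies a finite population correction.
        Illustrative only; inputs are placeholders.
        """
        n0 = (z * std_dev / margin) ** 2
        n = n0 / (1 + n0 / population)
        return math.ceil(n)

    # Example: a stratum with a presample standard deviation of $300 and 800 families.
    print(stratum_sample_size(std_dev=300, population=800))   # about 41 files in this toy example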
When NACCRRA staff had pulled the paper files for these families, we went to their offices in person to verify the files they had pulled. NACCRRA staff then scanned the documentation we needed and sent it to us in PDF form. We requested only documentation pertaining to school year 2010 (October 1, 2009 through September 30, 2010). We requested and received the following documentation from each file: Each child’s schedule of care, all provider rate sheets, all fee calculator(s) used to calculate the subsidy rate, and servicemember’s deployment orders, if applicable. Using this documentation, we input data for each family and child in Excel spreadsheets. We ensured the reliability of these hand-entered data by having two analysts enter all data, then reconciling any discrepancies between the two data spreadsheets. For the first objective, to determine the out-of-pocket costs for families using on-installation care, we obtained data from DOD and the services on the range of fees charged per child at installations in school years 2010 and 2011. Family-level cost data for on-installation care were not available. Our analysis on families’ out-of-pocket costs is limited to their weekly or monthly child care fees, although families may pay additional costs, such as fees for special events and activities. In addition, our analysis of these fee data focused on school year 2010, so that we could present data on costs for on-installation care that covered the same time period as our data on off-installation care costs. We also analyzed changes to the on-installation fee ranges in school year 2011. DOD and three of the four services provided data on weekly fees, while the fourth service (Army) provided monthly fees. We converted DOD and the other services’ weekly fees to monthly fees using the same calculation used by the Army to obtain monthly fees. Specifically, we multiplied the weekly fees by 365/7 (the number of weeks in a year) to get the yearly fee amount, and then divided by 12 to obtain the monthly fee amount. This calculation assumes a 365-day year, as well as an equal number of days in each month. Since months vary slightly in length, the monthly amount paid by families in services that charge fees on a weekly basis will also vary slightly. In addition, all Army installations allow families 2 weeks of vacation per year, during which they do not pay child care fees if their children are not in care. Army families’ monthly fees are calculated such that families pay more during the other 50 weeks of the year to make up for the 2 weeks of vacation. We recalculated Army fees without this vacation credit to make them comparable to other services’ fees, which are generally not reported in this form. To determine families’ costs for off-installation care, we collected data from a sample of 338 NACCRRA family files, as described above. We analyzed these data to determine families’ off-installation child care costs at the family and child level, in conjunction with other variables such as family income, private provider fees, and the estimated fee the family would have paid for on-installation care. We compared the averages of these variables across services. In this analysis, we weighted family and child data based on the family’s probability of being selected for the sample, which varied due to differences in the population size of each stratum (e.g., families of active duty Army servicemembers). 
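Two of the calculations described above, the weekly-to-monthly fee conversion and the probability weights used in the sample analysis, are simple enough to sketch in code. The sketch below is our own illustration, not GAO's actual analysis program, and the example figures are placeholders.

    def weekly_to_monthly(weekly_fee):
        """Convert a weekly child care fee to a monthly fee.

        Multiplies by 365/7 (weeks per year) for an annual amount, then divides
        by 12, assuming a 365-day year and equal-length months.
        """
        return weekly_fee * (365 / 7) / 12

    def sampling_weight(stratum_population, stratum_sample_size):
        """Weight for a sampled family: the inverse of its selection probability.

        With random sampling within a stratum, that probability is
        stratum_sample_size / stratum_population.
        """
        return stratum_population / stratum_sample_size

    # Examples with placeholder figures.
    print(round(weekly_to_monthly(120), 2))   # a $120 weekly fee is about $521.43 per month
    print(sampling_weight(800, 64))           # each sampled family represents 12.5 families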
Because we followed a probability procedure based on random selections, our sample was only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus $100). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. For this objective, we also reviewed the child care fee policies set by DOD and the services for school years 2010 and 2011, including the fees charged to families for on-installation care, any discounts offered to families, and how subsidies are calculated for off-installation care. Some aspects of the services’ fee policies, such as their methods of calculating subsidies for off-installation care, were not contained in written policies. We obtained this information through interviews with DOD and service officials. We also interviewed these officials regarding the implementation of these policies. Because external data were significant to our research objectives, we assessed the reliability of the data obtained from DOD and NACCRRA. To assess the reliability of the data on the range of fees charged by each service in school years 2010 and 2011, we interviewed officials from each service on their annual process for collecting and reviewing data on the fees charged by their installations. To assess the reliability of the NACCRRA data that we used as the sampling frame for our sample of families using off-installation care, we interviewed NACCRRA officials about their database and how they maintain it. To assess the reliability of the Defense Manpower Data Center data that we used to determine the number of servicemembers and their children who were eligible for DOD child care assistance in fiscal year 2011, we interviewed Defense Manpower Data Center officials about the reliability of the relevant data fields and also conducted electronic testing of the data. We found the data we assessed to be sufficiently reliable for the purposes of reporting the range of on-installation fees per child by service in school years 2010 and 2011, and out-of-pocket costs for families using DOD-subsidized off-installation child care in school year 2010. We found that the Defense Manpower Data Center data may not capture 100 percent of the children of servicemembers who are eligible for child care, because Guard and Reserve members are not required to report their children in the data system that populates the Defense Manpower Data Center data fields on servicemembers’ children. In addition, we found that, because the Defense Manpower Data Center data we received included members of all reserve categories, these data included some servicemembers and children who were not eligible for child care. We determined that these limitations were minor enough to allow us to report the approximate number of servicemembers eligible for DOD child care in the background of the report. To address the second objective, we reviewed relevant federal laws, policies and guidance, studies and surveys of military parents, and interviewed child and youth program officials with DOD and each of the four services, including officials at service headquarters and installations. We also interviewed representatives of NACCRRA, nonprofit organizations that support military families, and researchers knowledgeable about DOD child care programs.
In addition, we visited two large military installations (Joint Base Lewis-McChord, Wash.—Army/Air Force—and Marine Corps Base Camp Lejeune, N.C.) and conducted phone interviews with officials at two additional large military installations (Naval Station Norfolk, Va., and Nellis Air Force Base, Nev.) to learn how each implements child care programs and to discuss barriers faced by families in obtaining access to these programs. To obtain examples of child care programs and barriers faced by military parents at installations with no on-installation child care facilities, we also conducted telephone interviews with officials at two small installations affiliated with two services (Army’s Yakima Training Center, Wash., and Creech Air Force Base, Nev.), for a total of four phone interviews with installations. For our site visits, we selected large installations that had a Guard and/or Reserve presence and had significant deployment activity. Visiting two large installations—one each representing the Army and Marines—provided examples of a large and a small service’s approaches to child care programs. We also selected one installation that is a joint base—having two military services on base—the Army, the lead service, and the Air Force. During our site visits we also conducted six semi-structured discussion groups with military parents at Joint Base Lewis-McChord, and five at Marine Corps Base Camp Lejeune, including parents who did and who did not have their children enrolled in DOD-subsidized child care. We also held two additional semi-structured discussions with reserve servicemembers at Camp Lejeune. During these semi-structured discussions we inquired into how parents learned about DOD-subsidized child care and any barriers they may have encountered obtaining this care. In addition, we conducted phone interviews with child care officials and military parents at two additional large military installations—Naval Station Norfolk, Va., and Nellis Air Force Base, Nev. The information obtained during these visits and through phone calls is illustrative and not representative of each service or of DOD programs as a whole. In order to select servicemembers for our small discussion groups at the two sites we visited, we provided DOD with selection criteria that DOD used to identify servicemembers to invite to these groups. Because we held separate discussions with officers and enlisted servicemembers, we used the same criteria for each. Our criteria were officers and enlisted servicemembers with children 12 and under who were using DOD child care—including both on-installation facilities and off-installation community care subsidies—and those not using such care. In addition, at each site we visited we also met with a small group of spouses of servicemembers with children 12 and under. Many of these spouses worked for a DOD family support office or were members of military organizations that serve spouses, such as the Marine Corps L.I.N.K.S. program, which is a volunteer mentoring program designed by Marine Corps spouses to help family members understand and adapt to the unique challenges of military life. To assess the methodological quality of a NACCRRA study we used to support our findings about the scarcity of providers eligible to receive DOD child care subsidies, we reviewed the study’s methodology and also obtained responses from NACCRRA to questions we had about this methodology.
We used a different NACCRRA study to identify differences in how states oversee and regulate private child care providers. Because this study did not include a methodology, we obtained information from NACCRRA about the methodology it used to prepare this report. We also confirmed certain information NACCRRA included in the study for the two states where we performed site visits at installations, in order to help verify the accuracy of this information, and found no relevant discrepancies between the study and state-provided information. We found these reports to be sufficiently reliable for the purposes described above. Appendix III: Supplemental Data from Sample of Families Using DOD-Subsidized Off-Installation Care. 95% margins of sampling error for these estimates range from +/- $5,197 to +/- $14,928. 95% confidence intervals for these estimates do not exceed $33,155 to $59,947. 95% confidence intervals for these estimates do not exceed $93,432 to $183,538. 95% margins of sampling error for these estimates range from +/- 5.7% to +/- 10.5%. Janet Mascia, Assistant Director, and Julianne Hartman Cutts, Analyst-in-Charge, managed this assignment. Caitlin Croake, Lauren Gilbertson, Hayley Landes, and Suzanne Rubins made significant contributions to all aspects of this report. Kate Van Gelder, Holly Dye, and James Bennett provided writing and graphics assistance. In addition, Kirsten Lauber, Terry Richardson, Jeff M. Tessin, and Cynthia Grant provided design and methodological assistance. James Rebbe provided legal assistance.
About a million military servicemembers serve the United States while raising a family, and many need reliable, affordable child care. Paying for high-quality child care can be challenging for these families, so the Department of Defense (DOD) offsets costs by subsidizing on-installation child care centers and offering subsidies for approved off-installation care providers. Deployments related to the wars in Iraq and Afghanistan increased the demand for child care. The extent of military families’ out-of-pocket child care costs for those using subsidized care is not known, and families may face barriers to obtaining DOD-subsidized care. GAO was mandated to examine: (1) the out-of-pocket child care costs paid by military families who use DOD-subsidized care; and (2) the barriers, if any, to obtaining DOD-subsidized care, and what DOD has done in response. To address these objectives, GAO reviewed DOD policies and guidance; interviewed officials from DOD, its contractor that administers DOD’s off-installation child care subsidies, and organizations that support military families; reviewed DOD fee data for school year 2009-2010 (school year 2010) and school year 2010-2011 (school year 2011); and analyzed child care costs for a random probability sample of 338 families using off-installation care in school year 2010. GAO conducted nongeneralizable discussion groups with military parents at two large military installations. GAO is not making recommendations in this report. DOD generally agreed with the report’s findings and also provided additional information on several specific points in the report. Out-of-pocket costs for military families who use DOD-subsidized child care are largely driven by policies that vary by service. DOD establishes income-based fee ranges for on-installation child care, but each service sets its own fees and discounts within these parameters. As a result, in school year 2010 the per-child costs that families from the same income categories paid for on-installation care varied by service and installation. For example, the monthly per-child cost for a family with an income of $50,000 could have ranged from $335 to $518. Families’ costs for off-installation child care through private providers are also affected by policy differences among the services. All services offer subsidies for off-installation care that are intended to make families’ costs comparable to those for on-installation care. In an effort to offer benefits to more families, some services use a fixed cap to limit the subsidy amount. In school year 2010, the Air Force and Navy capped their subsidies at $200 per child per month, and families in these services had higher average monthly costs for off-installation care than Army and Marine Corps families, and also had higher costs than what they would have paid for on-installation care. For example, on average, Navy families using off-installation care paid $87 more per month than they would have paid for on-installation care, while Army families paid $63 less. Other factors, such as the number of children in care, also contributed to families’ costs for off-installation care. DOD’s and the services’ recent policy changes reduced differences among and within services in families’ costs for on-installation care, and DOD plans to further reduce these differences in the next 3 to 5 years.
While the effects of these policy changes on individual families’ costs for off-installation care vary by family, families in services with fixed subsidy caps will likely continue to have higher average costs than families in services that do not. Military families face two main barriers to obtaining DOD-subsidized child care: lack of awareness and insufficient availability. According to DOD officials and based on GAO’s group discussions, some families remain unaware of subsidized child care, particularly off-installation care, despite DOD’s efforts to provide information at pre-deployment briefings, and through other outreach efforts. Families who are geographically isolated from an installation, such as reservists and recruiters, may be less likely to be aware of subsidized care. The individual services have taken steps to increase awareness of DOD-subsidized child care, such as establishing positions for professionals who educate families about child care options. However, even families who are informed about DOD-subsidized child care may face barriers obtaining it due to a lack of available space at on-installation centers and a scarcity of eligible child care providers off installation. The shortage of on-installation child care spaces resulted, in part, from heavy deployment demands, and DOD has responded by approving construction projects that it anticipates will provide over 21,000 new child care spaces using fiscal year 2008 through 2010 funding. DOD and the services have initiatives under way to increase the availability of eligible off-installation providers. In addition, DOD is developing an agencywide system that will provide servicemembers a central place to request both on-installation and off-installation child care. DOD plans to pilot the system in the spring of 2012 and intends to market it DOD-wide to servicemembers once it is fully implemented. The agency is in the process of contracting for the development of a marketing plan.
IRS began exchanging federal taxpayer data with state tax administration agencies in the 1920s, but it was not until the Tax Reform Act of 1976 that Congress declared federal tax returns and return information to be confidential. The Tax Reform Act specified IRS’ responsibilities for safeguarding taxpayer information against unauthorized disclosure while authorizing IRS to share this information with state agencies for tax administration purposes. Congress also authorized the sharing of taxpayer information with child support programs to assist with enforcement, such as locating individuals owing child support. In 1984, Congress authorized IRS to share data to support federal and state administration of other programs, such as Aid to Families With Dependent Children and Medicaid, to assist in verifying eligibility and benefits. Disclosures of federal taxpayer information to an agency are restricted to the agency’s justified need for and use of such information. Unauthorized inspection, disclosure, or use of taxpayer information is subject to civil and criminal penalties. The objective of this study was to provide the Committee with information on how federal, state, and local agencies use the taxpayer information they are authorized to obtain under section 6103. To meet our objective, we met with officials in IRS’ Office of Governmental Liaison and Disclosure, Office of Safeguards, and select IRS District Disclosure Offices. We also reviewed IRS documentation of reports submitted by federal, state, and local agencies on the safeguard procedures used to protect taxpayer information. In addition, we reviewed IRS reports of its monitoring efforts at these agencies. IRS provided us with lists of federal, state, and local agencies that had received taxpayer information during 1997 or 1998. We surveyed the agencies, asking them under what authority they received taxpayer information, how they received it, what they used the information for, and whether there were alternate sources of data they could use in lieu of taxpayer information. We also asked them about IRS’ monitoring efforts and to identify any safeguard deficiencies that have been noted during recent internal or external reviews. Copies of our questionnaires are reproduced in appendix IX. We surveyed all of the federal agencies in the Washington, D.C., metropolitan area that IRS identified as having received taxpayer information. The response rate was 100 percent from these agencies. In some cases, we sent a questionnaire to more than one contact for a particular agency. For example, for the Department of Labor, IRS identified four separate components as receiving taxpayer information. Thus, IRS gave us the names of four separate contact persons at Labor. We mailed our questionnaire to 50 agency contact persons. In our cover letter, we encouraged them to distribute copies of the questionnaire to all other entities within the agency that received taxpayer information from IRS and asked that an appropriate representative from those units return a completed questionnaire. Several agencies that had only one contact person listed by IRS returned multiple questionnaires from different units within their agencies that use taxpayer information. For example, the Department of Transportation had only one contact person to whom we mailed our questionnaire, but staff in the Department completed and returned 10 questionnaires. In total, we received 98 questionnaires from the 50 agency contacts from whom we requested information. 
From the list IRS provided of 215 state and local entities that had received taxpayer information, we drew a simple random probability sample of 35 entities. Each entity on the IRS list had an equal, nonzero probability of being included in the sample. Our sample, then, is only one of a large number of samples that we might have drawn because we followed a probability procedure based on random selection. Each sample could have provided different estimates; thus, we can express our confidence in the precision of our particular sample’s results as a 95-percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95-percent confident that each of the confidence intervals in this report will include the true values in the study population. We mailed questionnaires to the contact persons at each of the selected entities. Like the federal agencies, some of the state and local agencies completed more than one questionnaire. Thirty-four of the 35 state and local agencies we surveyed returned at least one questionnaire, for a response rate of 97 percent. Given the broad scope of our study and the required time frame for completion, our audit work focused on collecting and presenting the data from the agencies and IRS. As agreed with your office, we did not verify the information that we collected. We also did not evaluate the efforts of IRS or the federal, state, and local agencies to safeguard taxpayer information. We performed our work at IRS’ National Office of Safeguards and select IRS District Disclosure Offices. Our work was done between March and August 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue. IRS provided written comments in an August 16, 1999, letter, which is reprinted in appendix X. The comments are discussed near the end of this letter. According to IRS, there were 37 federal and 215 state and local agencies that received, or maintained records containing, taxpayer information under provisions of IRC section 6103 during 1997 or 1998. We surveyed all of the 34 federal agencies in the Washington, D.C., metropolitan area that IRS identified as having received taxpayer information. In responding to our questionnaire, 3 of the 34 federal agencies—Agency for International Development, Department of Energy, and Environmental Protection Agency—indicated that they did not receive any taxpayer information during 1997 or 1998. In addition, two agencies—Equal Employment Opportunity Commission and Securities and Exchange Commission—indicated that they did not receive any taxpayer information during 1998. Among these 34 federal agencies, however, there were several that had more than one department or unit that utilized the taxpayer information received. From the list IRS provided of 215 state and local entities that were receiving taxpayer information, we drew a simple random probability sample of 35 entities. Only one of our sampled state and local entities—Alabama Department of Human Resources—indicated that it did not receive any taxpayer information in 1997, and all of them indicated that they had received taxpayer information in 1998. According to IRS officials, IRS generally categorizes the agencies into one of the following: Child support agencies–IRS discloses certain tax return information to federal, state, and local child support enforcement agencies.
Welfare/public assistance agencies–IRS discloses certain tax return information to federal, state, and local agencies administering welfare/public assistance programs, such as food stamps and housing. State tax administration/law enforcement agencies–IRS discloses certain tax return information to federal, state, and local agencies for tax administration and the enforcement of state tax laws. Federal agencies–IRS discloses certain tax return information to federal agencies for certain other purposes. The type of taxpayer information agencies receive varies in content, format, and frequency according to how agencies use the information. Agencies may receive paper copies of individual tax returns, electronic databases of IRS’ individual and business master files, or tape extracts from these files. The information can include such things as the taxpayers’ names, Social Security numbers, addresses, or wages. Table 1 shows examples of the different types of taxpayer information agencies receive. As shown in table 1, agencies receive taxpayer information in a variety of formats—for example, paper copy, electronic databases, and tape extracts. Some agencies receive this information on a regular schedule—for example, monthly, quarterly, or annually. Other agencies receive it on an as-needed basis—for example, while conducting criminal investigations. We asked the agencies we surveyed to indicate how they received taxpayer information from IRS during 1997 or 1998 and how often they received that information. Tables 2 and 3 show the survey results. Appendixes III and IV further describe the types of taxpayer information received by federal and state and local agencies, respectively; the format in which the information was received; and the frequency with which it was received, categorized by purposes for which the information might be used. In addition to the taxpayer information received from IRS, many agencies use other sources of information to fulfill their missions. We asked the agencies to indicate, in lieu of taxpayer information, what other sources of data are available that would allow them to accomplish their missions. As shown in table 4, the responses from the federal, state, and local agencies we surveyed generally fell into one of the following categories: there was no other source of data available to them; they used other sources, but these other sources were less reliable than tax information; they used other sources, but these other sources were more costly to use than tax information; they used other sources in conjunction with the tax information; or they did not respond to this question. Under various IRC section 6103 subsections, agencies may receive taxpayer information for one of several reasons, such as to administer state tax programs, assist in the enforcement of child support programs, or verify eligibility and benefits for various welfare and public assistance programs (e.g., food stamps or public housing). Agencies may also receive taxpayer data for use during a criminal investigation, to apprise appropriate officials of criminal activities or emergency circumstances, or to assist in locating fugitives from justice. One of the most common reasons why agencies said they received taxpayer information was their participation in the tax refund offset program. Pursuant to the IRC, agencies submitted qualifying debts, such as student loans or child support payments, for collection by offsetting the debt against the taxpayer’s refund.
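As a rough illustration of the refund offset concept just described, the qualifying debt is simply netted against the taxpayer's refund. The sketch below is a conceptual illustration only, not IRS's or Treasury's actual processing logic, and the figures are invented.

    def offset_refund(refund_amount, debt_amount):
        """Collect a qualifying debt by offsetting it against a tax refund.

        Returns the refund actually paid to the taxpayer and the amount
        applied to the debt. Conceptual illustration only.
        """
        applied = min(refund_amount, debt_amount)
        return refund_amount - applied, applied

    # Example: a $1,500 refund offset against a $600 delinquent student loan.
    print(offset_refund(1500, 600))   # $900 paid to the taxpayer, $600 applied to the debt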
Seventy-five percent of the federal agencies and 15 percent of the state and local agencies in our sample indicated that they received taxpayer information for this purpose. Effective January 1, 1999, tax refund offset procedures for collecting qualifying debts were modified. The Department of the Treasury’s Financial Management Service was given the responsibility for the Federal Refund Offset Program, which was merged into the centralized administrative offset program known as the Treasury Offset Program. This program commingles tax refund information with other federal financial information (e.g., benefit payments, pensions). If a match is found, meaning an individual has an outstanding debt and is receiving federal money in any form (e.g., tax refund, pension, or vendor payments), the individual is notified that his or her federal money can be withheld to pay off the debt. The source or sources of any money withheld are not revealed to the agencies; the agencies are told only that an offset has been made. This information, then, is no longer identifiable as tax refund information; thus, it is no longer considered taxpayer information. Because of this change to the offset program, several agencies we surveyed indicated that they no longer needed taxpayer information. Thirty-four percent of the federal and 3 percent of the state and local agencies in our sample indicated that they are participating in the Treasury Offset Program and that they will no longer need to receive taxpayer information from IRS. We asked the agencies we surveyed to indicate how they use taxpayer information. We grouped their responses into the following categories: administering a debt collection or offset program; administering tax laws; determining eligibility for welfare and public assistance programs; enforcing child support programs; conducting criminal investigations; and other purposes, such as statistical and economic research, auditing government programs, or storage of tax returns. Table 5 shows how the agencies we surveyed responded to our query about how they used the taxpayer information they received in 1997 or 1998. (App. V provides a listing of possible uses of taxpayer information received from IRS.) Before receiving taxpayer information from IRS, agencies are required to provide IRS with a detailed Safeguard Procedures Report (SPR) that describes the procedures established and used by the agency for ensuring the confidentiality of the information received. The SPR is a record of how the agency processes the federal taxpayer information and protects it from unauthorized disclosure. IRS Publication 1075 outlines what must be included in an agency’s SPR. In addition to requiring that it be submitted on agency letterhead and signed by the head of the agency or the head’s delegate, an agency’s SPR must contain information about responsible officer(s), location of the data, flow of the data, system of records, secure storage of the data, access to the data, disposal of the data, computer security, and the agency’s disclosure awareness program. All federal agencies and the state welfare agencies are to submit their SPRs to IRS’ Office of Safeguards, which is to review the reports for completeness and acceptance. State taxing agencies and child support enforcement agencies are to submit their SPRs to the IRS District Disclosure Office in their respective states. Agencies are expected to submit a new SPR every 6 years or whenever significant changes occur to their safeguard program.
IRS has taken steps to withhold taxpayer information from agencies if their SPRs did not fulfill the requirements set forth in IRC section 6103. Shown below are some recent examples of IRS notifying agencies that they would not be able to get taxpayer information because their SPRs were incomplete. In April 1999, IRS’ Office of Safeguards notified the Arizona Department of Economic Security that, since IRS had not received an acceptable SPR, it was recommending to IRS’ Office of FedState Relations that federal taxpayer information be withheld until the agency complied with the safeguarding requirements outlined in IRC section 6103. IRS’ Office of Safeguards further advised that it would recommend to the Social Security Administration that tax information contained in the Beneficiary Earnings Exchange Record should not be forwarded to the department. In May 1999, IRS’ Office of Safeguards notified the West Virginia Department of Health and Human Resources that additional information that IRS had requested in an earlier letter had not been provided and that it could not accept the procedures described in the department’s draft SPR as adequately protecting federal taxpayer information from unauthorized disclosure. In June 1999, IRS’ Office of Safeguards notified the Federal Bureau of Investigation that IRS was unable to accept the Bureau’s SPR as describing adequate safeguard procedures to protect federal taxpayer information from unauthorized disclosure. Agencies are also required to file a Safeguard Activity Report (SAR) annually with IRS to advise it of any minor changes to the procedures or safeguards described in their SPR. The SAR is also to advise IRS of future actions that would affect the agency’s safeguard procedures—for example, new computer equipment, facilities, or systems or the use of contractors, as permitted by law, to do programming, processing, or administrative services. Moreover, the SAR is to summarize the agency’s current efforts to ensure confidentiality and certify that the agency is protecting taxpayer information pursuant to IRC section 6103(p)(4) and the agency’s own security requirements. In addition to the SPRs and annual SARs that are sent to IRS, agencies’ OIGs may also review agency programs for safeguarding taxpayer information. For example, a March 1999 Department of Veterans Affairs (VA) OIG report outlined possible inappropriate requests for and subsequent use of taxpayer information by VA’s Health Eligibility Center because of erroneous information supplied to them by some VA medical facilities. The OIG found that a large percentage of sampled cases did not have certain required documentation on file and, consequently, should not have been referred for income matching and verification. Before we notified IRS about the VA OIG report, neither Treasury nor IRS was aware of the report or its findings. After meeting with IRS to discuss the OIG findings, VA agreed to work with IRS on corrective actions. According to IRS, federal agency OIGs are not required to notify IRS of their findings involving tax returns and return information. In July 1999, IRS issued a memorandum to federal agency OIGs asking for their assistance in working with IRS in this area. IRS is supposed to conduct on-site reviews every 3 years to ensure that agencies’ safeguard procedures fulfill IRS requirements for protecting taxpayer information. 
IRS’ National Office of Governmental Liaison and Disclosure, Office of Safeguards, has overall responsibility for safeguard reviews to assess whether taxpayer information is properly protected from unauthorized inspection, disclosure, or use as required by the IRC and to assist in reporting to Congress. The Office of Safeguards conducts the on-site reviews for all the federal agencies and state welfare agencies that receive taxpayer information. IRS’ District Offices of Disclosure and FedState Relations are responsible for conducting the on-site safeguard reviews at all other state and local agencies that receive taxpayer information. There are 33 district offices, 29 of which have responsibilities for overseeing the safeguard reviews at state and local agencies. As of June 1999, there were 230 professional and 24 support staff assigned to the national and district disclosure offices. (App. VIII shows the staffing levels of these offices.) In addition to overseeing the safeguarding program, the district offices have responsibility for a variety of other disclosure activities, such as responding to requests under the Freedom of Information Act or Privacy Act. According to IRS, staff from the responsible IRS office visit the agency to review the procedures established and used by the agency to protect taxpayer information from unauthorized disclosure. In addition, they assess the agency’s need for, and use of, this information. IRS staff are to meet with agency personnel, review agency records, and visit agency facilities where taxpayer information is kept. They then prepare a report detailing their assessment of the agency’s processes and ability to fulfill the requirements of IRC section 6103(p)(4). In addition to conducting the triennial safeguard reviews, IRS District Disclosure Office staff are to conduct annual “need and use” reviews at all state and local agencies involved in tax administration. These reviews are done to validate the agencies’ continued need for and use of the tax information they receive from IRS. IRS’ safeguard reviews over the last 5 years have identified discrepancies in agency safeguard procedures and made recommendations for corrections. The reviews have uncovered deficiencies with agency safeguarding procedures, ranging from inappropriate access to taxpayer information by contractor staff to administrative matters, such as the failure to properly document the disposal of information. Discrepancies found by IRS during the safeguard reviews generally were procedural deficiencies and did not result in known unauthorized disclosures of taxpayer information. In their responses to the discrepancies found and recommendations made by IRS, agencies indicated that they would institute corrective actions. (App. VII provides examples of the discrepancies found by IRS during its safeguard reviews.) As noted above, one of the discrepancies that IRS found during safeguard reviews was that some agencies that received taxpayer information were using contractor personnel in a manner that might allow them access to taxpayer information. In its Report on Procedures and Safeguards Established and Utilized by Agencies for the Period January 1 through December 31, 1998, IRS highlighted this problem to Congress. IRS found agencies using contractor personnel in setting up agency computer systems in a manner that permitted the contractors to see taxpayer information.
IRS also found agencies using contractor personnel in the disposal of taxpayer information, without having agency personnel observe the process to ensure that contractor personnel did not “access” the information. One of the major changes to IRS Publication 1075 in March 1999 was the inclusion of a section devoted to the appropriateness of, and precautions with, using contractor personnel to assist an agency in fulfilling the part of its mission that requires the use of taxpayer information. Some types of administrative discrepancies found by IRS staff during safeguard reviews included, among other things, the following: agencies were not properly documenting what information had been disposed of; agency recordkeeping systems at field offices did not always meet the statutory requirements for accountability; agencies were not properly tracking the shipment of paper documents containing federal taxpayer information; and employees were not always aware of the criminal and civil penalties that can be imposed for unauthorized inspection or disclosure. We requested comments on a draft of this report from the Commissioner of Internal Revenue. Officials representing the Assistant Commissioner for Examination and the Commissioner’s Office of Legislative Affairs provided IRS’ comments at an August 12, 1999, meeting. IRS also provided written comments in an August 16, 1999, letter, which is reprinted in appendix X. IRS was in overall agreement with the draft report and said it fairly represented the scope and use of IRC section 6103 provisions regarding safeguarding taxpayer information. IRS also provided some additional information and technical comments. Where appropriate, we made changes to this report on the basis of these comments. We are sending copies of this report to Senator Fred Thompson, Chairman, and Senator Joseph I. Lieberman, Ranking Minority Member, Senate Committee on Governmental Affairs, and Representative Charles B. Rangel, Ranking Minority Member, House Committee on Ways and Means. We are also sending copies to the Honorable Lawrence H. Summers, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will also send copies to those who request them. If you or your staff have any questions concerning this report, please contact me or Joseph Jozefczyk at (202) 512-9110. Other major contributors to this report are acknowledged in appendix XI. The Internal Revenue Service (IRS) provided us with the following list of federal agencies in the Washington, D.C., metropolitan area that received, or maintained records containing, taxpayer data under the authority of Internal Revenue Code (IRC) section 6103. In addition, IRS identified the following six entities not in the Washington, D.C., metropolitan area that received taxpayer information. These were: the Army and Air Force Exchange, Dallas, TX; the Department of the Treasury, Bureau of Public Debt, Parkersburg, WV; the Navy Exchange Service Command, Virginia Beach, VA; the Department of the Treasury, U.S. Customs, Indianapolis, IN; the Department of Veterans Affairs, Fort Snelling, MN; and the U.S. Railroad Retirement Board, Chicago, IL. As agreed with your office, we did not include these six in our survey because they were located outside the Washington, D.C., metropolitan area. IRS provided us with the following list of state and local agencies that received, or maintained records containing, taxpayer data under the authority of IRC section 6103.
Certain federal, state, and local agencies, and others are authorized under Internal Revenue Code (IRC) section 6103 to receive taxpayer information from the Internal Revenue Service (IRS). The following describes the agencies, bodies, commissions, and other agents authorized by IRC section 6103 subsections to obtain taxpayer information, subject to safeguarding requirements prescribed in IRC section 6103(p)(4). 6103(d)–Disclosures of taxpayer information can be made to state taxing agencies and state and local law enforcement agencies that assist in the administration of state tax laws. Disclosures under this section are to be used only for tax administration purposes, and states must justify the need for this information and must use the data provided. 6103(f)–Certain disclosures of taxpayer information can be made to Committees of Congress and their agents upon written request from the Chairman of the House Committee on Ways and Means, the Senate Committee on Finance, or the Joint Committee on Taxation. Taxpayer information that can be associated with, or otherwise identify (directly or indirectly), a particular taxpayer can only be furnished to the Committee when in closed executive session, unless a taxpayer otherwise consents in writing to the disclosure. Agents, such as the General Accounting Office, and certain other Committees may also receive taxpayer information under subsections (f)(3) and (4). 6103(h)(2)–Disclosures of taxpayer information can be made to the Department of Justice for proceedings involving tax administration before a federal grand jury or any proceedings or investigation that may result in a proceeding before a federal grand jury or federal or state court. 6103(h)(5)–Disclosures of the address and status of a nonresident alien, citizen, or resident of the United States to the Social Security Administration (SSA) and Railroad Retirement Board can be made for purposes of carrying out responsibilities for withholding tax under section 1441 of the Social Security Act for Social Security benefits. 6103(i)(1) and (2)–Disclosures of taxpayer and other information can be made for use in certain criminal investigations. 6103(i)(3)–Disclosures of taxpayer information can be used to apprise appropriate officials of criminal activities or emergency circumstances. 6103(i)(5)–Disclosures of taxpayer information can be made to locate fugitives from justice upon the grant of an ex parte order by a federal district court judge or magistrate. 6103(i)(7)–Disclosures of taxpayer information can be made to officers and employees of the General Accounting Office in conducting audits of IRS; Bureau of Alcohol, Tobacco and Firearms (ATF); and any agency authorized by 6103(p)(6). 6103(j)(1)–Disclosures of taxpayer information can be made to the Department of Commerce (Census and Bureau of Economic Analysis). 6103(j)(2)–Disclosures of taxpayer information can be made to the Federal Trade Commission for statistical purposes. Only corporate returns can be disclosed for legally authorized economic surveys of corporations. (According to IRS, this section is obsolete because the Federal Trade Commission no longer performs these economic surveys.) 6103(j)(5)–Disclosures of taxpayer information can be made to the Department of Agriculture for the purpose of structuring, preparing, and conducting the census of agriculture pursuant to the Census of Agriculture Act of 1997.
Disclosures of taxpayer information can be made to the Department of the Treasury’s Financial Management Service (FMS) for levies related to any federal debt. IRC section 6103(l)(1) and (l)(5) allow a specific type of disclosure between IRS and SSA commonly known as the Continuous Work History Sample Program. Under this disclosure, a small sample (approximately 1%) of the U.S. population’s Social Security-related data, wage information, and self-employment data is collected and used (1) for various studies to monitor trends that may affect Social Security programs; (2) as a model to assist in determining the effects of proposed program changes, including proposed legislative or administrative changes; and (3) to assess funding requirements related to trust funds and the budget. 6103(l)(1)–Disclosures of taxpayer information can be made to the Social Security Administration and Railroad Retirement Board for the administration of the Social Security Act and the Railroad Retirement Act. The common name for this disclosure is the Administration of the Social Security Act Program. Section 6103(l)(1) is very specific as to what information may be disclosed to SSA, and part of this information may be used by SSA only for purposes of carrying out its responsibility under section 1131 of the Social Security Act. 6103(l)(2)–Disclosures of taxpayer information can be made to the Department of Labor and the Pension Benefit Guaranty Corporation for administration of titles I and IV of the Employee Retirement Income Security Act of 1974. 6103(l)(3)–Disclosures of taxpayer information can be made to any federal agency administering a federal loan program. 6103(l)(5)–Disclosures of taxpayer information can be made to the Social Security Administration for the purposes of (1) carrying out an effective return processing program pursuant to section 232 of the Social Security Act and (2) providing information regarding the mortality status of individuals for epidemiological and similar research in accordance with section 1106(d) of the Social Security Act. The common name for this disclosure is the Annual Wage Reporting Program. Section 6103(l)(5) permits SSA and IRS to work together to process and share certain information. SSA and IRS conduct a number of exchanges to identify whether employee, employer, and wage data are correct and employers are submitting information as legally required. 6103(l)(6)–Disclosures of taxpayer information can be made to federal, state, and local child support enforcement agencies for the purposes of establishing and collecting child support obligations from individuals owing such obligations, including locating such individuals. Under IRC section 6103(p)(2), in conjunction with section 6103(l)(6), IRS has authorized SSA to make disclosures to the Office of Child Support Enforcement, a federal agency that oversees child support enforcement at the federal level and acts as a coordinator for most programs involved with child support enforcement. 6103(l)(7)–Disclosures of taxpayer information can be made to federal, state, and local agencies administering certain benefits programs for the purposes of determining eligibility for, or correct amount of, benefits under such programs. Section 6103(l)(7) states that SSA will provide its return information to other agencies to assist them with specific welfare programs.
The states (and other authorized agencies) provide the names and Social Security numbers of welfare applicants or recipients, and SSA provides the authorized information, such as wages and self-employment (net earnings) and retirement income. This disclosure between SSA and the other agencies is called the Beneficiary and Earnings Data Exchange Program. A similar program, the 1099 Program, involves the disclosure of unearned income information between IRS and federal, state, and local agencies administering these programs. 6103(l)(8)–Disclosures of taxpayer information can be made by SSA to other state and local child support enforcement agencies for the same purposes as 6103(l)(6). 6103(l)(9)–Disclosures of taxpayer information can be made to state administrators of state alcohol laws for use in the administration of such laws. The disclosure is limited to information on alcohol fuel producers only. 6103(l)(10)–Disclosures of specific taxpayer information relating to tax refund offsets can be made to the agency requesting such offsets in order to collect specified debts, such as student loans or child support payments. This disclosure between IRS and other agencies was known as the Tax Refund Offset Program. This program is currently undergoing a “transition.” In the past, agencies received pre-offset debtor addresses, debtor identity information, the filing status (if joint), and any payment amount to the spouse of a joint return from IRS. Effective January 1, 1999, Treasury’s Financial Management Service assumed complete responsibility for the Treasury Offset Program. Except in the case of tax refund offsets to collect child support debts, agencies are now receiving offset information under the Treasury Offset Program procedures. Tax refund offset will, in general, be blended, or amalgamated, with other Treasury “offsets,” such as salary offsets. FMS is to perform the blending and tax information is not to be identified beyond FMS, except for agencies involved in collecting child support debts. When tax refund offset information is blended and unidentifiable under the Treasury Offset Program procedures, it is no longer considered return information and section 6103(p)(4) safeguarding procedures are not required. 6103(l)(11)–Disclosures of taxpayer information can be made by SSA to the Office of Personnel Management (OPM) for the purpose of administering the federal employees’ retirement system (chs. 83 and 84 of title 5, U.S.C.). The common name for this disclosure between SSA and OPM is the Federal Employees’ Retirement System Program. It involves a computer match where OPM provides the names and Social Security numbers of federal employees participating in the federal retirement system and SSA provides the wages, self-employment earnings, and retirement income information obtained under IRC sections 6103(l)(1) and (l)(5). 6103(l)(12)–Taxpayer information can be disclosed by IRS to SSA and by SSA to the Health Care Financing Administration (HCFA) to administer the Medicare program. The common name for this type of disclosure is the Medicare Secondary Payer Project. The purpose of this disclosure is to identify the employment status of Medicare beneficiaries to determine if medical care is covered by group health plans. It permits IRS to provide SSA with identity information, filing and marital status, and spouse’s name and Social Security number for specific years for any Medicare beneficiary identified by SSA.
It also permits SSA to disclose to HCFA the names and Social Security numbers of Medicare beneficiaries receiving wages above a specified amount. Additionally, it permits HCFA to disclose certain return information to qualified employers and group health plans. 6103(l)(13)–Disclosures of taxpayer information can be made to the Department of Education to administer the “Direct Student Loans” program. 6103(l)(14)–Disclosures of taxpayer information can be made to U.S. Customs to audit evaluations of imports and exports, and to take other actions to recover any loss of revenue or collection of duties, taxes, and fees determined to be due and owed as a result of such audits. 6103(l)(16)–Disclosures of taxpayer information can be made by SSA to officers or employees of the Department of the Treasury, a trustee or any designated officer, employee, or actuary of a trustee (as defined in the D.C. Retirement Protection Act), for the purpose of determining an individual’s eligibility for, or the correct amount of, benefits under the District of Columbia Retirement Protection Act of 1997. 6103(l)(17)–Disclosures of taxpayer information can be made to the National Archives and Records Administration for the purposes of appraisal of records for destruction or retention. Section 6103(m)(2), (4), (6), and (7) are not subject to 6103(p)(4) safeguarding requirements unless address and entity information is redisclosed to an agent. If redisclosed to an agent, both the agency and the agent must safeguard the information. 6103(m)(2)–Disclosures of taxpayer information can be made to federal agencies for collection of federal claims under the Federal Claims Collection Act. Section 6103(m)(2) authorizes IRS to provide the mailing addresses of taxpayers to any federal agency to locate taxpayers in an attempt to collect federal claims. The common names for this type of disclosure are the Taxpayer Address Request Program and the Recovery and Collection of Overpayment Process. It involves the federal agency providing IRS with a listing of debtors, identified by Social Security number and name, and IRS then providing the agency with the same information and the latest known address. (A simple keyed match of this kind is sketched at the end of this appendix.) 6103(m)(4)–Disclosures of taxpayer information can be made to the Department of Education for collection of Student Loans. 6103(m)(6)–Disclosures of taxpayer information can be made to officers and employees of the Blood Donor Locator Service in the Department of Health and Human Services for the purpose of locating individuals to inform donors of the possible need for medical care and treatment relating to acquired immune deficiency syndrome. 6103(m)(7)–Disclosures of taxpayers’ mailing addresses can be made to SSA for the purpose of mailing the Personal Earnings and Benefit Estimate Statements (Social Security account statements). 6103(n)–Disclosures of taxpayer information can be made to contractors to the extent necessary and for the various activities and services related to tax administration. These disclosures can only be made by the Treasury Department, a state tax agency, SSA, and the Department of Justice and in accordance with regulations prescribed by the IRS Commissioner. 6103(o)(1)–Disclosures of taxpayer information can be made to ATF for administering certain taxes on alcohol, tobacco, and firearms. Tables II.1 and II.2 show, for the agencies we surveyed that received taxpayer information in 1997 or 1998, the authorization under which they received the information.
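The Taxpayer Address Request Program exchange described above is essentially a keyed match: the requesting agency submits debtor identities, and IRS returns the same identities together with the latest known address. The following is only a minimal sketch of that kind of match, written in Python; the record layouts, field names, and sample data are hypothetical assumptions and are not drawn from IRS’ actual systems.

# Minimal illustration of the kind of keyed address match described for the
# Taxpayer Address Request Program. Field names and data are hypothetical.
def match_addresses(debtor_list, address_file):
    """Return each submitted debtor record with the latest known address,
    when one is on file; unmatched debtors come back without an address."""
    results = []
    for debtor in debtor_list:
        record = dict(debtor)  # copy the agency-supplied identity information
        record["latest_known_address"] = address_file.get(debtor["ssn"])
        results.append(record)
    return results

if __name__ == "__main__":
    # Hypothetical agency submission: debtors identified by SSN and name.
    debtors = [
        {"ssn": "000-00-0001", "name": "A. Sample"},
        {"ssn": "000-00-0002", "name": "B. Example"},
    ]
    # Hypothetical address file keyed by SSN.
    addresses = {"000-00-0001": "123 Main St., Anytown, USA"}
    for row in match_addresses(debtors, addresses):
        print(row)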
Internal Revenue Code (IRC) section 6103 allows the Internal Revenue Service (IRS) to disclose taxpayer information to federal agencies and authorized employees of those agencies. Disclosure of taxpayer information is to be used strictly for the purposes outlined by federal statutes and in accordance with IRS policy and procedures. IRC sections 6103(h) and 6103(i) allow IRS to disclose taxpayer information to the employees and officers of any federal agency for tax administration purposes as well as for the administration of federal laws not related to tax. Under 6103(h), IRS can disclose information to the Department of Justice for federal tax investigations and to the Social Security Administration (SSA) and Railroad Retirement Board (RRB) for purposes of withholding taxes. IRC section 6103(i) allows the disclosure of information for use in federal nontax criminal investigations and other activities not related to tax administration. Table III.1 shows some types of taxpayer information disclosed and the disclosure format and frequency. IRC section 6103(j) allows IRS to disclose taxpayer information to the Departments of Agriculture and Commerce and to officers and employees of the Department of the Treasury for statistical use. Table III.2 shows the types of taxpayer information disclosed and the disclosure format and frequency. The taxpayer information provided includes the following: the information returns master file (SSN, name, address); an individual master file extract (SSN, name, address, marital status, exemptions, dependents, income, and return type); corporate income tax return information (name, address, EIN, net income or loss, assets, and gross receipts); employment tax return records (EIN, total compensation paid, taxable period, number of employees, total taxable wages paid, and tip income); the business master file entity (EIN, name, address, filing requirements, accounting period, and employment code); weekly economic data and the economic and agriculture census (SSN, EIN, address, receipts, accounting period, wages, interest, assets, and cost of goods); information from the application for an EIN; and the statistics of income corporate sample (credits, balance sheet, income statement, and tax items). Under IRC section 6103(l), disclosures can be made to certain federal agencies for purposes other than for tax administration. Disclosure of taxpayer information can be made to any federal agency administering a federal loan program, as well as to those federal agencies administering certain programs under the Social Security Act, the Food Stamp Act of 1977, title 38 U.S.C., or certain other housing assistance and benefits programs. Disclosures can also be made to SSA, RRB, the Pension Benefit Guaranty Corporation, and the Department of Labor for the administration of the Employee Retirement Income Security Act of 1974 and for carrying out a return processing program. The Veterans Health Administration, Veterans Benefits Administration, and Department of Housing and Urban Development also receive federal taxpayer information from SSA and IRS under the authority of IRC section 6103(l)(7) for use in administering programs authorized under title 38 and certain housing assistance programs. SSA also receives unearned income information from IRS, which it uses in administering the Supplemental Security Income program. Additionally, IRC section 6103(l) allows disclosure by SSA to the Health Care Financing Administration and to certain other agencies for determining eligibility for, or the correct amount of, benefits.
Table III.3 shows the types of taxpayer information disclosed and the disclosure format and frequency. The taxpayer information provided includes the following: Form 8300 information; tax liability and delinquency information; W-2s and W-3s (wage data submitted by employers); unearned income from various Form 1099s; wages, self-employment earnings, and retirement income; SSN, filing and marital status, taxpayer name, addresses, and employee EINs; and individual income tax return information (SSN, filing status, amount and nature of income, and number of dependents). IRC section 6103(m) allows the disclosure of taxpayer information for collecting federal claims and for locating registered blood donors. All federal agencies can receive the information for collection of claims, such as student loans, under the Federal Claims Collection Act. The Department of Health and Human Services receives the taxpayer information as part of its Blood Donor Locator Service, for the purpose of locating donors. IRC section 6103(o) allows disclosures for the collection of certain taxes on alcohol, tobacco, and firearms. Table III.4 shows the types of taxpayer information disclosed and the disclosure format and frequency. Under the provisions of Internal Revenue Code (IRC) section 6103(d), the Internal Revenue Service (IRS) is authorized to make disclosures for state tax administration purposes to state tax officials and state and local law enforcement agencies. In general, taxpayer information can be disclosed to any state agency, body, or commission, or its legal representative for the administration of state tax laws, including for locating any person who may be entitled to a state income tax refund. Table IV.1 shows some of the types of taxpayer information disclosed and the disclosure format and frequency. In addition to the types of taxpayer information shown in table IV.1, in some states, the Attorney General’s Office receives inheritance tax and estate tax information from IRS, including tax credits and closing letters to taxpayers. This type of taxpayer information is disclosed quarterly on hard copy or magnetic tape. In certain states, such as Texas, that have no state income tax, the State Comptroller’s Office—which is responsible for collecting state sales and inheritance taxes—receives taxpayer information from IRS. The taxpayer information consists of estate and gift tax audit reports and income information, such as Form 1099s, on hard copy or magnetic tape, and transcripts of business returns. This information is received on an ongoing, as well as on a case-by-case, basis. The state of Wyoming also does not have an income tax, but its department of transportation enforces fuel tax laws. IRS provides Wyoming with fuel tax adjustment results on hard copy and only upon specific request. Some cities, such as St. Louis and Kansas City, levy an income-based tax on their residents and those taxpayers that work in the city. These cities receive income tax audit reports from IRS when adjustments are made to wages or self-employment income. This information is received quarterly on hard copy. IRC section 6103(l)(6) allows IRS to disclose taxpayer information to state and local child support enforcement agencies. In general, taxpayer information can be disclosed to any state or local child support enforcement agency for establishing and collecting child support obligations, including any procedure for locating individuals owing such obligations.
IRC section 6103(l)(8) permits the Social Security Administration (SSA) to disclose certain taxpayer information to state and local child support enforcement agencies. However, section (l)(6) also permits the disclosure of the same information, and more, to federal, state, and local agencies. Currently, SSA is not making any disclosures of taxpayer information to state and local child support enforcement agencies under section 6103(l)(8), but is making disclosures to the federal Office of Child Support Enforcement (OCSE) on behalf of IRS. OCSE provides the names and, if known, Social Security numbers. SSA performs computer matches and provides Social Security numbers from SSA records, the last known address from SSA records, and the address of the last known employer from W-2 and W-3 taxpayer information. OCSE then provides the information to the state and local child support enforcement agencies. Table IV.2 shows the other types of taxpayer information disclosed and the disclosure format and frequency. Under IRC section 6103(l)(7), disclosures can be made to state and local agencies administering certain programs under the Social Security Act, the Food Stamp Act of 1977, title 38 U.S.C., or certain other housing assistance and benefits programs. The Deficit Reduction Act of 1984 required state public assistance agencies administering certain programs under the Social Security Act or the Food Stamp Act of 1977 to establish an income eligibility verification system. These agencies receive federal taxpayer information under the authority of IRC section 6103(l)(7) from SSA and IRS to be used solely for the purpose of, and to the extent necessary in, determining eligibility for, or the correct amount of, benefits under the specified programs. The agencies receive wage and self-employment information from SSA through electronic transmissions and unearned income information (Form 1099s) from IRS through magnetic tapes. Table IV.3 shows the type of information disclosed and the disclosure format and frequency. Internal Revenue Code (IRC) section 6103 is very specific about the authorized use of any federal taxpayer data. During our study, Internal Revenue Service (IRS) officials and other federal and state officials indicated that there are many possible authorized uses for tax returns and return information in accordance with IRC section 6103 requirements. Agency officials stated that taxpayer information is used for tax administration and law enforcement purposes, for the administration of federal laws not related to tax administration, for statistical uses, for establishing and collecting child support obligations, and for determining eligibility for benefits. Table V.1 outlines some of the specific uses of federal taxpayer information.
The possible uses include the following:
tax administration and tax withholding purposes
criminal investigation and litigation
reporting criminal activities
judicial or administrative procedures
enforcing federal criminal or civil statutes
locating fugitives from justice
conducting government program audits
statistical purposes
offsets
storing and maintaining data for IRS
administration of welfare and public assistance programs
collection and enforcement of child support
verifying that a taxpayer filed an original or amended return and initiating a state audit
initiating state penalty investigations
audit selection
providing listings of alleged violators of criminal tax laws
verifying or updating addresses
skip tracing
sales tax matching
identifying nonfilers
determining discrepancies in reporting of income
identifying S corporation shareholders who avoid state tax by taking dividends in lieu of wages
statistical and revenue forecasting
identifying payers and employers not reporting to the state and determining underreporters
identifying partnerships with changes in the number of partners to detect possible sale of a partnership interest
comparing officers’ salaries and total wages paid on corporate returns to withholding tax filed
comparing federal tax withheld to state tax withheld
locating delinquent taxpayers
identifying out-of-state income
As a condition of receiving taxpayer information, agencies must show, to the satisfaction of the Internal Revenue Service (IRS), that their policies, practices, controls, and safeguards adequately protect the confidentiality of the taxpayer information they receive from IRS. The agencies must ensure that the information is used only as authorized by statute or regulation and disclosed only to authorized persons. IRS has implemented specific guidelines that all federal, state, and local agencies are to follow to properly safeguard taxpayer information. These guidelines, outlined in IRS Publication 1075, Tax Information Security Guidelines for Federal, State and Local Agencies, are summarized below. Federal, state, and local agencies, and other authorized recipients, may request taxpayer information from IRS in the form of a written request signed by the head of the requesting agency or other authorized official. IRS also requires that a formal agreement—a Safeguard Procedures Report—be provided by the agency that specifies the procedures established and used by the agency to prevent unauthorized access and use and describes how the information will be used upon receipt. The Safeguard Procedures Report should be submitted to IRS at least 45 days before the scheduled or requested receipt of taxpayer information. Any agency that receives taxpayer information for an authorized use under Internal Revenue Code (IRC) section 6103 may not use the information in any manner or for any purpose not consistent with that authorized use. If an agency needs federal tax information for a different authorized use under a different provision of IRC section 6103, a separate request under that provision is necessary. An unauthorized secondary use is specifically prohibited and may result in discontinuation of disclosures to the agency and in the imposition of civil or criminal penalties on the responsible officials. Before granting agency officers and employees access to taxpayer information, officers and employees should certify that they understand security procedures and instructions requiring their awareness and compliance. Employees should be required to maintain their authorization to access taxpayer information through annual recertification.
As part of the certification and at least annually, employees should be advised of the provisions of IRC 7213(a), 7213A, and 7431. Agencies should make officers and employees aware that disclosure restrictions and the penalties apply even after employment with the agency has ended. Taxpayer information may be obtained by state tax agencies from IRS only to the extent the information is needed, and is reasonably expected to be used, for state tax administration. Some state disclosure statutes and administrative procedures permit access to state tax files by other agencies, organizations, or employees not involved in tax matters. IRC 6103(d) does not permit access to taxpayer information for purposes other than for state tax administration. State and local tax agencies are not authorized to furnish taxpayer information to other state agencies, tax or nontax, or to political subdivisions, such as cities or counties, for any purpose, including tax administration. State and local tax agencies may not furnish taxpayer information to any other states, even where agreements have been made, informally or formally, for the reciprocal exchange of state tax information. Also, nongovernment organizations, such as universities or public interest organizations performing research, cannot have access to taxpayer information. Statutes that authorize disclosure of taxpayer information do not authorize further disclosures. Unless IRC section 6103 provides for further disclosures by the agency, the agency cannot make such disclosures. Each agency must have its own exchange agreement with IRS or with the Social Security Administration (SSA). When an agency is receiving data under more than one section 6103 authorization, each exchange or release of taxpayer information must have a separate agreement. An agency’s records of the taxpayer information it requests should include some account of the result of its use or why the information was not used. If an agency receiving taxpayer information on a continuing basis finds it is receiving information that, for any reason, it is unable to utilize, it should contact IRS to modify the request. Federal, state, and local agencies authorized under IRC section 6103 to receive taxpayer information are required by IRC section 6103(p)(4)(A) to establish a permanent system of standardized records of requests made by or to them for disclosure of the information. The records are to be maintained for 5 years or for the applicable records control schedule, whichever is longer. When taxpayer information is received in electronic form, authorized employees of the recipient agency must be responsible for securing magnetic tapes or cartridges before processing and ensuring that the proper acknowledgment form is signed and returned to IRS. Tapes containing federal taxpayer information, any hard-copy printout of a tape, or any file resulting from the processing of a tape is to be recorded in a log that identifies (1) date received; (2) reel or cartridge control number and contents; (3) number of records; (4) movement; and (5) if disposed of, the date and method of disposition. Taxpayer information, other than that in electronic form, must be maintained by (1) taxpayer name; (2) tax year(s); (3) type of tax return or return information; (4) reason for the request; (5) date requested; (6) date received; (7) exact location of the taxpayer information; (8) who has had access to the data; and (9) if disposed of, the date and method of disposition.
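To make the recordkeeping elements listed above concrete, the following is a minimal sketch of how such a standardized tracking record might be represented in Python. The field names, structure, and methods are illustrative assumptions only; IRS Publication 1075 prescribes the required elements, not any particular format or technology.

# Illustrative sketch of a standardized record for tracking non-electronic
# federal taxpayer information, using the elements summarized above.
# The structure and field names are assumptions for illustration only.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class TaxInformationRecord:
    taxpayer_name: str
    tax_years: List[int]
    information_type: str              # type of tax return or return information
    reason_for_request: str
    date_requested: date
    date_received: Optional[date] = None
    exact_location: str = ""
    access_log: List[str] = field(default_factory=list)   # who has had access
    disposed_on: Optional[date] = None
    disposal_method: Optional[str] = None

    def record_access(self, employee: str) -> None:
        """Note an employee who has had access to the information."""
        self.access_log.append(employee)

    def record_disposal(self, when: date, method: str) -> None:
        """Note the date and method of disposition."""
        self.disposed_on = when
        self.disposal_method = method

A comparable structure could capture the electronic-media log described above (date received, reel or cartridge control number and contents, number of records, movement, and disposition).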
If the agency has the authority to make further disclosures, information disclosed outside the agency must be recorded on a separate list that reflects to whom the disclosure was made, what was disclosed, and why and when it was disclosed. IRS has categorized taxpayer and privacy information as high-security items. Security for a document, item, or an area may be provided by locked containers of various types, vaults, locked rooms, locked rooms with reinforced perimeters, locked buildings, guards, electronic security systems, fences, identification systems, and control measures. The required security for taxpayer information received depends on the facility, the function of the agency, how the agency is organized, and what equipment is available. Agencies receiving taxpayer information are required to establish a uniform method of protecting data and items that require safeguarding. The Minimum Protection Standards System, which is utilized by most agencies, has been designed to provide agencies with a basic framework of minimum security requirements. Since some agencies may require additional security measures, they should analyze their individual circumstances to determine the security needs at their facility. Care must be taken to deny access to areas containing taxpayer information during normal working hours. This can be accomplished by restricted areas, security rooms, or locked rooms. In addition, taxpayer information in any form (computer printout, photocopies, tapes, notes, etc.) must be protected during nonworking hours. This can be done through a combination of methods, including a secured or locked perimeter or secured area. When it is necessary to move taxpayer information to another location, plans must be made to properly protect and account for all of the information. Taxpayer information must be in locked cabinets or sealed packing cartons while in transit. Accountability should be maintained to ensure that cabinets or cartons do not become misplaced or lost. The handling of taxpayer information and tax-related documents must be such that the documents do not become misplaced or available to unauthorized personnel. Only those employees who have a need to know and to whom disclosures may be made under the provisions of the statute should be permitted access to information. In the event that taxpayer information is hand-carried by an individual in connection with a trip or in the course of daily activities, it must be kept with that individual and protected from unauthorized disclosure. Data stored and processed by computers and magnetic media should be physically secured and controlled in a restricted access area. If the confidentiality of the taxpayer information can be adequately protected, alternative work sites, such as employees’ homes or other nontraditional work sites, can be used. Regardless of location, taxpayer information remains subject to the same safeguard requirements and the highest level of attainable security. Agencies are required by IRC 6103(p)(4)(C) to restrict access to taxpayer information only to persons whose duties or responsibilities require access. Taxpayer information should be clearly labeled “federal tax information” and handled in such a manner that it does not become misplaced or available to unauthorized personnel. Access to taxpayer information must be strictly on a need-to-know basis. Information must never be indiscriminately disseminated, even within the recipient agency.
Agencies must evaluate the need for taxpayer information before the data are requested or disseminated. An employee’s background and security clearance should be considered when designating authorized personnel. No person should be given more taxpayer information than is needed to perform his or her duties. To avoid inadvertent disclosures, it is recommended that taxpayer information be kept separate from other information to the maximum extent possible. In situations where physical separation is impractical, the file should be clearly labeled to indicate the taxpayer information is included and the file should be safeguarded. Any commingling of data on tapes should be avoided. Processing of taxpayer information in magnetic media format, microfilms, photo impressions, or other formats should be performed by agency-owned and -operated facilities, or contractor or agency shared facilities. All systems that process taxpayer information must meet the provisions of OMB Circular A-130, appendix III and Treasury Directive Policy 71-10. The Department of Defense Trusted Computer System Evaluation Criteria (DOD 5200.28-STD), commonly called the “Orange Book,” should be used as the basis for establishing systems that process taxpayer information. All computer systems processing, storing, and transmitting taxpayer information must have computer access protection controls (controlled access protection level C-2). To meet C-2 requirements, the operating security features of the system must have (1) a security policy, (2) accountability, (3) assurance, and (4) documentation. Agencies should assign overall responsibility to an individual (security officer) who is knowledgeable about information technology and applications. This individual should be familiar with technical controls used to protect the system from unauthorized entry. The two acceptable methods of transmitting taxpayer information over telecommunications devices are encryption and the use of guided media. Encryption involves the altering of data objects in a way that the objects become unreadable until deciphered. Guided media involves the use of protected microwave transmissions or the use of end-to-end fiber optics. Connecting the agency’s computer system to the Internet will require “firewall” protection to reduce the threat of intruders accessing data files containing taxpayer information. Agencies receiving taxpayer information from IRS are also required to conduct internal inspections. The purpose of these inspections is to ensure that adequate safeguard and security measures are maintained. Agencies should submit copies of these inspections to IRS with their annual Safeguard Activity Report. IRC section 6103(p)(4)(E) requires agencies receiving taxpayer information to file a report that describes the procedures established and used by the agency for ensuring the confidentiality of the information received from IRS. The Safeguard Procedures Report is a record of how taxpayer information is to be processed and protected from unauthorized disclosure. Agencies should submit a new Safeguard Procedures Report every 6 years or whenever significant changes occur in their safeguard program. Agencies must file an annual Safeguard Activity Report, which advises IRS of changes to the procedures or safeguards described in the Safeguard Procedures Report.
The Safeguard Activity Report also (1) advises IRS of any future actions that will affect the agency’s safeguard procedures, (2) summarizes the agency’s current efforts to ensure the confidentiality of the taxpayer information, and (3) certifies that the agency is protecting taxpayer information in accordance with IRC section 6103 requirements and the agency’s own security requirements. A safeguard review is an on-site evaluation of the use of federal tax information received from IRS and the measures used by the receiving agency to protect that data. IRS conducts on-site reviews of agency safeguards regularly. Reviews of state and local agencies are conducted by IRS District Disclosure personnel. Reviews of federal agencies and state welfare agencies are conducted by the IRS Office of Governmental Liaison and Disclosure, Office of Safeguards. IRS safeguard reviews cover the six requirements of IRC section 6103(p)(4), which are (1) recordkeeping, (2) secure storage, (3) restricting access, (4) other safeguards, (5) reporting requirements, and (6) disposal. Agencies are required by IRC section 6103(p)(4)(F) to take certain actions upon completion of their use of taxpayer information in order to protect its confidentiality. Agency officials and employees should either return the information, and any copies, or make the information “undisclosable” and include in the agency’s annual report a description of the procedures used. If the agency elects to return the information, a receipt process should be used. Taxpayer information should never be provided to agents or contractors for disposal unless authorized by the IRC. The Internal Revenue Service (IRS) routinely conducts on-site reviews of agencies’ safeguard procedures to ensure that the procedures fulfill IRS requirements for protecting taxpayer information from unauthorized disclosure. After completing the review, IRS prepares a report of its findings and recommendations and sends the report to the agency for comment. Upon receiving the agency’s comments, IRS annotates its report to indicate whether it accepts responses as correcting any discrepancies reported. The following excerpts are examples of the findings, discussions, recommendations, agency responses, and IRS comments found in recent IRS reports of safeguard reviews. The agency permitted a number of contractors to have access to return information. Some of the contractors are authorized to have access, while others are not. Also, when contractor access was authorized, the agency was not always including “safeguarding” clauses in all contracts. The agency uses hundreds of contractors. Internal Revenue Code (IRC) section 6103 generally does not authorize contractors to have access to federal taxpayer information. Certain exceptions exist, such as section 6103(n), which permits contracts for tax administration purposes, and section 6103(m)(2) and (7), which permit disclosures for the collection of federal debt and for the mailing of personal earnings and benefits estimate statements, respectively. However, there is not an exception for the purposes of administering the agency responsibilities under the act, nor for most other IRC section 6103 authorized disclosures. 
The agency uses contractors for the printing of the personal earnings and benefits estimate statements and has included a “safeguarding” clause, which requires that the contractor’s employees be made aware of the taxpayer information, its restricted access and use, and the penalty provisions for unauthorized access or use. The agency also uses a contractor for developing microfilm with taxpayer information. This contractor is authorized access, but the contract does not contain “safeguarding” language relating to taxpayer information. It does have confidential clauses relating to the Privacy Act provisions. The agency has also contracted out for the disposal of the paper Form W-2s and W-3s received. An earlier contract allowed for the contractor to shred the material to 2-inch strips or less, which does not meet the IRS required standard of 5/16-inch or less for shredding. The current contract states that all material will be totally destroyed beyond legibility or reconstruction through shredding, maceration, or pulping. However, a visit to the contractor’s site revealed that the contractor is shredding material, but not always to the original 2-inch requirement. The required “safeguarding” clauses are not in the contract, and the employer is not advising his employees of the confidentiality and penalties associated with accessing taxpayer information. Many other storage, retrieval, and disposal activities are contracted out by the agency. Two units of the agency use contractors to conduct most of the activities at their facilities, where beneficiary files (with taxpayer information) are stored in open boxes. This is also true of the records center that the agency contracts with to store, dispose of, and retrieve millions of beneficiary files. Other units of the agency are also contracting out for disposition of information. IRC section 6103 does not authorize these contractors to have access to taxpayer information, which they do. In order to comply with IRC section 6103 and with IRS standards, the agency needs to review its use of contractors. When contractors are authorized to have access to taxpayer information, the agency needs to ensure that “safeguarding” clauses are included in the contracts. When contractors are not authorized access to this information, the agency needs to ensure that it is not permitting such access. Specific examples include adding the safeguarding clauses to the microfilm development contract; adding the safeguarding clauses to the contract for the disposal of paper return information, mainly W-2s and W-3s; ensuring that disposal methods meet IRS standards; developing policies and procedures to ensure that contractors who are not authorized to have access do not have access; and making units and field offices aware of “unauthorized access” by contractors. The agency agreed that safeguarding clauses need to be included in contracts when contractors are authorized to have access to taxpayer information and that contractors should not have access unless authorized. IRS was still reviewing this agency’s safeguard report and had not finalized its comments at the time we prepared our report. The recordkeeping system at the agency’s field offices does not meet all of the statutory requirements for tax information accountability.
When federal tax returns or return information are received, agencies are required to maintain a record of taxpayer name, tax year(s), type of information, reason for request, date requested, date received, exact location of data, and who has had access to the data. Further, if and when the data are disposed of, agencies are required to maintain a record of the date and method of disposition. Agency field offices maintain a system of records for tracking documents and evidence obtained during a criminal investigation. Returns and return information are generally placed in an evidence envelope and associated with the case files, which are kept in the office’s filing area. The envelope is annotated as to contents and any additional descriptive information the case agent may write down. The agency’s system of standardized records contained many of the required items listed above, but not all of them. Further, tax documents controlled by the agency’s seizure team unit may not necessarily show who has had access to the information. Since information used to track returns and return information is dependent upon information furnished by the case agent, the agency should ensure that the agents are aware of the elements required to meet the statutory requirements for tracking federal tax data. Also, the seizure team unit may wish to consider using some type of “charge-out” form to record accesses to tax information. The agency uses a central recordkeeping system for maintaining all investigative files. The system is outlined in the Federal Register. During IRS’ review, access to information by the IRS team was limited to the federal tax return and return information contained in the evidence envelope, and not to the entire file. Information regarding the taxpayer name, tax year(s), reasons for request, and date requested is contained in the case file and supplied to IRS during the request for the information. The date received and type of information are maintained in the evidence log. Access to case information is restricted based on the need-to-know and to individuals having a file on the case. Agency procedures used for controlling access to federal tax return and return information within the seizure team unit are the same procedures used for investigative information. Information is restricted to individuals with a role in the asset forfeiture. Along with the agency’s response, the appropriate Federal Register cite was provided. The agency’s response was accepted. Agency employees that have access to federal tax data are not aware of the criminal and civil penalties that can be imposed for unauthorized disclosure of the data. IRS Publication 1075 requires that, as part of an agency’s employee awareness program, each employee that has access to federal tax data should receive copies of IRC sections 7213(a) and 7431, which describe the criminal and civil penalties applicable to the unauthorized disclosure of federal tax data. In addition, employees must be advised at least annually of these provisions. Personnel that IRS’ review team talked with could not recall receiving copies of the IRC penalty provisions. Employees receive periodic reminders about protecting sensitive information; however, they are not specifically reminded of the provisions of IRC sections 7213(a) and 7431.
All employees that are authorized to have access to federal tax data should receive copies of IRC sections 7213(a) and 7431, and they should be reminded at least annually of the criminal and civil penalties that can be imposed under the IRC for the unauthorized disclosure of federal tax data. Although employees were not specifically aware of the penalties for unauthorized disclosure of federal tax data as contained in the IRC, agency employees knew about the penalties for unauthorized disclosure of information contained in investigative files. The revised IRS Publication 1075 now contains penalty provisions in exhibits 3 and 4. Along with the agency response, IRS received a copy of Security Bulletin 96-03 with attachments A-2 and A-3, with instructions that the information in the document be reviewed annually by all personnel who have access to tax return and return information provided to the agency by IRS. Observance of Security Bulletin 96-03 will satisfy the IRS requirement. The last Safeguard Activity Report for this agency was dated June 29, 1995—2 years before the review. Also, the report did not contain the information as required in IRS Publication 1075. Additionally, IRS records showed the last Safeguard Procedures Report was submitted in 1988. The statute requires reports to be furnished to IRS describing the procedures established and utilized to ensure the confidentiality of tax data received from IRS. After the submission of the Safeguard Procedures Report, a written Safeguard Activity Report is to be submitted annually to give information regarding the agency’s safeguard program. The Safeguard Procedures Report should be updated as changes occur, and a new report submitted when warranted. A Safeguard Activity Report must be submitted to IRS no later than January 31 each year. The report must contain the required information as shown in IRS Publication 1075. Because of changes within the agency since 1988, a current Safeguard Procedures Report was requested. The agency responded that it would comply with all reporting requirements. It assigned its internal audit unit the annual inspection as required by IRS Publication 1075 and planned to submit the Safeguard Activity Report. The agency submitted an updated Safeguard Procedures Report. IRS accepted the response, but explained to the agency that the Safeguard Procedures Report was not a “one-time” report and that it should be updated as changes occur and a new one submitted when warranted. IRS requested that a revised version be submitted reflecting changes made as a result of IRS’ review. The agency’s records did not list some employees who were receiving and using taxpayer information to determine Medicaid eligibility. The Deficit Reduction Act of 1984 requires states to have an income and eligibility verification system for use in administering certain benefits programs. State welfare agencies are required to obtain and use unearned income data from IRS and other wage and income data from SSA in the verification process of these benefits programs. Accordingly, IRC section 6103 authorizes the disclosure of taxpayer information to federal, state, and local agencies by IRS or SSA for use in the administration of these benefits programs. As a condition of receiving taxpayer information, state welfare agencies are required to maintain a permanent system of standardized records that documents all requests for, receipt of, and disclosures of taxpayer information made to or by the agencies.
During its review of this agency, IRS found that, while some employees acknowledged using taxpayer information, the agency’s records did not list the employees as having received taxpayer information. IRS found that taxpayer information, in the form of a printout, was being disclosed to Medicaid technicians who are stationed at various state hospitals. The technicians receive the information to determine Medicaid eligibility for applicants who were hospitalized. Upon receipt from the agency’s mailroom, the printout is accompanied by an acknowledgment form that employees must sign, indicating receipt of taxpayer information. IRS found that technicians were properly signing the acknowledgment form and returning it to the mailroom to indicate receipt of the information. However, the agency’s records did not reflect that taxpayer information was being disclosed from the agency to its employees located at these various state hospitals. The state hospitals that receive taxpayer information should be included so that the agency’s records reflect a complete and accurate listing of all requests, receipts, and disclosures of taxpayer information. The Medicaid technicians are stationed at the state hospitals at various times. For this reason, any disclosure of taxpayer information to these hospitals will be managed by an agency coordinator. To improve recordkeeping, the coordinator will provide a listing of the disclosures, and this list, along with the agency acknowledgment forms, will be maintained in the standardized records. The General Services Mail and Distribution Manager will ensure that the records are received. The agency’s response was acceptable. Table VII.1 summarizes some of the other deficiencies found during IRS’ on-site safeguard reviews of federal, state, and local agencies. The specific deficiencies noted included the following. No system existed for ensuring that all keys to secure areas were accounted for or that access to keys was restricted. No records existed of when taxpayer information was received and destroyed, or of how the information was destroyed. Taxpayer information was locked in the supervisor’s office, but not in locked containers or file cabinets, which would properly protect the information from inadvertent or unauthorized disclosure. The agency mailroom was not secure during nonduty hours, and employees were leaving taxpayer information unsecured, in unlocked containers. There was no reconciliation of transmittal documents to actual receipts and shipments of federal return information. There was not adequate protection for tax information. There was no agency requirement that containers be locked, and some containers could not be locked. There was not a specific individual responsible for physical security. Ground floor entrances were not locked during office hours, and there was a need for “Employee Only” signs. IRS tapes and income and eligibility verification system documents were transported via unsecured courier service. Tax information was combined with nontax information and accessible by other employees not directly involved in the program. Several federal tax documents were found that were not labeled as such. The agency was sharing taxpayer information with other state agencies and contractors that were not authorized to receive information. The agency was using an unauthorized method of destroying taxpayer information. Existing procedures for repairs to equipment did not appear to address removal of federal return information before repairs are made.
The agency was not utilizing proper destruction procedures for taxpayer information that is no longer being used. Computer systems containing tax information did not display warning banners reminding employees of safeguarding requirements and associated penalties. The agency was not promptly removing from the system employees that no longer needed access to taxpayer information. Taxpayer data was not transmitted through secure communication lines to prevent unauthorized use or access. Unsecured dial-in modems were being used for taxpayer information on agency systems, and information on the mainframe was not adequately restricted. Employees were not properly trained on all aspects of safeguarding tax information. Some were not aware of the civil and criminal penalties associated with unauthorized disclosure or of the Taxpayer Browsing Protection Act. Internal security inspections were not conducted, or the results were not documented. There was no documentation of corrective actions, if any were taken. The agency needs to post signs and send memos to remind employees of their responsibility to safeguard federal tax information. Listed below are the staffing levels, as of June 1999, for IRS’ national and district offices that are responsible for IRS’ safeguarding program. In addition to overseeing the safeguarding program, the district offices have responsibilities for a variety of other disclosure activities. These activities include, among other things, conducting disclosure awareness seminars for state and local agency personnel, processing Freedom of Information Act and Privacy Act requests, processing ex parte orders for grand jury or federal criminal investigations, testifying in federal court to certify that certain documents are true copies of tax return information, and reviewing subpoenas served on IRS personnel to advise them of what they can and cannot disclose in court. In addition to those named above, Michelle Bowsky, John Gates, Tim Outlaw, Anne Rhodes-Kline, Kirsten Thomas, and Carrie Watkins made key contributions to this report. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC
Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO assessed the disclosure practices and safeguards employed by the Internal Revenue Service (IRS) and other federal, state, and local agencies to protect taxpayer information, focusing on: (1) which federal, state, and local agencies receive taxpayer information from IRS; (2) what type of information they receive; (3) how the taxpayer information is being used; (4) what policies and procedures the agencies are required to follow to safeguard taxpayer information; (5) how frequently IRS is to monitor agencies' adherence to the safeguarding requirements; and (6) the results of IRS' most recent monitoring efforts. GAO noted that: (1) there were 37 federal and 215 state and local agencies that received, or maintained records containing, taxpayer information under provisions of section 6103 during 1997 or 1998; (2) the information that agencies received included, among other things, the taxpayers' names, Social Security numbers, addresses, and wages; (3) the information came in a variety of formats; (4) some agencies received the information on a regular schedule; (5) others received the information on an as-needed basis, such as while conducting criminal investigations; (6) federal, state, and local agencies said they used taxpayer information for one of several purposes, such as administering state tax programs, assisting in the enforcement of child support programs, verifying eligibility and benefits for welfare and public assistance programs, and conducting criminal investigations; (7) before receiving taxpayer information from IRS, agencies are required to advise IRS how they intend to use the information and to provide IRS with a detailed safeguard plan that describes the procedures established and used by the agency for ensuring the confidentiality of the information they want to receive; (8) these safeguard plans are supposed to be updated every 6 years or if significant changes are made to the agencies' procedures; (9) agencies are also required to submit annual reports to IRS summarizing their efforts to safeguard taxpayer information and any minor changes to their safeguarding procedures; (10) in addition to providing IRS with safeguarding plans and annual reports, agencies' Offices of Inspector General may also review internal agency programs for safeguarding restricted or classified information; (11) IRS conducts on-site reviews to ensure that agencies' safeguard procedures fulfill IRS requirements for protecting taxpayer information; (12) IRS' National Office of Governmental Liaison and Disclosure, Office of Safeguards, has overall responsibility for safeguard reviews to assess whether taxpayer information is properly protected from unauthorized use or access as required by the Internal Revenue Code and to assist in reporting to Congress; (13) IRS' safeguard reviews have identified discrepancies in agency safeguard procedures and made recommendations for corrections; and (14) the reviews have uncovered problems with agency safeguarding procedures, ranging from inappropriate access to taxpayer information by contractor staff to administrative matters, such as the failure to properly document the disposal of information.
Although it is still a small part of the U.S. economy, electronic commerce is growing rapidly. For example, according to the U.S. Census Bureau, retail electronic commerce dollar volume, though less than 1 percent of overall U.S. retail sales, increased in all but two of the last six quarters. Moreover, while precisely predicting future electronic commerce volume is difficult, in June 2000 we reported that business-to-consumer Internet sales were estimated to increase to between $78 billion and $143 billion in 2003, and that business-to-business Internet sales were estimated to increase to between about $1.5 and $2.2 trillion in that same timeframe. According to GartnerGroup, a private research firm, through 2006 the pace of innovation will increase as enterprises institutionalize electronic business, and small businesses “must embrace this transition or risk their long-term viability and survival.” The federal government is taking steps to increase its use of electronic commerce, particularly in the area of conducting procurements on-line. For example, the President has designated expanding the application of on-line procurement a major reform for fiscal year 2002. Further, according to a recent Congressional Research Service report, agency Web sites provided various information on federal procurement, including bid opportunities. Moreover, procurement opportunities for small businesses and for women- and minority-owned businesses were also often identified on these Web sites. Among the major federal agencies maintaining procurement Web sites are DLA, GSA, and the National Aeronautics and Space Administration. One type of on-line procurement program is a multivendor Internet-based purchasing site, sometimes called an “electronic mall.” An example of an electronic mall is GSA Advantage!, in which government buyers can search listings, compare prices, and purchase items on-line much as a private individual might purchase an item from an on-line retailer. As of July 1, all vendors on the GSA schedule were required to electronically submit product descriptions and price information to GSA Advantage!. Another electronic mall is DLA’s Defense Medical Logistics Standard Support (DMLSS) E-CAT program, which operates in a similar manner to GSA Advantage!, except that vendors must have an indefinite delivery/indefinite quantity contract with DLA to participate. A different type of on-line procurement program model is GSA’s Information Technology Solutions Shop (ITSS) program, which is used for larger or more complex purchases. The ITSS on-line purchasing program maintains an inventory of contractors through which federal buyers can get quotations in response to requirements documents. Table 1 summarizes how each of these on-line programs works and the products that can be obtained using them. These three on-line procurement programs are small but growing in comparison to overall federal procurement dollars. According to the Federal Procurement Data System (FPDS), the government procured about $232 billion and $209 billion in goods and services in fiscal years 2000 and 1999, respectively. The three on-line programs in our review grew as a percentage of total federal procurement dollars from about 0.5 percent in fiscal year 1999 to about 1 percent in fiscal year 2000. Table 2 shows actual and estimated dollar volumes for the three programs and their growth over three fiscal years. Other on-line procurement Web sites also support government purchasing.
These sites include the Department of Defense’s (DOD) EMALL program, which is planned as the single DOD electronic mall, and the National Institutes of Health Intramall program. The private sector also offers on-line procurement Web sites that support government buying activities. Beyond its on-line procurement programs, the federal government also supports electronic commerce by sponsoring programs that provide electronic commerce education to businesses. For example, each of the four federally funded business assistance programs that you asked us to review provides electronic commerce education as part of its operations. Each program also uses nonfederal organizations such as nonprofit organizations or contractors to perform its education services. However, as shown in table 3, the programs differ in focus and the target clients served. The small business share of federal procurement dollars awarded through three on-line procurement sites was higher than the governmentwide small business share, as reported by FPDS, the central repository of governmentwide procurement data. However, obstacles to conducting electronic business with the federal government continue to be cited by organizations representing or working with small businesses and business assistance program officials. Some of these obstacles relate to the general readiness of small businesses to conduct electronic commerce while others are specific to how the government has implemented electronic procurement activities. The government has taken, or plans to take, actions that are expected to address some of the government-specific obstacles. As figures 1 and 2 illustrate, the share of procurement dollars awarded to small businesses through the three on-line programs in fiscal years 2000 and 1999, respectively, was greater than their governmentwide share, as reported by FPDS. These on-line procurement programs also exceeded the governmentwide goal of a 23-percent share for small businesses. Most of the contract awards made through DMLSS E-CAT and GSA Advantage! were small, which may at least partially account for the relatively large share of dollars awarded to small businesses in these programs. Small businesses generally obtain a greater percentage of contract awards of $25,000 or less (e.g., 43 percent for non-credit-card awards in fiscal year 2000), and, in fiscal year 2000, 91 percent of DMLSS E-CAT awards and 93 percent of GSA Advantage! awards were $25,000 or less. (Only 3 percent of ITSS awards were $25,000 or less.) Although small businesses received a higher share of awards in the three on-line procurement programs than the governmentwide share, some small businesses still face reported obstacles to successfully participating in on-line government purchasing activities. Obstacles reported generally fall into two categories: (1) those relating to general readiness—the willingness and ability of small businesses to conduct business electronically and (2) those specific to conducting procurements electronically with the federal government. Table 4 lists the reported obstacles by category. While these obstacles were reported in the context of small businesses, some—such as security and privacy—also apply to all businesses. As the relatively large small-business share of awards made through the three federal on-line procurement programs shows, some small businesses are overcoming these reported obstacles.
Still, as the federal government continues to implement electronic procurement initiatives, it is essential that it consider the obstacles that some small businesses face and work to implement solutions that address these obstacles. Small businesses, in turn, must act to develop, maintain, operate, and evolve effective Web-based approaches to improve the likelihood of their successfully conducting business with the government. Appendix II provides additional information on these reported obstacles and various government actions being taken to address some of them. An example of such an action is GSA’s Federal Business Opportunities (FedBizOpps) Web site, which has been designated the single governmentwide point of electronic entry on the Internet where vendors can access all the information they need to bid on available government business opportunities greater than $25,000. Each of the four federally funded business assistance programs in our review provided electronic commerce education as part of its operations, although the level of involvement varied. Three of these business assistance programs are oriented toward management issues and address electronic commerce as only one part of their responsibilities. In contrast, the fourth program, ECRC, focused entirely on electronic commerce. The ECRC program was terminated September 30, 2001. While coordination at the headquarters level for these programs was limited, the local offices generally coordinated their various electronic commerce activities. Although officials from the three management-oriented programs stated that they expect local offices to address electronic commerce issues, the standard agreements for these three programs do not require local entities to report performance metrics associated with electronic commerce. Accordingly, nationwide statistics on the electronic commerce education activities for the three management programs are not available. As a result, we contacted six local offices for each of these programs to determine whether they provided electronic commerce education. All but one of the local offices we contacted indicated that they offered electronic commerce education or assistance to their clients. Table 5 shows the types of electronic commerce assistance activities provided by the six local offices in each program we contacted. For example, local offices provided formal training as well as counseling or technical assistance to individual clients. Subjects covered by the three management-oriented programs’ local offices in their electronic commerce assistance activities are shown in table 6. These subjects ranged from general introductory material to technical or government-specific topics. According to local and regional office officials, offices tailor the types of topics offered to meet local and individual client needs. As for the ECRC program, each of the centers was required to make available a standard set of training courses that was centrally maintained. Standard training courses that ECRCs provided included introductory material as well as technical and DOD-specific courses. In fiscal year 2000, ECRCs reported providing 3,468 training courses with a total enrollment of 53,800 students, of whom 37,968 were DOD staff and 15,832 were non-DOD staff, including business owners or employees (some of these may be multiple courses taken by the same client).
Among non-DOD staff, the courses with the highest number of participants, accounting for about two-thirds of non-DOD training, were the following:
- Hypertext Markup Language (HTML) (2,987 non-DOD participants);
- Marketing on the Internet (2,907 non-DOD participants);
- Internet as a business platform (1,772 non-DOD participants);
- Getting started with electronic commerce (1,620 non-DOD participants); and
- Business opportunities with DOD through electronic data interchange (1,494 non-DOD participants).
The six regional ECRCs we contacted also reported providing other types of electronic commerce education, such as one-on-one technical assistance, conference presentations, and on-line training in electronic commerce. The following examples illustrate how the four assistance programs helped businesses in the electronic commerce arena and also demonstrate the differences in approach between the more management-oriented SBDCs and MEPs and the more federally and technically oriented PTACs and ECRCs.
- An SBDC helped two high school students set up an Internet advertising business. The company is now incorporated, and the proprietors received the 2001 SBA Young Entrepreneur of the Year Award.
- A MEP helped a small cabinet manufacturer develop a complete marketing plan, introduced it to electronic business, and designed a company Web site.
- A PTAC helped clients with the on-line DOD central contractor registry and trained them on how to search FedBizOpps.
- An ECRC provided hands-on training on DLA bid boards and showed the client the award notification menu on one bid board that displayed a contract award to the client, issued 5 weeks earlier, of which the client had been unaware.
The ECRC program was discontinued on September 30, 2001. Reaction to this decision at the local offices of the management-oriented programs was mixed—six were concerned about losing access to expertise or about not having the staff or resources to address issues handled by the ECRCs, while four did not have such concerns (most of the remaining eight offices did not express an opinion). According to DLA officials, materials for the ECRC training courses will be turned over to its PTAC program, which plans to make them available to local PTACs via downloads from a DLA Web site. Neither DLA’s Electronic Business Program Office nor its PTAC program plans to keep the course materials up to date. The four business assistance programs generally coordinated their efforts through, for example, referrals and jointly delivered training; however, such coordination occurred largely at the local level. At the headquarters level, there is no ongoing process for coordinating electronic commerce activities, although discussions on specific issues have taken place. In contrast, all but one of the local offices we contacted reported that they coordinated with at least one of the other programs. Coordination at the local level is important because each program has its own specific focus and may lack expertise found in the other programs. In one example, two ECRCs reported that they trained the local staffs of two of the management-oriented programs on selected electronic commerce issues. In other cases, ECRC staff provided electronic commerce training for the clients of these business assistance programs. Finally, in one other case, the regional rural area management-oriented business assistance offices met quarterly to determine the most appropriate program to address the clients’ needs.
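For reference, the ECRC enrollment figures cited above can be tallied quickly; the short Python sketch below is illustrative only (the course names are abbreviated and the counts are those reported by the ECRCs), and it simply confirms that the five courses listed account for roughly two-thirds of the reported non-DOD enrollment.

# Illustrative tally of the ECRC fiscal year 2000 training figures cited above.
top_courses = {
    "Hypertext Markup Language (HTML)": 2987,
    "Marketing on the Internet": 2907,
    "Internet as a business platform": 1772,
    "Getting started with electronic commerce": 1620,
    "Business opportunities with DOD through electronic data interchange": 1494,
}
non_dod_total = 15832
dod_total = 37968
top_total = sum(top_courses.values())                                  # 10,780 participants
print(f"Share of non-DOD training: {top_total / non_dod_total:.0%}")   # about 68 percent, roughly two-thirds
print(f"Total reported enrollment: {dod_total + non_dod_total:,}")     # 53,800 students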
Table 7 indicates the types of coordination activities with one or more of the other programs that the local offices of each of the business assistance programs reported. While the local offices of the four programs generally coordinated their efforts, this coordination was not universal; we found instances in which such coordination was not occurring. For example, in five cases, the local or regional official we spoke with was not familiar with one or more of the other business assistance programs. As the federal government’s electronic procurement presence grows, the participation of small businesses in this activity is critical if the government is to meet its small business procurement goals. Small businesses successfully obtained a relatively large share of federal procurement dollars in three specific on-line procurement programs, compared to the governmentwide share of federal procurements that were awarded to small businesses. At the same time, concerns about obstacles to small business participation in electronic procurements are still expressed in studies and surveys and by organizations representing and working with small businesses. These entities report that small businesses continue to face obstacles in conducting electronic procurements with the federal government, including a lack of (1) technical expertise and (2) knowledge about the government’s electronic procurement strategy. Key to the success of small businesses’ participation in government electronic procurements is that both parties—the government and the businesses themselves—continue to work on overcoming these and any future obstacles that may arise. The government has taken, or plans to take, actions that are expected to address some of these obstacles. In the larger electronic commerce arena, federally funded programs are providing assistance to businesses in a variety of ways. For four specific programs, this assistance included not only helping businesses with federal electronic procurements but also providing assistance in performing electronic commerce in the economy at large. The four business assistance programs in our review also were coordinating their activities at the local level. In oral comments on a draft of this report, officials representing GSA and the Office of Management and Budget’s Office of Federal Procurement Policy stated that they generally agreed with our report. In written comments, DLA and SBA also stated that they generally agreed with our report. DLA submitted technical corrections, which have been included in the report. In written comments, the Department of Commerce provided updated online sales statistics and stated that it believed the services provided by the Electronic Commerce Resource Centers should be continued. SBA also included information on its electronic government vision. The written comments we received from DLA, SBA, and Commerce are reprinted in appendixes III and IV. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days.
At that point, copies of this report will be sent to the Chairman, Senate Committee on Small Business and Entrepreneurship; Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs; Chairman and Ranking Minority Member, House Committee on Small Business; Chairman and Ranking Minority Member, House Committee on Government Reform; Chairman, House Subcommittee on Technology and Procurement Policy, Committee on Government Reform; and other interested congressional committees. We are also sending copies to the Secretaries of Defense and Commerce, the Administrators of the General Services Administration and the Small Business Administration, and the Director of the Office of Management and Budget and other interested parties. We will also make copies available to others upon request. If you have any questions on matters discussed in this report, please contact David McClure at (202) 512-6257 or David Cooper at (202) 512-4587 or by email at mcclured@gao.gov and cooperd@gao.gov, respectively. Other contacts and key contributors to this report are listed in appendix V. To determine the small business share of procurement dollars awarded by three on-line procurement programs (GSA Advantage!, ITSS, and DMLSS E-CAT) and the small business share of all federal contract dollars awarded, we obtained fiscal year 1999 and 2000 award data for these programs and interviewed applicable GSA, DLA, and contractor officials. We calculated the percentage of small business awards to total awards for each program and compared them to the governmentwide small business share, which we calculated based on the aggregate FPDS data reported in GSA’s Federal Procurement Report for fiscal years 1999 and 2000. We assessed the reliability of the GSA Advantage!, ITSS, and DMLSS E-CAT data by (1) performing electronic tests of relevant fields (for example, we tested for completeness by checking key fields for missing data and checked for accuracy and reasonableness by examining summary statistics for values that were in proper and expected ranges) and (2) requesting and reviewing, if available, related program and system design documentation, audit and system reviews, and reports. The results of our assessment showed that the DMLSS E-CAT data were reliable enough for use in this report. However, the results of our assessment of the GSA Advantage! and ITSS data were inconclusive in large part because of concerns related to limitations on available documentation and security weaknesses reported in GSA’s Fiscal Year 2000 Annual Report. Nevertheless, we determined that the reliability of the data provided is adequate for the comparative purposes of this report. We will be providing additional information on the GSA Advantage! and ITSS documentation limitations in a separate letter. To identify what, if any, obstacles exist for small businesses in conducting electronic procurements with the federal government, we performed a literature search. We also interviewed selected SBDCs, PTACs, ECRCs, and MEPs about their clients’ experiences with obstacles and officials from SBA’s Office of Advocacy and Office of Government Contracting. In addition, we obtained comments from organizations representing or working with small businesses to obtain their members’ views on obstacles.
The following are the organizations that provided information on small business obstacles:
- Association of Government Marketing Assistance Specialists
- Coalition for Government Procurement
- Contract Services Association of America
- National Black Chamber of Commerce
- National Small Business United
- U.S. Pan Asian American Chamber of Commerce
- Small Business Legislative Council
- U.S. Hispanic Chamber of Commerce
We contacted 13 other organizations, such as the U.S. Chamber of Commerce and the National Women’s Business Council, but they did not provide us with any information on obstacles small businesses had in performing electronic procurements with the federal government. In addition, to review what steps four federal business assistance programs have taken to educate businesses on electronic commerce and the extent to which they have coordinated their efforts, we interviewed headquarters staff of the programs and reviewed applicable program documents, such as grant and cooperative agreements and contracts. We also interviewed officials from 24 local and regional offices of these programs and obtained and reviewed available documentation from these offices. We judgmentally selected six offices from each program based on the following: For each program, we chose at least one office from each of the four U.S. census regions. Overall, we chose at least two local offices from each census division. The census divides the United States into four regions and nine divisions—Northeast region (New England and Middle Atlantic divisions), Midwest region (West North Central and East North Central divisions), South region (West South Central, East South Central, and South Atlantic divisions), and the West region (Pacific and Mountain divisions). For each program except ECRCs, we chose at least two offices serving less populous areas, based on the Office of Management and Budget’s classification of a metropolitan area. Based on the above criteria, we interviewed officials from the following offices:
- ECRCs: Bremerton, WA; Cleveland, OH; Dallas, TX; Fairfax, VA; and Scranton, PA.
- MEPs: Arkansas Manufacturing Extension Network; California Manufacturing Technology Center; Idaho Techhelp; Iowa MEP; Maine MEP; and Maryland Technology Center.
- PTACs: Alabama Small Business Development Consortium; California Central Valley Contract Procurement Center; Minnesota Project Innovation; National Center for American Indian Enterprise Development; New Hampshire Office of Business & Industrial Development; and George Mason University Procurement Technical Assistance Program.
- SBDCs: Bronx SBDC of Lehman College; Danville Area SBDC (Illinois); Joplin SBDC (Missouri); Northern Virginia SBDC; Western Kentucky University SBDC; and Wyoming SBDC, Region 2.
We performed our work at SBA headquarters in Washington, DC; GSA offices in Crystal City, VA, and Washington, DC; DLA headquarters at Fort Belvoir, VA; the Defense Supply Center, Philadelphia; NIST in Gaithersburg, MD; and the offices of business assistance providers and business organizations in Maryland, Virginia, and Washington, DC. We conducted our review between January and August 2001 in accordance with generally accepted government auditing standards.
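To make the electronic data tests and share calculation described above concrete, the following is a minimal sketch in Python using the pandas library. The records, column names, and dollar threshold are hypothetical rather than the actual fields or values of the GSA Advantage!, ITSS, or DMLSS E-CAT extracts; the sketch simply illustrates the kinds of completeness, reasonableness, and share computations described.

import pandas as pd

# Hypothetical award records; the columns and values are illustrative only.
awards = pd.DataFrame({
    "award_id": ["A001", "A002", "A003", "A004"],
    "dollars": [12500.0, 3200.0, 480000.0, 9900.0],
    "small_business": ["Y", "Y", "N", "Y"],
})

# Completeness test: key fields should have no missing data.
print(awards[["award_id", "dollars", "small_business"]].isna().sum())

# Accuracy and reasonableness test: summary statistics and a range check on dollar values.
print(awards["dollars"].describe())
print("Awards outside the expected range:",
      ((awards["dollars"] <= 0) | (awards["dollars"] > 50_000_000)).sum())

# Small business share of total dollars, compared with the 23-percent governmentwide goal.
share = awards.loc[awards["small_business"] == "Y", "dollars"].sum() / awards["dollars"].sum()
print(f"Small business dollar share: {share:.1%} (governmentwide goal: 23 percent)")

Checks of this kind scale unchanged from a handful of illustrative records to a full fiscal year extract, which is what makes them practical as a first-pass reliability screen.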
Obstacles reported in various studies and surveys, as well as in comments provided by officials in selected federal business assistance programs and organizations representing or working with small businesses, generally fall into two categories: (1) those related to general readiness—the willingness and ability of small businesses to conduct business electronically and (2) those specific to conducting procurements electronically with the federal government. Commonly cited obstacles for small businesses in this category include the following. Need to Make a Business Case. Our literature search and discussions with industry groups and business assistance program officials indicated that some small businesses may have difficulty in making a business case for adopting electronic commerce because of their inability to ascertain costs, benefits, and risks. They may have little working knowledge of the Internet and other electronic commerce technologies and insufficient information about the benefits and applicable implementation strategies appropriate for their business models. As a result, such businesses may be reluctant to make the investment to implement electronic commerce. For example, an August 2000 survey of 50 Idaho manufacturers’ use of Internet technologies showed that of the 23 respondents with Web sites, 74 percent were not engaged in electronic commerce. The primary reasons companies with Web sites cited for not moving to electronic commerce were a lack of knowledge and a concern that implementation was too time-consuming and costly. One researcher concluded that for small businesses, adopting electronic commerce requires low, predictable cost; minimal changes in employee behavior; and compelling benefits over alternatives. Limited Technical Expertise. A June 2000 Organization for Economic Co-operation and Development report on enhancing the competitiveness of small and medium-sized enterprises noted that many small businesses do not know how to profitably develop their electronic commerce capabilities or how to cope with the “complex rules” governing this area. This report and other studies point out that the lack of appropriate human resources, in terms of technical and/or managerial staff familiar with the information technology environment, constitutes a major barrier for small businesses wanting to adopt electronic commerce technologies and strategies. Business assistance program officials also noted that their small business clients lack the skill sets necessary to participate in electronic commerce. They stated that small businesses need help with building Web sites, selecting Web site designers and Internet service providers, and integrating electronic commerce into their business processes. However, small businesses may not have such experience and expertise on staff and may not be able to afford to recruit and retain technical staff with these skills. Internet Access Issues. PTAC, MEP, ECRC, and SBDC business assistance program officials reported that small businesses, particularly in rural areas and on Indian reservations, have difficulty obtaining affordable high-speed Internet access sufficient for electronic commerce activities. For example, a PTAC official in a rural state said that many individuals and companies in his state have only dial-up modem service. Moreover, according to an official working on programs to assist American Indian enterprise development, reservations often lack Internet infrastructure.
She estimated that only 40 percent of her clients on reservations have e-mail service. The continuing expansion of electronic commerce requires widespread high-speed Internet access. However, as we noted in February 2001, there is less availability of broadband (high-speed, high-capacity connections to the Internet) in the rural areas of America. Similar to other studies, our survey found the availability of broadband technology to be most prevalent in large metropolitan areas. Concerns About Security and/or Privacy. Ensuring the security of payments and proprietary information and the privacy of personal data is a top priority for small businesses considering electronic commerce as a means to sell their products and services. According to the U.S. presentation before the Free Trade Area of the Americas electronic commerce committee, because of their small size and limited financial resources, small businesses may not be prepared to take on the kinds of security and privacy risks that larger companies can more easily face. Security and privacy concerns of small businesses include inappropriate disclosure of proprietary business information that governments collect from companies, consumer fraud, and the adequacy of security over a transaction on the Internet. For example, some small businesses fear bidding on-line because they do not believe that it is secure. They want assurances that their pricing and other proprietary information would be accessed only by intended recipients and not by competitors. These concerns are not unjustified. For example, we have designated information security a governmentwide high-risk area since 1997. Our latest high-risk report noted that progress in strengthening federal information security has been mixed. Commonly cited obstacles in this category include the following. Monitoring Various Federal Procurement Information Web Sites for Business Opportunities. The federal government has multiple Web sites that list contracting opportunities and related procurement information that businesses need for deciding whether to pursue a business opportunity. For example, an August 2001 search for federal “contracting opportunities” on www.firstgov.gov—the federal government’s portal for accessing government on-line information—provided links to over 1,000 Web sites listing procurement opportunities and related information. Among the first 10 “hits” were links to sites with information on contracting opportunities for the Departments of Housing and Urban Development, State, and Transportation, the Army Corps of Engineers, and GSA. Organizations representing or working with small businesses point out that small companies with limited resources and staff cannot afford to spend several hours a day “surfing the Net” for potential work. To help address this issue, a May 2001 Federal Acquisition Regulation change designates the FedBizOpps Web site as the single governmentwide point of electronic entry on the Internet where vendors can access all the information they need to bid on available government business opportunities greater than $25,000. After subscribing, vendors can receive various announcements automatically via email, including solicitations and post-award notices. Agencies must provide access to all applicable actions by October 1, 2001. Because the requirement to use FedBizOpps is new, its impact on simplifying access to the government’s procurements is not yet known.
Moreover, information about contracting opportunities expected to be $25,000 or less does not have to be posted on FedBizOpps. As noted earlier, small businesses generally obtain a significantly higher share of these contract opportunities. Differing Requirements for On-line Purchasing Programs. The federal government has multiple on-line purchasing programs that federal buyers can access to search vendor catalogs and purchase goods and services from suppliers with government contracts. According to three business assistance program officials, the process for posting listings on these sites is inconsistent and time-consuming because vendors may have to upload their electronic catalogs to multiple sites, involving different formats and procedures. For example, the GSA Advantage! and DMLSS E-CAT programs have different requirements for formatting catalog data. An industry group representing companies that conduct business with the federal government told us that small businesses often must hire third-party service providers because they lack the ability to manage multiple electronic catalog formats, revisions, and uploads. Moreover, according to one research report, some commodity suppliers may perceive an on-line catalog to be impractical, due to the sheer number of their products and the complexity of their pricing. As of mid-August, GSA Advantage!, DMLSS E-CAT, and others were in the initial stages of considering a single catalog process for medical materiel. Lack of a Single Vendor Registration System. Vendors who want to conduct business with more than one government office generally must complete multiple registrations and profiles, providing redundant business information to each site in different formats. Officials from several business assistance programs and organizations representing small businesses spoke of the need for the government to set up a single point of vendor registration. Many reiterated the point made in a 1994 government report on electronic commerce that it is much easier for a business to maintain its single repository of registration information than to submit the same information or some variation of it many times to numerous contracting activities. Moreover, the Federal Acquisition Streamlining Act of 1994 required the establishment of a “single face to industry” for conducting procurements. To help address concerns about multiple vendor registrations, DOD developed a centralized, electronic registration process—the Central Contractor Registration (CCR) system—as the single registration point for vendors that want to conduct business with DOD. As part of its efforts to expand electronic government, the Administration has tasked agencies in fiscal year 2003 to use the CCR as the single validated source of data on vendors interested in contracting with the government. According to an OMB official, the governmentwide single point of vendor registration should help to standardize the registration process, eliminate redundancies, and provide a common method of gathering and reporting vendor information. Even if a single governmentwide registration system is implemented, small businesses may still wish to register on SBA’s Procurement Marketing and Access Network (PRO-Net), an Internet-based database of information on thousands of small businesses that federal buyers can use to search for small businesses fitting specific profiles.
According to a DLA official, SBA’s PRO-Net was provided access to CCR small business vendor information data on August 24, 2001. SBA officials told us that they did not yet know how they were going to use the CCR data but that vendors cannot be automatically registered in PRO-Net without their consent. Accordingly, small businesses wanting to register in both CCR and PRO-Net will have to reenter some of the same information in both systems. Problems Related to Technical Data and Drawings. Posting technical data and drawings (required by businesses preparing bids) on the Web or otherwise making them available electronically is beneficial because vendors do not have to visit contracting offices to obtain copies or have technical data packages mailed to them. However, business assistance program officials and industry groups voiced concerns about the difficulties, frustration, and time involved in locating, transmitting, downloading, and printing on-line specifications and drawings. Some of the problems reported included incomplete and inadequate technical data packages for manufactured items, on-line manuals that are difficult to decipher and use, out-of-date drawings, or the lack of availability of CD-ROMs containing drawings that are too large to download. A representative from one trade organization noted that there can be technical problems with downloading specifications in that often a fast Internet connection and powerful computer system are needed, and the software versions required by different agencies may differ or conflict with one another. ECRC and PTAC officials said that many agencies fail to recognize that small businesses have limited electronic resources and need more simplification and software standardization for on-line solicitation materials to be readily accessible. In a mid-August meeting, DLA officials agreed that the quality of electronic technical data and drawings and the delivery of this information were problems. Difficulty in Obtaining Help With Problems and Marketing Assistance. Another obstacle for many small businesses attempting to participate in on-line government purchasing programs is not knowing where to go for help or not having knowledgeable contacts. According to officials of several business assistance programs and trade association representatives, small businesses often have difficulty reaching someone at the buyer’s or program office who is able and willing to help, particularly with technology-related problems and/or marketing questions. For example, one trade organization representative said that small businesses trying to market in an on-line environment have problems reaching federal procurement officials to discuss their products and services. When they call to arrange meetings with buyers, they may be referred instead to Web sites, which can be complex and confusing and may not contain the information they really need. In other cases, phone calls and e-mails were not returned when there was a problem. In particular, two industry groups and five business assistance program officials mentioned difficulties in obtaining assistance to deal with problems associated with GSA Advantage!. For example, one ECRC official said that the GSA Advantage! Web site explanations are insufficient to address vendor questions and GSA technical support staff are also unable to answer questions from vendors about getting their products listed. In mid-August, GSA officials stated that improvements in GSA Advantage!
vendor support and assistance were made in the spring and summer of 2001, such as increasing help-desk staffing, employing classroom training, and implementing a lab in which vendors are helped in loading their data onto the system. In earlier testimony on electronic government initiatives, we pointed out that the government’s use of Internet and Web-based technologies should force organizations to reconsider their customers—specifically, how their customers need, perceive, and digest information and services in a viewable, electronic format. Moreover, the National Electronic Commerce Coordinating Council suggests that organizations implement a customer relations management structure. Uncertainty About the Government’s Electronic Procurement Strategy. Industry groups and business assistance program officials told us that since government agencies are pursuing different approaches to implementing electronic purchasing, small businesses hesitate to invest in any one electronic commerce system. According to one PTAC program official, when businesses look closely at their government customers’ electronic commerce capabilities, they find a “very mixed bag.” In addition, officials in four of the six ECRC offices we contacted noted that the government has pursued many different electronic commerce solutions and has not adopted a uniform “single face” approach to the vendor community. ECRC officials cited the government’s Federal Acquisition Computer Network—better known as FACNET—and electronic data interchange initiatives as examples of electronic commerce initiatives that were not fully implemented or were changed before investment returns were realized. For example, in our 1997 report on FACNET implementation, we discussed the limited use of FACNET by government agencies and the need for a coherent strategy and implementation approach for carrying out the agencies’ acquisition requirements using various electronic commerce technologies and purchasing methods. Barbara Johnson, Rosa Johnson, Beverly Ross, Patricia Slocum, and Glenn Spiegel made key contributions to this report.
The federal government has been pursuing electronic initiatives to strengthen its buying processes, reduce costs, and create a competitive "virtual" marketplace. Small businesses, however, may have difficulty participating in federal on-line procurement programs. Furthermore, the government's business outreach and education programs related to electronic commerce may not be adequately coordinated. For the three federal on-line procurement programs GAO reviewed, the dollar share of awards to small businesses exceeded the overall small business share of total federal contract dollars awarded in fiscal years 2000 and 1999. Although small businesses successfully participated in these three programs, they still face obstacles in conducting electronic procurements with the government. The federal government is taking steps to address some of these obstacles, such as implementing a single point of entry on the Internet for vendors to access information on available government business opportunities greater than $25,000. Each of the four business assistance programs GAO examined had taken steps to educate its clients on electronic commerce as part of its operations. However, GAO could not fully determine the extent of these activities because they are conducted by hundreds of local and regional offices, and only one of the programs collected performance metrics specific to electronic commerce.
The Army has completed its drawdown of active forces in accordance with the Bottom-Up Review (BUR) force structure and defense guidance calling for a force of 495,000. To ensure that the Army will be able to maintain the minimum strength necessary to successfully respond to two nearly simultaneous major regional conflicts (MRC), Congress established a permanent legislative end strength floor of 495,000 in its fiscal year 1996 National Defense Authorization Act. However, the Department of Defense’s (DOD) fiscal year 1997 Future Years Defense Program (FYDP) reduced active Army end strength 20,000 below the congressionally mandated floor by 1999. A key impetus behind this plan is the concern within the Office of the Secretary of Defense (OSD) that funding the existing active Army force level of 495,000 will prevent the Army from buying the new equipment it needs to modernize the active force for the 21st century. The BUR strategy called for a force of 10 active Army combat divisions and 2 active armored cavalry regiments to fight and win 2 nearly simultaneous MRCs. This force was far smaller than the Cold War Army, which comprised 18 active divisions and 770,000 personnel in fiscal year 1989, as well as the Base Force, which, in fiscal year 1994, consisted of 12 active combat divisions and 540,000 active personnel. Following the BUR, the Army reorganized its active combat division structure. Two division headquarters were eliminated, thus reducing the number of active divisions from 12 to 10 as specified in the BUR. Another significant change was that the Army discontinued its reliance on reserve component “round-up” or “round-out” units to bring the active divisions to full combat strength for wartime deployment. Instead, the Army determined that each of the remaining 10 combat divisions would comprise 3 fully active ground maneuver brigades. This decision was endorsed by the Secretary of Defense during development of the BUR out of concern that relying on reserve brigades could slow down a U.S. response to aggression. Therefore, as a result of the BUR, only two active maneuver brigades were eliminated from Army force structure—12 combat divisions with a combined total of 32 active brigades were reduced to 10 divisions with 30 active brigades. Also, the Army decided that all 10 remaining divisions would be authorized 100 percent of their wartime military personnel requirement. Overall, the reduction in forces, when combined with the force reductions resulting from the withdrawal of 20,000 military personnel from Europe between fiscal years 1994 and 1995, brought the force level down to within 10,000 of the fiscal year 1996 end strength goal of 495,000. The remaining personnel reductions came from the institutional portions of the active Army. No cuts were made in “non-divisional” level support forces that would deploy with combat divisions, since the Army had previously found that support shortages already existed in these forces. A comparison of fiscal years 1994 and 1996 active Army force structure is shown in table 1.1. The active Army force of 495,000 is composed of both deployable and nondeployable forces. The deployable force (63 percent) includes the combat divisions, separate brigades, armored cavalry regiments, and special forces groups, as well as the Corps level combat support and combat service support forces that would accompany them to the war fight.
Taken together, these deployable operational forces are organized according to Army Tables of Organization and Equipment (TOE) and are commonly referred to as TOE forces. Combat forces are referred to as “above-the-line” TOE, and combat support/combat service support forces are referred to as “below-the-line” TOE. Combat support includes such specialties as engineering, military intelligence, chemical, and military police, while combat service support includes specialties such as transportation, medical, finance, quartermaster, and ordnance. The generally nondeployable portion of the Army (historically about 25 percent) is often referred to as the “institutional” force that supports the Army infrastructure by performing such functions as training, doctrine development, base operations, supply, and maintenance. These forces are organized according to Army Tables of Distribution and Allowances (TDA) and are simply referred to as TDA forces. Another 12 percent of the active Army force is in a temporary status at any given time and is referred to as “trainees, transients, holdees and students” or TTHS. These forces are also considered to be nondeployable. Historically, the percentages of the active force devoted to TOE, TDA, and TTHS have remained relatively constant. (See fig. 1.1.) The Army uses different resourcing processes for each portion of the active Army (see table 1.2). Defense guidance specifies the number of active divisions the Army must have in its structure. The elements of these divisions are sized according to Army doctrine. The Army’s 10 divisions range in size from 10,000 to 15,000 active personnel, depending on mission (e.g., light and heavy) and type of equipment. The Army uses a biennial process known as the Total Army Analysis (TAA) to determine the number of support units needed to support these combat forces, and how available personnel authorizations will be allocated to these requirements. TDA resources are allocated in a separate resource management process, primarily driven by the Army major commands but subject to some Department of the Army headquarters oversight. TTHS is essentially an allocation rather than a managed resource, although Army policy decisions can influence its size. TAA determines the number and types of support units needed to support war-fighting missions, regardless of whether active or reserve positions would be used to meet these requirements. The process then allocates forces from the active Army, the Army National Guard, and the Army Reserve to fill those requirements. The results of TAA 2003 were reported in January 1996 and fed into the 1998-2003 Army Program Objective Memorandum. A detailed discussion of the TAA process, assumptions, and results can be found in chapter 2. Chapter 3 discusses the TDA requirements process. Although Congress established a permanent active Army end strength floor of 495,000 in the National Defense Authorization Act for Fiscal Year 1996, DOD’s fiscal year 1997 FYDP reduced active Army end strength below this level beginning in fiscal year 1998. Congress established a permanent end strength floor to ensure that each service, including the Army, had the minimum force necessary to fulfill the national military strategy. However, DOD may reduce forces below the floor if it notifies Congress and may also increase authorized end strength as much as 1 percent in any given fiscal year. 
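As a rough illustration only (the report’s tables provide the actual figures), applying the historical shares cited above to the 495,000 active end strength gives approximate headcounts for each category; the short Python sketch below restates that arithmetic.

# Approximate breakdown of the 495,000 active Army end strength using the
# historical shares cited above (63 percent TOE, about 25 percent TDA, 12 percent TTHS).
end_strength = 495_000
shares = {
    "TOE (deployable operational forces)": 0.63,
    "TDA (institutional forces)": 0.25,
    "TTHS (trainees, transients, holdees, and students)": 0.12,
}
for category, share in shares.items():
    print(f"{category}: about {round(end_strength * share):,} personnel")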
According to the 1997 FYDP, DOD intends to keep Army military personnel appropriation dollars relatively flat from fiscal years 1995 to 2001. Because these appropriations will not sustain a force level of 495,000, DOD planned to reduce the Army’s end strength by 10,000 in fiscal year 1998 and an additional 10,000 in fiscal year 1999. DOD’s 1997 FYDP increases the percentage of the Army budget devoted to procurement from 10 percent in 1995 to 16 percent by 2001. This increase is consistent with DOD’s view that modernization is key to long-term readiness. In his March 1996 testimony, the Secretary of Defense said that in recent years, DOD had taken advantage of the drawdown and slowed modernization in order to fully fund those expenditures that guarantee near-term readiness, such as spare parts, training, and maintenance. As a result, modernization funding in fiscal year 1997 was said to be the lowest it had been in many years, about one-third of what it was in fiscal year 1985. To reverse this trend, DOD plans to increase funding to procure new equipment, including funding for the “everyday equipment” that ground forces need in the field, such as tactical communications gear, trucks, and armored personnel carriers. Likewise, the Chairman, Joint Chiefs of Staff, has expressed concern about the future readiness of Army forces given reduced levels of modernization funding. As required by the National Defense Authorization Act for Fiscal Year 1996, we reviewed (1) the extent to which TAA 2003 resulted in sufficient combat support/combat service support force structure to meet the support requirements of the two-MRC scenario and also operations other than war (OOTW), (2) whether the Army’s streamlining initiatives have identified opportunities to further reduce Army personnel resources devoted to institutional Army (TDA) functions, and (3) the feasibility of further reducing active Army end strength. In conducting our assessment, we did not examine DOD’s rationale for requiring 10 active combat divisions or the Army’s rationale for using three full active brigades per division instead of round-out or round-up reserve brigades. We also did not fully assess ongoing studies concerning the future use of reserve forces or analyze potential changes to the current national military strategy. Since much of the Army’s analysis in TAA 2003 is based on the combat forces assigned to it by the BUR and the then-current defense planning strategy, any changes in this guidance would likely alter Army support force requirements. To determine the extent to which TAA 2003 resulted in sufficient combat support/combat service support force structure to support the two-MRC scenario and OOTWs, we reviewed the Army’s documentation on TAA processes, assumptions, and results. We interviewed Army officials at Department of the Army Headquarters, Washington, D.C.; Concepts Analysis Agency, Bethesda, Maryland; U.S. Forces Command, Fort McPherson, Georgia; and U.S. Army Training and Doctrine Command (TRADOC), Fort Monroe, Virginia, and Fort Leavenworth, Kansas. Our review of TAA 2003 included analyses of the risks associated with the number and type of active and reserve support forces allocated to support war-fighting requirements; how the Army’s assumptions compared with those used in defense guidance, previous TAAs, and other DOD, Army, or external defense studies; and how the major assumptions used in TAA can affect force structure outcomes (including measures of risk).
We also examined TAA processes to determine if the Army (1) obtained adequate participation by stakeholders in the process, including major commands and commanders in chief (CINC), and (2) scrutinized data inputs used in its war-fight models to determine if they were free from error. In addition, we discussed TAA 2003 results and methodology with OSD officials. Further, to better understand how the requirements of the joint war-fighting commands are considered in the TAA process and how CINCs are affected by TAA results, we requested information and received formal responses from the CINCs of the U.S. Atlantic Command, the U.S. Central Command, the U.S. European Command, and the U.S. Pacific Command. To assess Army streamlining initiatives and their potential for reducing military personnel devoted to institutional Army functions, we obtained documentation and held discussions with officials from the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs; the Army’s Office of Program Analysis and Evaluation; the Army Budget Office; Department of the Army Headquarters; the U.S. Army Force Management Support Agency, Washington, D.C.; U.S. Army Forces Command, Fort McPherson, Georgia; U.S. TRADOC, Fort Monroe, Virginia, and Fort Leavenworth, Kansas; the U.S. Army Medical Command, Fort Sam Houston, Texas; and the U.S. Army Materiel Command’s Management Engineering Activity, Huntsville, Alabama. We reviewed major commands’ TDA requirements processes and discussed proposals for increased use of workload-based management to assess the TDA requirements determination process. To assess TDA streamlining, we identified and reviewed Army streamlining studies, including Force XXI, major command reengineering, and Army headquarters policy initiatives that resulted in reductions in military and civilian resources, as well as budgetary savings. We also assessed limitations to further streamlining of the TDA force due to legal, cultural, and operational requirements. We did not review the justification for TDA positions that are required by law or controlled by other agencies. To assess the implications of DOD’s planned reduction in active Army end strength, we examined the objectives and implementing guidance for the Army’s Force XXI campaign, which DOD cited as justification for the reduction, and the personnel reductions realized or anticipated as a result of these initiatives. We also considered OSD’s internal assessment of the Army’s TAA 2003 process and the potential for changes in defense strategy resulting from the Quadrennial Defense Review. Lastly, we considered the current status of TDA streamlining and the results of TAA 2003. DOD provided written comments on a draft of this report. These comments are discussed and evaluated in chapters 2 and 3 and are reprinted in appendix V. Additional comments from the Army are discussed and evaluated in chapter 4. We conducted our review from September 1995 to October 1996 in accordance with generally accepted government auditing standards. The Army believes that it can provide support forces for two MRCs at a moderate level of risk. However, in assessing risk, the Army found that 42 percent of all support forces required in the first 30 days of the first MRC would be late arriving in theater because they cannot mobilize and deploy in time. The Army also found that it would have very few active support forces available to send to the second MRC—only 12 percent of the total support forces needed.
In addition, the Army did not authorize 19,200 positions that are needed to bring some existing units up to their full required strength. Finally, units totaling 58,400 positions were not authorized any personnel because the Army’s total wartime support requirement exceeds available personnel authorizations. The Army’s risk assessment depends largely on the assumptions and model inputs that were adopted for TAA 2003. Some of these assumptions were favorable in that they minimized risks to U.S. forces. For example, to be consistent with defense guidance, TAA assumed that U.S. forces had immediate access to ports and airfields in the theater of operations, faced limited chemical attacks, and were immediately available for redeployment if previously committed to OOTWs. Less optimistic assumptions would have led to higher support requirements. On the other hand, the Army did not consider all available resources to satisfy its unmet support force requirements, such as some support force capabilities that currently reside in the Army’s eight National Guard divisions and the TDA force, and support available from outside contractors and defense civilians. Also, while TAA is an analytically rigorous process, some aspects of its methodology could be improved. For example, TAA lacks mechanisms for adjusting to change during its 2-year cycle; some model inputs, such as consumption of fuel and water, were not sufficiently scrutinized; and sensitivity analyses were generally not used to measure the impact of alternative assumptions and resourcing decisions on risk. Changes to any of the key assumptions or other model inputs could produce significantly different force structure requirements than those determined in TAA 2003, and potentially different risk levels. Based on defense guidance, other Army guidance and inputs, wargaming assumptions, unit allocation rules, and logistical data, TAA determines the number and type of support units the Army needs to execute the national military strategy. TAA then allocates Army personnel authorizations, both active and reserve, among these support force requirements to minimize war-fighting risk. TAA is an advance planning tool that tries to anticipate potential war-fighting scenarios and personnel availability approximately 9 years in the future. TAA consists of a series of campaign simulation models and force structure conferences attended by representatives from key Army staff offices and commands, as well as the unified commands. A strategic mobility analysis is performed to determine the arrival times of Army forces in theater and identify shortfalls. This is followed by a theater campaign analysis to gauge force movement and unit strength over time, as well as personnel and equipment losses. Outputs from these models, along with approved unit allocation rules and logistics data, are input into the final Army model, Force Analysis Simulation of Theater Administration and Logistics Support. This model generates the required support forces by type and quantity, and specifies when they are needed in theater and what their supply requirements would be. The support forces identified by the model are then matched to actual Army support units. At this point, priorities are established among the competing requirements, and approaches are discussed to mitigate the risks of unmet requirements. One approach has been to authorize fewer personnel to some units than are required to meet their full wartime requirement. 
Additionally, the active/reserve force mix is examined on a branch-by-branch basis to assess whether sufficient active forces are available to meet early deployment requirements. The approved force structure is forwarded to the Army’s Chief of Staff for final approval as the base force for programming Army resources for the next Program Objective Memorandum. A more detailed description of the Army’s TAA process is provided in appendix I.

The Army concluded that its authorized support forces, resulting from TAA 2003, were consistent with the moderate risk force delineated in the October 1993 BUR. This force, among other things, must be able to fight and win two MRCs that occur nearly simultaneously. To assess the risk level associated with its support forces, the Army employed four measures: late risk, second MRC risk, unmet requirements risk, and casualty risk. Each of the risks was quantified; however, their collective impact on the war fight was not modeled by the Army. Rather, the Army’s overall assessment of moderate risk is based on military judgment.

TAA stipulates that support units needed in the first 30 days of the first MRC should be drawn from the active force because of the time needed to mobilize, train, and deploy reserve units. This is consistent with defense guidance. However, TAA 2003 found that about 79,000 of the more than 188,000 support force positions required in the first 30 days of the first MRC do not arrive on time because the Army lacks sufficient numbers of active support forces to meet these requirements and must rely on reserve forces instead. This represents 30 percent of the 260,000 total authorized Army force needed during this time period, and 42 percent of the Army support forces required. Branches with the most late arrivals include engineering, transportation, quartermaster, and medical—branches with high concentrations of reserve personnel. This risk is exacerbated when the Army relies on reserve forces during the first 7 days of the war fight. Almost one-quarter of the reserve support forces assigned to meet requirements during the first 30 days (19,200 positions) are needed in the first 7 days of the MRC.

The 30-day time frame to mobilize and deploy reserve support forces is substantiated in classified studies by the RAND Corporation that examined the availability of reserve forces and by Army officials responsible for reserve mobilization activities. The Army estimates that mobilizing reserve forces, from unit recall to arrival at the port of embarkation, takes about 15 days for a small support unit and 31 days for a large unit. Personnel may be transported by air, but their equipment likely will be shipped by sea. Depending on whether the equipment sails from the east or west coast and to which theater, it will take an additional 12 to 30 days to arrive, unload, and assemble the equipment. Therefore, a small reserve unit will be available for the war fight no earlier than 27 days after call-up, and a large reserve unit will require at least 43 days. (See app. II for a listing of mobilization tasks and the time required to complete them.)

OSD officials believe that even if more active support forces could be made available during the first 30 days to reduce late risk, strategic lift constraints would limit the number that could be moved to theater.
Army officials noted that to the extent that any active support personnel are available to replace late reservists and could be moved, the Army’s risk of late arrivals would be lower. The availability of active support forces for the second MRC was another risk measure used in TAA 2003. Specifically, as the availability of active forces declined—and with it a corresponding increased reliance on reserve forces—risk was assumed to increase. The second MRC will have access to relatively small numbers of active support forces, most of them having deployed already in support of the first MRC. Consequently, the Army must rely on reserve component forces to meet most of its requirements in the second MRC. Only 12 percent of the support forces needed in the second MRC are active, compared with 47 percent in the first MRC. Branches with low representation of active forces in the second MRC include engineer, transportation, quartermaster, and artillery. High reliance on reserves for use in the second MRC may not entail greater risk assuming there is adequate warning time and mobilization has already occurred. The same risk of late arrival would apply if mobilization was delayed. An objective of TAA is to allocate resources among competing support force requirements. In the case of TAA 2003, the Army’s force structure requirements initially exceeded its authorized positions by 144,000 positions. At the conclusion of TAA, units totaling 58,400 positions were not allocated any positions and exist only on paper, and other existing active units were allocated 19,200 fewer positions than needed to meet mission requirements. Table 2.1 illustrates the Army’s approach to allocating its resources in TAA 2003. Drawing from its active, National Guard, and Reserve forces, the Army identified 528,000 authorized TOE positions that it could apply to its 672,000 Army requirement to fight two MRCs, leaving an initial imbalance of 144,000 positions. The Army’s total TOE force is actually higher than 528,000 positions (see table 2.1), but some resources are excluded from consideration in TAA, such as the eight National Guard divisions the Army considers as a strategic hedge, and forces needed to perform unique mission requirements. The Army then analyzed all of its support forces at Corps level and above to determine how it could reduce the risk associated with its shortfall. This resulted in the Army shifting about 66,000 active and reserve positions from support units excess to the war fight to higher priority support units. Units providing fire fighting, engineering, and medical support were among those selected for conversion. After these conversions, the Army was left with a shortfall of about 78,000 positions. This shortfall was allocated as follows. Some existing active support units were authorized fewer positions than are needed to meet their full wartime requirement. In TAA 2003, these amounted to about 19,200 positions. The expectation is that these understrength units would be brought up to full strength before being mobilized. These additional personnel would come from the Individual Ready Reserve or new recruits. The remaining shortfall of 58,400 positions represents units that are needed to meet a wartime requirement but have not been allocated any position authorizations, that is, units that exist only on paper. Table 2.2 shows how each of the Army’s major support branches will be affected by the conversions and where the remaining 58,400 positions in vacant units reside. 
Among the branches benefiting most were quartermaster and transportation, which accounted for more than half of the initial shortfall in totally vacant units. Two additional actions were taken by the Army to mitigate the risk associated with its remaining unmet requirements. The Army estimates that host nations will be able to provide the equivalent of over 14,000 positions to offset some requirements, leaving a shortfall of about 44,000 positions in vacant units. The Army also plans to implement an option developed by the Army National Guard Division Redesign Study to convert 42,700 Army National Guard combat division positions to required support positions—eliminating most of the remaining vacant units. However, according to the study, these conversions will cost up to an additional $2.8 billion and could take many years to complete.

The Army computes the number of casualties expected for each MRC as another measure of risk. Casualties are computed through a model that uses the Army’s full two-conflict requirement of 672,000, rather than the 528,000 authorized Army positions to meet that requirement. The number of casualties is a function of the population at risk, which is reflected in defense guidance; the wounded-in-action rate, which is calculated in the TAA modeling; and the disease and nonbattle injury rate, which is established by the Army Surgeon General. Campaign simulations generate the combatant battle casualties, which account for about 80 percent of all casualties. The remaining 20 percent are extended to support forces with algorithms. Variables that are considered in arriving at casualty estimates include the battlefield location (e.g., brigade area, division rear, and communications zone); intensity of the war fight (e.g., defend, attack, and delay); and the weapon systems involved. The Army uses a high-resolution model that pits individual weapon systems against one another to project equipment and personnel killed or injured for a multitude of platforms (e.g., 12 different types of tanks, light armored vehicles, and helicopters), according to their lethality under various conditions (e.g., moving, stationary, and exposed).

Once the Army computes its casualties for each MRC, it does not increase its force requirements to provide casualty replacements. Otherwise, its personnel requirements would be much higher and shortfalls would be greater. The Army reasons that given the anticipated short duration of the MRCs, there will be little opportunity for significant replacements of individuals killed or otherwise unavailable for duty. However, if a need arose, individual replacements likely would be drawn from soldiers who had just completed their introductory training or by mobilizing the Individual Ready Reserve.

Some of the assumptions and model inputs adopted for TAA 2003 lead to understated support force requirements. Without rerunning the theater campaign models with different assumptions and model inputs, the Army cannot determine the impact of changes in most of these assumptions, such as delaying the call-up of reserve forces, on force requirements. However, some assumptions lend themselves to estimable force level equivalents, such as coalition support requirements. To the extent that less favorable assumptions would increase the Army’s support requirements, the risks associated with the current force may be higher than suggested by TAA 2003 results.
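The shortfall accounting summarized above can be traced with simple arithmetic. The sketch below is illustrative only; it uses the rounded figures cited above for TAA 2003, so the small residuals reflect rounding in the source rather than additional unresourced positions.

```python
# Illustrative arithmetic only; figures are the rounded TAA 2003 numbers cited in this report.
requirement = 672_000      # total two-MRC support requirement (positions)
authorized_toe = 528_000   # authorized TOE positions considered in TAA 2003

shortfall = requirement - authorized_toe   # 144,000 initial imbalance
shortfall -= 66_000                        # conversions of lower priority support units
print(shortfall)                           # about 78,000 remaining

understrength = 19_200                     # existing units authorized below full wartime strength
vacant_units = 58_400                      # units allocated no personnel ("paper" units)
assert abs(shortfall - (understrength + vacant_units)) < 1_000   # matches, within rounding

vacant_units -= 14_000                     # host nation support offsets -> about 44,000
vacant_units -= 42_700                     # planned Guard combat-to-support conversions
print(vacant_units)                        # about 1,700 -- "most of the remaining vacant units"
```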
During TAA, the Army used many key assumptions in modeling the two MRCs that were identical or similar to assumptions cited in the defense guidance then in effect. Some of these assumptions were favorable, that is, they tended to minimize risk to U.S. forces and objectives. These included the following:

Immediate access to ports and airfields. TAA assumed that U.S. forces would have immediate, unobstructed access to ports and airfields in the theater of operation. An adverse case excursion was modeled in which immediate access to primary ports and airfields was denied in a one-MRC scenario. This excursion showed a requirement for additional positions beyond those needed for two nearly simultaneous MRCs when immediate access was assumed to be available. Over 90 percent of this additional requirement was for transportation and quartermaster positions—positions already in short supply. However, in stating its requirements for TOE forces, the Army used the base case requirement of 672,000 positions.

Timely decisions by the National Command Authorities. TAA assumed that the call-up of reserve forces coincided with the day U.S. forces deploy to the first MRC and that the activation of the Civil Reserve Air Fleet, civilian aircraft that augment the military in wartime, occurs early. A reserve call-up on the same day that U.S. forces first deploy assumes that the decision is made at the earliest feasible opportunity.

Limited chemical use. TAA assumed limited use of chemical weapons by enemy forces in each of the MRCs. Because of the constrained amount of chemical weapons modeled, some TAA participants did not believe the scenario provided a realistic representation. A more intensive chemical attack was modeled in a single MRC adverse case excursion. Results of this excursion indicated a requirement for additional support forces, but this is not reflected in the overall TAA base case requirement of 672,000 spaces. For example, casualties resulting from chemical attacks were not modeled in TAA 2003, so the medical support requirement they would generate was not identified.

Changes to any of these assumptions would have resulted in higher force requirements than those determined in TAA 2003. However, rather than present a range of requirements to reflect the results of less favorable assumptions, the Army focused solely on the base case in arriving at the results of TAA 2003. A list of the key assumptions used in TAA 2003 is provided in appendix IV.

Support force requirements would also have been higher had the Army not taken steps to eliminate some workload requirements from consideration in TAA. For example, no requirements were added to support coalition partners, although historically the Army has provided such support. OSD officials estimate that support to coalition partners would result in an additional requirement of 6,500 to 20,000 spaces. Also, support force requirements were determined based on a steady-state demand rate, which does not account for above-average periods of demand. This approach, called smoothing, disregards the cumulative effect of work backlogs. Smoothing can be problematic for units whose resources are based on the amount of workload to be performed, such as transportation, fuel supply, and ammunition supply units. For example, fuel off-loaded onto a pier will remain on the pier until transportation is available to move it.
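The pier example can be made concrete with a small simulation. The sketch below uses hypothetical daily figures, not Army planning factors, to show how sizing transportation to the average, or smoothed, demand leaves a backlog whenever deliveries exceed trucking capacity.

```python
# Hypothetical daily fuel deliveries to a pier (tons); not Army planning data.
arrivals = [500, 2_000, 400, 1_800, 300, 2_200, 300]

smoothed_capacity = sum(arrivals) / len(arrivals)   # about 1,071 tons/day if sized to average demand

backlog = 0.0
for day, delivered in enumerate(arrivals, start=1):
    moved = min(backlog + delivered, smoothed_capacity)
    backlog = backlog + delivered - moved           # fuel left sitting on the pier
    print(f"day {day}: {backlog:,.0f} tons awaiting transport")

# A requirement sized to the smoothed rate applies no resources against this
# carryover; a backlog-aware requirement would either size trucking toward peak
# demand or accept the delay, which is why unsmoothed requirements for
# transportation units can be far higher than smoothed ones.
```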
With smoothing, this backlog of fuel is forgotten; no resources are applied toward it because the Army model does not take into account workload that was not performed previously. Rather, the model considers each time period during the operation as a discrete, independent event. The effects of smoothing tend to diminish over time. However, for relatively short wars, such as those envisioned in illustrative planning scenarios contained in defense guidance, the impact can be significant. For TAA 2003, the effect of smoothing understated the support force requirement by more than 28,000 positions, according to Army officials. The branches most affected by smoothing were transportation (more than 18,400 positions) and quartermaster (more than 3,800 positions), the two branches with the highest number of unmet requirements, smoothing notwithstanding. Army officials told us that the requirement for cargo transfer and truck companies during the first 30 days of the first MRC is almost twice as great (183 percent) when the requirement is not smoothed, and three times as great over the entire conflict.

Since TAA 2003 requirements are based on the two-MRC scenario, some officials have questioned whether the Army has given adequate attention to the role of OOTWs in the post-Cold War period and the demands these operations place on Army forces. In particular, some DOD officials, including CINCs, have concerns that the Army has not adequately considered delays or degradation in capability resulting from the extraction of forces from an OOTW to an MRC, or the potential demands on supporting forces resulting from multiple OOTWs. Despite these concerns, the Army has no plans to change its approach to OOTWs in the currently ongoing TAA 2005.

Defense guidance directed the Army to base TAA 2003 requirements on either two nearly simultaneous MRCs or on one MRC and one OOTW, whichever produced the greater requirement. To make this assessment, the Army modeled the force structure requirements of four individual OOTW excursions using defense illustrative planning scenarios and supporting intelligence and threat analysis information. These included requirements for a peace enforcement, humanitarian assistance, peacekeeping, and a lesser regional contingency operation. Based on its modeling results, the Army concluded that the requirements for one OOTW plus an MRC were less than the two-MRC war-fight requirement. In fact, the Army found that the aggregate support requirements of all four OOTWs were less than the support requirements for one MRC. Accordingly, the Army believes the needs of OOTWs can be satisfied by fulfilling the MRC requirements.

The Army also observed that OOTWs could stress certain support specialties and used its excursion results to help “sharpen its assessment” of how Army resources should be allocated. For example, the Army conducted quick reaction analyses of the operational concept for employment and support of forces under the four defense planning OOTW scenarios. Among other results, these analyses identified a need for additional active Army support specialties, including transportation and quartermaster capability. The Army also found these specialties to be in short supply when it examined the impact of redeploying forces from an OOTW to an MRC. During OOTWs, the Army relies on active support forces and reserve volunteers, prior to a presidential call-up of reserve forces.
To help mitigate this risk, Army officials told us they decided, to the extent possible, to redistribute resources during TAA to help overcome these key shortfalls. As shown on table 2.2, the Army shifted positions from other lower priority requirements to both the transportation and quartermaster branches in TAA 2003, although shortages remain and these branches are still heavily reliant on reserve forces. In the event the United States becomes involved in a major conflict, defense guidance assumes that the Army will withdraw its forces committed to OOTWs to respond to an MRC. Neither the Army nor the defense guidance acknowledges any potential for delays or degradation of mission capability of forces previously assigned to OOTWs in determining the Army’s support force requirements. However, both the Army’s own analyses and comments from the CINCs question this assumption. For example, as part of its risk assessment for TAA, the Army conducted an excursion to determine whether involvement in a significant OOTW would result in insufficient support force structure for the first 30 days of an MRC. The Army analysis found that about 15,000 active support forces participating in a sizable OOTW were required for this first MRC. The Army assumed it could extract these forces from the OOTW without delays or degradation in capability, but it provided no analysis to support this position. In contrast, TRADOC Analysis Center, in conducting a classified study on strategic risks, assumed as a given, that 20,000 Army active component resources would be committed to one or more OOTWs and would not be available to participate in the two-MRC war fight. Another TRADOC analysis has highlighted the reconstitution challenges encountered when moving support forces from an OOTW environment to an MRC, where personnel and equipment requirements frequently differ. During the planning phase of TAA 2003, the Forces Command commander recommended that the Army first determine the level of force structure it was willing to commit to OOTWs and then exclude this OOTW force from participating in the first MRC war fight. Both the TRADOC Analysis Center and the Forces Command commander were acknowledging that extraction from OOTWs could not be performed without consequences. CINCs also expressed concern regarding the Army’s handling of OOTWs in TAA 2003. For example, the CINC, U.S. Atlantic Command, stated that his major concern was in transitioning from an OOTW to an MRC, especially in the case of units with unique or highly specialized training and/or equipment. Similarly, the CINC, U.S. European Command asserted that some allowance must be developed in TAA to account for OOTW-type requirements, considering (1) their impact on a heavily committed resource base (i.e., active Army combat and support personnel) and (2) the time necessary to extract the troops from such missions if U.S. forces must be shifted to contend with an overwhelming threat to U.S. strategic interests. The CINC believes this is particularly important because U.S. commitments to these operations are significant and the trend to involve U.S. forces in such operations is on the rise. Our past review further supports the CINCs’ concerns. We reported that critical support and combat forces needed in the early stages of an MRC may be unable to redeploy quickly from peace operations because certain Army support forces are needed to facilitate the redeployment of other military forces. 
In addition, our follow-on peace operations study cited this deficiency as significant, because in the event of a short-warning attack, forces are needed to deploy rapidly to the theater and enter the battle as quickly as possible to halt the invasion.

As part of its analysis of the four OOTW excursions, the Army developed troop lists and overall size estimates for each type of OOTW. These force size estimates suggest that multiple OOTWs could result in a major commitment of personnel resources—resources that have not been fully evaluated in the TAA process. This is the view of the current CINC, U.S. European Command, based on his expanded troop involvement in Bosnia, Macedonia, Turkey, and Africa. The CINC asserts that essential support personnel have been stretched to the limit in resourcing the military operations in his area of geographic responsibility, including those associated with providing fuel supply and distribution capacity, heavy truck transportation, military police, fuel handling, and communications repair. By their nature, these operations tend to be manpower intensive. Thus, the CINC stated that the next TAA process should consider how to include specific operational scenarios of a lesser regional scale (i.e., OOTWs), in addition to the two MRCs.

The Army lacks the quantitative data to assess how such potentially burdensome and repeated deployments of support troops in OOTW-like operations affect its forces. However, comments from both the CINCs and some Army officials suggest the need for improved force structure planning for such contingencies. Army officials responsible for TAA responded that the Army must assume that the forces needed for OOTW-type operations will come from the same pool of forces identified for use in the event of one or more MRCs, because this is a defense guidance requirement. As a result, the Army plans no future changes in how TAA approaches multiple OOTWs and their resourcing implications. This includes TAA 2005, which is now underway.

In resourcing its support requirements for fighting two MRCs, the Army did not consider all available personnel at its disposal. By better matching available personnel with its requirements, we believe the Army could mitigate some of the risks disclosed in TAA 2003 results. Specifically, TAA did not consider support capabilities that currently exist in the National Guard’s eight divisions, civilian contractor personnel, TDA military personnel, or civilian defense personnel. Considering these personnel, most of whom would be suitable to meet requirements for later-deploying units, could enable the Army to somewhat reduce its shortfall of support personnel. However, it would not resolve the Army’s shortage of active support personnel to meet requirements in the first 30 days. TAA gave limited recognition to some host nation support, reducing the number of positions in unresourced units to 44,000, but the Army is reluctant to place greater reliance on this resource until DOD resolves major issues as to when and how much support host nations will provide.

In TAA 2003, the Army did not consider how to use the support capability that currently exists in the eight Army National Guard divisions that the Army does not envision using during a two-conflict scenario. Based on the Army’s analysis, some support capabilities in the National Guard divisions are similar or identical to support units in short supply.
In our March 1995 report, we found that personnel in these divisions could be used to fill 100 percent of the vacant positions for 321 types of skills, including helicopter pilots, communications technicians, repair personnel, military police officers, intelligence analysts, and fuel and water specialists. In response, DOD formally concurred with our recommendation that the Army identify specific support requirements that could be met using National Guard divisional support units and develop a plan for accessing that support capability. This capability was not considered in TAA 2003 and we know of no plans to consider it in TAA 2005. Army officials advised us that while the National Guard units have specific personnel and equipment that could be used in wartime, the units do not clearly correlate with support units, and would likely deploy piecemeal rather than as full units, as the Army prefers. For this reason, Army officials advised us that there are no efforts underway to consider these personnel in TAA, as we recommended, even though in a wartime situation, the Army would, in fact, make use of these resources as a “fallback.” Since Army officials agreed that in some cases (for example, transportation), there may be potential for deployment to MRCs, planning how to access these forces in advance could reduce the number of unfilled positions in TAA. However, it would not reduce the Army’s late risk (i.e., the risk that forces might not arrive in the first 30 days of the first MRC), since these forces could not be mobilized, trained, and deployed in time. Contract personnel were also not considered in TAA 2003. The Army is already making greater use of contract personnel to provide many of the support services typically provided by its combat service support personnel. For example, through its Logistics Civil Augmentation Program, the Army has used contractor personnel to provide base camp construction and maintenance, laundry, food supply and service, water production, and transportation. In terms of timing, the Army’s current contract calls for logistical and construction support to be initiated within 15 days of the Army’s order. Among the most recent operations using contractor personnel are: Operation Restore Hope (Somalia); Operation Support Hope (Rwanda); Operation Uphold Democracy (Haiti); Operation Joint Endeavor (Bosnia); and Operation Deny Flight (Aviano, Italy). Civilian contractors were also used extensively in both the Korean and Vietnam wars to augment the logistical support provided to U.S. forces. However, the Army made no assessment in TAA 2003 to determine how much of its unresourced requirement could potentially be offset by contractor personnel. TAA 2003 also did not consider the potential use of TDA military personnel (with the exception of medical) and civilians, even though, in some instances, these personnel can and do deploy—sometimes on very short notice. Chapter 3 will discuss the need to unify the Army’s separate processes for allocating personnel to TOE and TDA, so that personnel who perform similar functions are considered together. Another potential resource pool the Army could consider to a greater extent is host nation support. To minimize war-fight risk, the Army does not use host nation support to offset requirements without a signed agreement from the host nation, and then only in cases where the joint war-fighting command is confident the support will be provided when and where needed. 
Host nation support that meets this test is only used to offset requirements for units that were not allocated any positions in TAA. In TAA 2003, host nation support offset over 14,000 of these positions. OSD officials who have reviewed TAA 2003 suggested that the Army place a greater reliance on host nation support by relaxing the requirement that the United States have formal agreements with the host nation to provide the support. OSD estimates that the Army could reduce its support force shortfall by as much as 42,000 if it were to count on likely host nation support even though formal agreements may not be in place. However, the Army’s current position is consistent with that of the Secretary of Defense, as reported in the Fiscal Year 1995 Annual Statement of Assurance to the President and Congress, under the Federal Managers’ Financial Integrity Act. In that statement, the Secretary cites a material weakness in the Central Command’s program for validating quantities of wartime host nation support presumed to be available for use by U.S. forces, but not documented by formal agreements. The Central Command’s corrective action plan requires that lists of commodities and services required from the host nations be organized by location and time of availability and that the host nations’ political and military leaderships agree to these lists. We followed up with the Central Command to determine the status of their corrective action plan and were told that while efforts were underway to obtain such agreements, nothing was definite. Chapter 4 addresses further actions under way to respond to OSD’s analysis. While TAA is an analytically rigorous process, it is not an exact science. There are many assumptions and uncertainties involved in sizing Army support forces, and seemingly small changes can dramatically alter its final outcome. Among TAA’s strengths are that it bases many of its decisions on established Army doctrine, involves senior leadership throughout the process, and includes consensus building mechanisms among the branches. On the other hand, the Army may be able to improve some aspects of TAA’s methodology. For example, not all TAA model inputs were scrutinized to ensure they were free from error; the process does not easily accommodate changes that occur during its 2-year implementation cycle; TAA’s transportation model is not rerun with the required force; and the Army does not prioritize deficiencies that remain and develop action plans to mitigate risk. Participants’ exposure to TAA modeling was limited and focused on the results of the war gaming, not its methodology and detailed assumptions. Nonetheless, in TAA 2003, participants detected errors in model inputs late in the process, after the models had been run and requirements had been identified. While allocating positions, participants began to question whether fuel and water consumption rates had been understated. Since the TAA process had already been delayed as the Army considered how to account for OSD’s planned 20,000 reduction in end strength, the Army had an opportunity to convene a supplemental conference to allow time to rerun the models with revised inputs. The result was an additional support requirement of 48,000 positions. This experience caused some participants to question the degree to which the Army had scrutinized its planning data and assumptions. It also provides an illustration of how changes in the model inputs can dramatically alter the final results of TAA. 
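A simple illustration of this sensitivity: when a support requirement scales with a consumption planning factor, a modest error in that factor translates directly into a large number of positions. The figures in the sketch below are hypothetical and show only the mechanics of the calculation, not actual TAA planning factors or results.

```python
# Hypothetical illustration of model-input sensitivity; not actual TAA planning factors.
def water_support_positions(gallons_per_soldier_day: float,
                            supported_troops: int = 260_000,
                            gallons_handled_per_position_day: float = 150.0) -> int:
    """Positions needed to produce and distribute water for the supported force."""
    daily_demand = gallons_per_soldier_day * supported_troops
    return round(daily_demand / gallons_handled_per_position_day)

baseline = water_support_positions(6.0)
for factor in (5.0, 6.0, 7.0, 8.0):
    positions = water_support_positions(factor)
    print(f"{factor} gal/soldier/day -> {positions:,} positions "
          f"({positions - baseline:+,} vs. baseline)")
```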
In another example, the Army was able to reduce its medical-related support requirements in TAA 2003 by reducing the medical evacuation time from 30 to 15 days. Previously, the policy was 15 days within the first 30 days of the conflict and 30 days thereafter. This one change, which was supported by the Army’s medical branch, reduced the need for hospital beds in theater by 35 percent. This change led to reductions in branches like engineer and quartermaster, and in some types of medical units. Both OSD and Army officials agree that key model inputs, such as those for fuel, ammunition, and medical, need to be reviewed and validated because they can have such a significant impact on TAA results. The Army is responsible for providing certain logistics support to the other services during the two MRCs. TAA acknowledged the need for Army personnel to support the Air Force, the Navy, and the Marine Corps forces, and the Army solicited their wartime requirements through the war-fighting CINCs. For example, in TAA 2003, the Army’s assistance consisted primarily of providing overland transport of bulk fuel and ammunition. Based on CINC inputs, the Army added about 24,000 support positions to assist the other services in these areas, meeting 79 percent of their requirements. According to a Forces Command official, during a war, the war-fighting CINCs determine where to allocate these personnel. Additionally, some Army officials believe that some of the logistical support requirements such as those for transportation may be understated because the Army typically receives a poor response from the CINCs concerning the other services’ requirements. The Army acknowledges it needs more accurate estimates of the other services’ needs. Because of the time needed to complete a full TAA cycle, almost 2 years, the Army may find that key assumptions or data inputs, while valid at the time, have essentially been overtaken by events. TAA has a limited ability to accommodate changes in strategy or key assumptions that occur beyond its initial planning phase. This inability to accommodate change undercuts the Army’s case that TAA is focused on the future, that is, Army force structure required 9-years out. The following examples in TAA 2003 illustrate this point. First, soon after TAA 2003 was completed, the Secretary of Defense issued new guidance reflecting a significant change in scenarios. TAA 2003 assumed that the MRCs would be sequenced differently, consistent with earlier guidance. A subsequent analysis by the Army showed that if the more current guidance had been used, an additional 40,000 warfight support positions would have been required. TAA 2005 could also be impacted by changes in defense strategy since the Army plans to run its models based on the existing two-conflict strategy. The ongoing Quadrennial Defense Review could change this strategy and lessen the usefulness of the Army’s TAA results. Second, in the middle of the TAA 2003 process, OSD issued a directive for the Army to reduce its active end strength by 20,000 toward a goal of 475,000 as early as practical, but no later than 1999. Army officials told us that TAA could not accommodate this change since it could not anticipate what parts of its force would be affected by the mandated cut, and any changes to its combat forces would affect how the Army fights. This, in turn, would result in changes to various inputs to the war fight model itself. 
The TAA process could be enhanced if additional analyses were conducted to reveal the impact of force size on the movement of forces to fight two major conflicts. The Army could have refined its mobility assessment by running the TAA 2003 required force through its transportation model, rather than exclusively relying on the earlier TAA 2001 required force. TAA models were run in the early stages of the process using a prior TAA (i.e., TAA 2001) generated force structure to establish a baseline for flowing forces into theater and to fight the war. At the conclusion of this phase of TAA, the Army determines its total war-fighting requirement. However, the Army does not rerun its models with this “required” TAA 2003 force to assess the impact of this larger force on moving forces to theater. Army officials agreed that rerunning its transportation model using the required force would improve TAA, and the Army is currently considering how to use its iterative modeling capability to its best advantage in TAA 2005. The Army does not prioritize force deficiencies that remain after TAA is completed and all force structure decisions are made, nor does it indicate what is being done to mitigate war-fighting risks. Examples of risk reduction measures include: use of new technology to overcome personnel shortages; new training initiatives (e.g., cross training personnel to perform more than one function); changing doctrine where appropriate; or drawing on other resource pools not addressed in TAA (e.g., civilians, reserves, and contractors). Although not formally documented in the TAA 2003 process, the Director of Army Force Programs told us that he is identifying actions to further mitigate the risks identified in TAA 2003. The Director cited studies on the feasibility of home station deployment and having unequipped reservists falling in on prepositioned equipment located in counterpart active Army units (e.g., the Army’s truck fleet could handle a greater workload if it had more drivers to take more shifts). In a period of declining resources, actions such as these could help the Army use its available resources more efficiently. While the Army believes it can support two MRCs, given existing force levels, and 10 fully active divisions, it has accepted some risks—most notably the lack of sufficient active support forces during the first 30 days of an MRC. TAA results indicate that 42 percent of all required support forces needed in the first 30 days of the first conflict will arrive late—about 79,000 soldiers. These late arrivers are tasked to provide essential services such as medical, engineering, transportation, and quartermaster support. The Army is also counting on the arrival of about 15,000 predominantly support personnel previously deployed to OOTWs during the first 30 days, even though CINC and Army officials question their availability and readiness during this time frame. Further, because the Army discounts peaks in demand in establishing its requirements through a technique called “smoothing,” actual workload for some types of units during the first 30 days is actually much higher than TAA 2003 requirements reflect—almost twice as high for some transportation units. Finally, TAA results reveal that the Army will have few active support forces—about 12 percent of total support forces required—available to support the second MRC and that 19,200 required active support positions in existing units are not authorized to be filled. 
Moreover, units totaling 58,400 positions are not authorized any personnel at all because the Army’s total wartime support requirement exceeds available personnel authorizations. The Army plans to mitigate this risk by relying on host nation personnel and converting some Army National Guard combat forces to support forces. These conversions are not yet funded and could take many years to be accomplished.

Our examination of TAA assumptions and model inputs found that the Army used many favorable assumptions that may have understated risks to U.S. forces, such as limited chemical use by the enemy, assured port availability, and no delays in the call-up of reserve forces. In particular, the Army does not appear to have adequately considered delays or degradation in capability resulting from the extraction of forces from an OOTW to a major conflict, or the potential demands on support forces resulting from multiple OOTWs. War-fighting commanders believe that such multiple OOTWs will add to the Army’s war-fighting risk. Since the Army does not conduct sensitivity analyses to assess the impact of less favorable assumptions, it does not know the extent to which changes in these underlying assumptions would increase Army support requirements and related risks. On the other hand, the Army could mitigate some risks by expanding its resource pool to include support capabilities that currently exist in the National Guard and TDA forces, as well as contract services—resources that, with the exception of medical, are presently excluded from TAA.

While TAA is an analytically rigorous process with extensive modeling and wide participation by key Army personnel, some aspects of its methodology could be improved. Some participants questioned whether the Army had sufficiently scrutinized key model inputs, such as consumption factors for fuel and water. In addition, by not rerunning the campaign models with its required force, the Army missed an opportunity to fully assess how mobility limitations affected risk.

To improve TAA’s ability to accurately project war-fighting requirements and allocate the Army’s personnel resources, we recommend that the Secretary of the Army
- reexamine key model inputs to ensure they are accurate and consistent;
- perform analysis to determine how multiple OOTW support force requirements might differ from support force requirements based on two MRCs and bring any variances to the attention of the Secretary of Defense so that he can consider them in developing defense guidance;
- perform sensitivity analyses on significant model inputs, assumptions, and resourcing decisions to determine their impacts on war-fighting risk. For example, although the Army used assumptions established by defense guidance, determining the implications of less favorable conditions, such as delayed call-up of reserves, would provide the Army with additional information on which to base its assessment of risk;
- rerun TAA models with the required force to assess the impact of force size on mobility requirements; and
- determine how support units resident within the eight National Guard divisions, TDA military personnel, contractor personnel, and DOD civilians can be used to fill some support force requirements.

In written comments on a draft of this report, DOD fully concurred with four of our recommendations and partially concurred with one (see app. V). DOD noted that the Army has already planned some actions to resolve issues we identified.
For example, DOD stated the Army is closely scrutinizing its model inputs for TAA 2005, beginning with a rigorous review of all 3,000 allocation rules, and major studies to review fuel consumption factors and casualty rates. The Army also plans to analyze the impact of multiple OOTWs on support requirements and agreed that the current assumption that all units involved in OOTWs will be immediately available for the war fight is flawed and overly optimistic. The Army also plans to conduct other sensitivity analyses and excursions in TAA 2005, beyond those required by defense guidance. Further, the Army will rerun TAA models with the required force to provide the force flow data needed to improve its analysis of risk. However, DOD only partially concurred with our recommendation to consider other personnel resources in filling its support force requirements. The Army plans to consider some types of Army National Guard Division assets to fill support force shortfalls where the capabilities are nearly a match, such as aviation assets. The Army also plans to further analyze how to use its TDA structure to meet both OOTW and war-fighting requirements. In the future, deployable TDA forces will be considered part of the Army’s operating force. However, DOD differs with us on recognizing civilian contractor personnel in TAA. The Army believes that while contractor personnel enhance the Army’s capabilities, they should not be considered an available resource in TAA since contractor personnel are not funded in the outyears of the Program Objective Memorandum. The Army also expressed concern about its ability to provide security to contractors in an MRC environment. Because contractor personnel have historically been used by the Army to provide support in many different types of overseas environments, both OOTWs and MRCs, we believe that, as a minimum, the Army could treat contractor personnel in the same way it treats host nation support—as an offset to unmet requirements. The Army can make assumptions concerning the funding of the Logistics Civil Augmentation Program, just as it makes assumptions about such issues as the availability of host nation support, the size of the active Army force, or the level of modernization of the force in future years. Despite numerous Army initiatives to improve its TDA requirements determination process since the late 1970s, the Army cannot allocate its TDA personnel based on the workload required to complete TDA missions. As a result, the Army does not have a tool to prioritize TDA functions and has made across-the-board cuts in TDA that are not analytically based. Ongoing command and Army-wide initiatives to manage TDA based on workload, to include analyzing what work needs to be done and assessing how processes can be improved, will require senior Army leadership support for successful implementation. The Army has reviewed some TDA functions and identified a potential to reduce its TDA by up to 4,000 military positions as a result of its initial streamlining efforts. However, the Army’s end strength will not be reduced; rather, the positions will be used to offset shortfalls in TOE support forces. Plans for some of these initiatives, however, have not been finalized and it is difficult to definitively quantify some savings. Army TDA streamlining will continue through 2007. 
The Army is evaluating several options to consolidate its major commands, which could further reduce TDA requirements for active military personnel and introduce more efficient business practices. However, such a reorganization could be hampered without workload-based requirements. The Army’s potential for streamlining TDA will also be limited by several laws and regulations, such as civilian downsizing and TDA positions that are protected from Army force reduction initiatives. Finally, some personnel in TOE and TDA units perform similar functions which calls into question the need for separate resourcing processes. Some features of the Army’s process for using TDA medical personnel to fill positions in TOE medical units may provide a model for other functions with both TOE and TDA missions. Weaknesses in the Army’s ability to fully define force requirements for the institutional Army in terms of workload are long standing and have been reported by us and the Army since the late 1970s. Workload-based management is designed to help managers determine the resources needed to complete a job and logically respond to resource cuts. For example, using workload-based management, a manager could determine how many trainers would be required to train a certain number of students in a specified period of time. Weaknesses in its program leave the Army unable to analytically support its TDA requirements or define the risks of reducing this portion of the Army forces. Further, a weak requirements process prevents the Army leadership from making informed choices as to possible trade-offs among TDA functions and commands based on highest priority needs. According to Army regulation and policy, force requirements are to be logically developed from specific workload requirements derived from mission directives. Responsibility for allocating personnel resources to fulfill TDA missions belongs to the major commands. For fiscal year 1998, the Army projects its TDA force at over 123,000 military positions and over 247,000 civilian positions. Although TDA functions are carried out by military and civilian personnel depending on the type of mission, our focus was on the active military Army. Table 3.1 shows the distribution of active military TDA positions for fiscal year 1998. In response to our 1979 report criticizing the Army for its lack of workload-based information on which to determine personnel requirements, the Army developed a workload-based personnel allocation system, known as the Manpower Staffing Standards System. This system was intended to determine minimum essential requirements to accomplish TDA workload and identify operational improvements to increase efficiency and effectiveness. However, command officials told us that this process was time consuming and labor intensive, taking as long as 3 years to analyze a single function, and that the standards generated by it were often obsolete by the time they were issued. In 1994, the Army Audit Agency found that, as a result of these problems and lack of management tools to collect workload data, managers were not able to effectively determine or manage their TDA workloads and thus could not be assured that limited personnel resources were being distributed to the highest priority functions. 
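The trainer example cited above can be expressed as a simple workload calculation. The sketch below is illustrative; the student load, course length, and instructor availability are assumed values rather than Army standards, but they show the kind of computation a workload-based system is intended to support.

```python
import math

# Illustrative workload-based staffing computation; all inputs are assumed values.
def trainers_required(students_per_year: int,
                      instruction_hours_per_student: float,
                      available_hours_per_trainer_year: float) -> int:
    """Minimum trainers needed to deliver the annual instructional workload."""
    workload_hours = students_per_year * instruction_hours_per_student
    return math.ceil(workload_hours / available_hours_per_trainer_year)

# Example: 4,800 students, 40 hours of instruction each, and trainers available
# for 1,600 classroom hours per year imply a requirement for 120 trainers.
print(trainers_required(4_800, 40, 1_600))
```

A proposed cut could then be weighed against this workload (for example, a reduction to 108 trainers implies fewer students, fewer hours per student, or more hours per trainer) rather than applied across the board.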
During our review, Army headquarters officials acknowledged that the Army cannot articulate its TDA force structure in terms of workload, and we found varying levels of compliance with the Army’s workload-based management regulation at the major commands we contacted. The Intelligence and Security Command, with a 1998 TDA active end strength authorization of over 6,000, does not have a formal manpower study program due to downsizing and changes in workload. Allocation of TDA resources is done based on command guidance with functional staff’s input. An official at Forces Command, which has a 1998 active component TDA of about 13,000, told us that workload-based manpower management had not been a high priority in recent years because of turmoil in the workforce caused by downsizing and reallocation of workload due to base realignments and closures. Forces Command has a plan to conduct a comprehensive manpower assessment at each of its installations by the year 2000. This assessment will include validating work requirements, developing manning levels based on workload, and using cross-installation comparisons of functions to establish a model for future manpower requirements determination. TRADOC, the Medical Command, and the Army Materiel Command had more extensive workload-based management processes. Both the Medical Command and TRADOC employ workload-based standards for about 60 percent of their TDA positions and have processes to review workloads and resource allocations according to established requirements. The Army Materiel Command, which has a largely civilian workforce, began a review of all of its functions in March 1995 and completed this review of over 60,000 authorized military and civilian positions in January 1997. The review includes validating units’ requirements, analyzing and projecting workload, and applying available resources to that workload. Having visibility over the workload and the resources needed to complete it gives commanders greater control over their resources and enables them to identify inefficiencies. For example, at the Medical Command, the Surgeon General holds “bankruptcy hearings” for units that exceed established workload benchmarks. The Assistant Secretary of the Army for Manpower and Reserve Affairs has developed a new methodology for workload-based management that is intended to address concerns that the Army does not know how big its institutional force needs to be to satisfy its requirements. The Army’s methodology includes an analysis of (1) the work that needs to be done based on organizational mission, (2) how to improve processes through better methods, benchmarking, capital investment, automation and improved facilities, and (3) the most appropriate technique for linking people to work. In addition, the Army is pilot testing an automated system for collecting and analyzing workload information and monitoring efficiency based on time spent completing functions. Army officials told us that the system could provide managers at all levels significant visibility over TDA resources and could ultimately be used to make trade-offs among TDA functions Army-wide. The Assistant Secretary’s office is also increasing its review of major commands’ requirements determination processes. Differing management philosophies on the use of workload-based requirements could challenge the Army-wide adoption of workload-based management. 
For example, one resource management official told us that he preferred across-the-board percentage cuts rather than cuts weighted according to workload, because this allows the commanders more autonomy in how they allocate their resources. In October 1996, the Assistant Secretary of the Army for Manpower and Reserve Affairs stated that a challenge to adopting workload-based management will be changing the perspective of resourcing officials from a philosophy of managing personnel resources based on budget to managing personnel resources based on workload. Although managing to budget allows commanders to allocate resources based on available budgets, we believe that using it as the sole-allocation process does not provide the commander a vision of what cannot be done as a result of declining budgets and may discourage commands from identifying efficiencies if they know they will be receiving a cut regardless. In addition, managing to budget does not provide an analytical basis on which to make trade-offs among TDA workload priorities. For example, during deliberations for TAA 2001, which was completed in 1993, an attempt by major command representatives to allocate a cut in TDA positions among their commands ended in gridlock, in part due to the lack of an analytical basis on which to divide the resources. The result was that each command’s TDA military positions were cut by 7.5 percent, regardless of its individual missions or requirements. Such a cut impacts some commands more than others. For example, Intelligence and Security Command officials told us that 75 percent of its officers were controlled by other agencies; therefore, it could not eliminate any of these positions. As a result, an across-the-board 7.5 percent reduction applied to Intelligence and Security Command officers fell disproportionately on the remaining 25 percent of its officers that the command had authority over. Efforts to allocate resources based on workload will require the support of the Army leadership to be successful. The long-standing weaknesses with the Army’s process, despite numerous efforts to improve it, suggest that a higher level of reporting and oversight may be warranted. However, the Army has not reported its historic lack of compliance with its workload-based allocation policy as a material weakness under the Federal Managers’ Financial Integrity Act (P.L. 97-255). Policy implementing the act requires agencies to establish internal controls to provide reasonable assurance that programs are efficiently and effectively carried out in accordance with applicable law and policy. One criterion for determining whether an internal control weakness is material is if it significantly weakens safeguards against waste. If lack of workload analysis, which does not comply with Army policy and does not safeguard against waste, was reported to the Secretary of Defense as a material weakness, the Secretary of the Army would be required to develop a corrective action plan with milestones for completion. As required by OSD guidance, responsible OSD officials would then need to assess whether this problem is a DOD-wide systemic weakness and whether it is a weakness of sufficient magnitude to be reported in OSD’s annual statement of assurance to the President and Congress. Despite the lack of workload data to define specific requirements of the TDA force, the Army is re-engineering its processes and redesigning the overall TDA organization through a series of streamlining initiatives. 
Although these efforts have some aspects that are similar to workload analysis, they are one-time, Army-wide assessments intended to provide a forum for re-engineering many Army functions. The Army defines re-engineering as a “fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance.” In contrast, workload management is a tool for conducting analysis at a more micro level on a unit-by-unit basis. The streamlining and re-engineering effort, known as the Force XXI Institutional Army Redesign, is one component of the overall Force XXI redesign. The other two components are the redesign of the combat forces and an effort to incorporate information age technology into the battlefield. The institutional redesign will take place in three phases to correspond with presidential budget cycles. Phase I, completed in March 1996, resulted in modifications to the 1998-2003 Army Program Objective Memorandum. Phases II and III will be completed in time to update the 2000-2005 and the 2002-2007 budgets, respectively. As a result of the phase I reviews of TDA missions, including acquisition, training, mobilization, recruiting, personnel management, and the redesign of the Department of the Army Headquarters, the Army eliminated 13 headquarters offices, realigned a major command, and identified almost 4,000 active military positions that will be cut from TDA and transferred to the TOE end strength between 1998 and 2003. Before the TDA cuts were identified, TAA 2003 applied 2,000 TDA positions to unmet support force requirements in anticipation of the streamlining results. Officials told us that the remaining 2,000 positions will also be transferred to the deployable portion of the force to fill shortages in units that are at less than full strength, although they could not specify the units. Furthermore, many of the 4,000 positions that are being shifted are based on initiatives that have not been fully tested or approved. Thus, the expected savings are not assured. The largest single planned transfer of 2,100 positions is the result of an Army proposal to replace active TDA military assigned to the Senior Reserve Officer Training Corps with reserve component, noncommissioned, and retired officers. This proposal is being studied by TRADOC and, according to the Army, would require a change in legislation to authorize the use of retired and additional reserve personnel. If pilot testing shows the concept is infeasible, or if the enabling legislation the Army is proposing is not passed, the Army would need to find another means to accomplish this function since it has already taken these TDA reductions. In another example, the Army anticipates reducing attrition, thereby freeing up 750 TDA positions associated with training and recruiting. The Army's plan to reduce attrition is based primarily on establishing an advisory council to provide commanders with attrition statistics and review policies that affect attrition. As a result, the Army cannot be certain that the anticipated TDA transfers can be realistically accomplished. The Army's efforts to streamline its institutional force are linked to a conceptual model delineated in draft Department of the Army Pamphlet 100xx, entitled “Force XXI Institutional Force Redesign.” The model identifies the core competency of the TDA force, divides this competency into 4 core capabilities, and divides the 4 capabilities into 14 core processes, as shown in table 3.2.
The Army plans to align its organizations around the core capabilities and core processes, so that one office would have lead responsibility for each process. For example, under the current structure, several commands, including TRADOC, the Intelligence and Security Command, and U.S. Army, Europe, have responsibility for developing Army doctrine. Under the streamlined model, TRADOC would have the lead responsibility for doctrine writing. The Army will use this framework to align the TDA organization with core processes. The Army has developed three organizational models that would reduce the number of major commands; the models are intended to eliminate duplication, establish clearer lines of authority, and streamline resource management, and they could further reduce TDA military personnel. For example, one model would reduce the Army from its current structure of 14 major commands to a total of 10 commands, with 3 major commands and 7 Army service component commands to support the CINCs. The three major commands would be aligned to the "Develop the Force," "Generate and Project the Force," and "Sustain the Force" core capabilities, with the Department of the Army Headquarters assuming responsibility for the "Direct and Resource" capabilities. However, these models are illustrative, were presented as a starting point for further discussion, and do not directly address shortfalls in defining requirements based on workload. Accordingly, officials said they could not provide a specific date by which any of these models would be in place or estimate how many positions might be saved through streamlining. Additional streamlining of the Army's TDA force must accommodate legislative, regulatory, and budgetary constraints. These constraints can influence the size and composition of the institutional Army force but are outside the Army's span of control. For example, DOD's ongoing civilian drawdown limits the Army's ability to convert military positions to generally less expensive civilian positions. In 1994 and 1996, we reported that there were opportunities for the Army to convert certain enlisted and officer support positions from military to civilian status, but that to overcome impediments to conversion, the Secretary of Defense would need to slow the civilian drawdown or the Congress would need to reprogram funding. Further, officials in the commands we visited pointed to budgetary challenges to converting military positions to civilian positions. First, the commands are reluctant to convert military positions to civilian positions because they cannot be assured that operations and maintenance money, which funds civilian pay, will be available to hire a new civilian. Officials told us that the conversion of a military position to a civilian position is authorized years before the civilian is hired, and sometimes, by the year of execution, inadequate operations and maintenance funding prevents the command from hiring the new civilian. Second, local commanders have a disincentive to civilianize because civilian positions are paid in full from the installation's budget while military personnel are paid out of the Army's centralized military personnel budget. Also, some active military TDA positions are required by law or controlled by other agencies. As a percentage of the active component TDA force, these positions, sometimes referred to as "fenced" positions, will have increased from 29 percent in 1991 to a projected 37 percent in 2001.
For example, the National Defense Authorization Act for Fiscal Year 1991 prohibits the Secretary of Defense from reducing medical personnel without providing certification to Congress that the number reduced is in excess of that required and that the reduction would not cause an increase in costs to those covered under the Civilian Health and Medical Program of the Uniformed Services. Positions controlled by other agencies include those assigned to the National Foreign Intelligence Program. Under executive order, these positions are required and budgeted by the Director of Central Intelligence and cannot be reallocated without his permission. Table 3.3 summarizes the major categories of fenced positions and the change from 1991 to 2001. Although fencing ensures that selected high-priority missions are adequately staffed, to the extent that positions are fenced, the Army must disproportionately reduce other, non-fenced TDA categories to absorb across-the-board reductions. The distinction between a deployable TOE force and a nondeployable TDA force is becoming less clear, which calls into question the necessity of maintaining separate processes to allocate personnel resources. The draft Army Pamphlet 100xx acknowledges a blurred distinction between operational and institutional forces because institutional forces are increasingly being called on to perform tactical support functions in areas such as intelligence, communications, transportation, logistics, engineering, and medical support. For example, an Intelligence and Security Command official told us that all of the command's TDA military personnel, along with almost 600 of its civilians, are considered deployable. At Forces Command, we were told that TDA personnel assigned to directly support a TOE unit are expected to deploy with that unit. Another example is military police. In recognition of historical deployments of TDA military police to support law and order operations in theater, the Army plans to convert 1,850 TDA military police positions to TOE. The initiative would establish modular military police organizations designed to provide capabilities in peace, conflict, and war. However, with the exception of medical personnel, TDA specialties with potential use in a deployment are not considered available for distribution among requirements in the TAA process. TAA does not model the relative risks of reducing TDA units compared to reducing below-the-line support TOE units. Nor does it consider trade-offs between below-the-line support units and support units embedded in combat divisions. Thus, the Army could overstate the risk of shortages in a below-the-line TOE branch when, in practice, TOE support units in combat divisions or TDA personnel are capable of performing similar functions. A unified resourcing process would give the Army visibility over all capabilities available to complete its missions, regardless of their classification as TOE or TDA. The Army's process for handling medical requirements may provide a model for functions that are resident and required in both TOE and TDA forces. During peacetime, some deployable hospitals are maintained by a small cadre of personnel. During deployments, these hospitals are filled out with designated TDA medical personnel whose peacetime TDA mission is to staff Army medical treatment facilities. Medical reservists are in turn called up to backfill the medical treatment facilities. In TAA 2003, about 5,000 requirements were filled with predesignated TDA medical positions.
While it may not be feasible to backfill certain specialties with reservists, two features of the medical model could be reviewed for broader application. First, the medical model formally recognizes and quantifies the dual duties of personnel assigned to TDA functions in peacetime but expected to deploy in operations. Second, it gives visibility to all medical assets, regardless of their classification as TOE or TDA forces. Army initiatives to analytically define and allocate TDA resources according to workload have not been effective. Although ongoing initiatives show some promise, they will require significant support from the Army leadership. If implemented, workload-based management could identify opportunities to streamline TDA functions and ensure that active military positions are allocated most efficiently. Of the roughly 4,000 positions identified for transfer to the TOE force by the Force XXI institutional redesign, many are contingent on Army plans that either have not been finalized or are difficult to quantify. As a result, the anticipated reallocation should be viewed with caution. There is potential for further savings as the Army streamlines its TDA by aligning the organization with TDA core processes; however, streamlining may be limited by legislative, regulatory, and budgetary constraints. The reliance of TOE units on TDA personnel to complete missions calls into question the need for separate resourcing processes. A more unified process would permit the Army to consider how it can best meet requirements with the wider range of personnel at its disposal. In addition, it would allow for better management of personnel resources, one of the Army's most expensive budget items. To improve the management and allocation of personnel resources to the institutional Army, we recommend that the Secretary of the Army (1) report the Army's long-standing problem with implementing workload-based analysis to the Secretary of Defense as a material weakness under the Federal Managers' Financial Integrity Act, to maintain visibility of the issue and ensure that action is taken, and (2) closely monitor the military positions the Army plans to save as a result of Force XXI initiatives and have a contingency plan in place in the event that these savings do not materialize. DOD's comments on these recommendations appear in appendix V. DOD agreed that the Secretary of the Army should report the Army's long-standing problems in managing its institutional personnel as a material weakness under the Federal Managers' Financial Integrity Act and develop a sound basis for allocating resources to these functions. As part of this effort, the Army intends to assess the potential benefit of new workload-based management tools being pilot tested by an office of the Assistant Secretary of the Army. DOD also concurred with our recommendation that the Secretary of the Army closely monitor the military positions saved under Force XXI. The Army's intent is to apply any such savings to authorization shortfalls in existing support units. However, the Army acknowledges that it is too soon to speculate on the size of any future savings. Reducing active Army support forces does not appear feasible now based on TAA 2003 results, which show that the Army cannot meet its early deployment needs. However, a smaller combat and TDA force may be possible in the future, based on ongoing Army initiatives and efforts under way to review U.S. defense strategy and forces.
Nevertheless, OSD's current position on active Army end strength was not supported by detailed analysis. OSD cited potential end strength savings from the Army's Force XXI streamlining initiatives as a basis to reduce the Army's end strength to 475,000. However, while Force XXI's emphasis on digitization and more efficient logistics practices may achieve end strength savings in the long term, these savings do not appear likely to occur by 1999, the time frame OSD established to achieve the 20,000-position drawdown. Following its decision to reduce the Army by 20,000 positions, OSD reviewed TAA 2003 results. OSD's study questioned the Army's determination of its support requirements but did not examine downsizing of the active Army. OSD's assessment of the appropriate size of the active Army could change as a result of the congressionally mandated Quadrennial Defense Review. DOD is expected to assess a wide range of issues, including the defense strategy of the United States, the optimum force structure to implement the strategy, and the roles and missions of reserve forces. The number of divisions required or the mix of heavy and light divisions may change if a new strategy is adopted. Also, options may exist for restructuring the Army's active divisions by integrating some reserve forces. Options to expand the role of the reserves would have the effect of reducing requirements for active combat forces. In April 1995, to free resources for modernization programs, OSD directed the Army to reduce its end strength by 20,000 positions no later than 1999. This guidance was reflected in DOD's 1997 FYDP, which reduced the Army's active force by 10,000 positions in both 1998 and 1999, along with related military personnel funding. However, in March 1996, the Army Chief of Staff testified that the active Army should not get any smaller. Instead, the Army planned to identify savings within its own budget sufficient to avoid the 20,000-position reduction. A memorandum from the Secretary of Defense cited the Army's Force XXI initiative as the means by which the Army would identify efficiencies to reduce the force. However, according to Army documentation, Force XXI's primary focus is to increase capability by leveraging technology, not to attain specific end strength reductions. The Army is experimenting with ways to streamline its TOE forces through its Force XXI redesign of its combat divisions, known as Joint Venture. For example, Joint Venture's focus on increasing situational awareness by digitizing the battlefield and better managing logistics could reduce the size of Army divisions. However, the division redesign is not yet finalized and will not be fully implemented until 2010. The Army's streamlining of its TDA force under Force XXI has identified about 4,000 excess active military spaces, but the Army plans to reallocate those spaces to fill unmet requirements in active TOE support forces. The Army's efforts to streamline TDA under Force XXI, and additional streamlining initiatives and policy changes proposed by Army leadership, enabled the Army to increase its military personnel account throughout its fiscal years 1998-2003 Program Objective Memorandum to pay for the 20,000 spaces eliminated in DOD's 1997 FYDP. Based on Army projections, we estimate that from 1998 to 2003, the Army will need about $3 billion in savings to pay for the 20,000 positions.
The Army has identified almost $9 billion in savings over that same period, but considers only about $2 billion of those savings to be final; the remaining $7 billion will require coordination and oversight among several Army organizations to be realized. For example, recommendations to reduce logistics costs, including reductions in acquisition lead time and spare parts inventories, account for over $2 billion in savings and will result in overhead cuts to the logistics community. The benefit of the overhead cuts will be realized by the commands through lower logistics costs. An Army official told us that such a disconnect between the entity doing the cutting and the entity receiving reductions in cost could make some of the initiatives difficult to manage. Further, some of the savings are based on across-the-board cuts to headquarters overhead that are not analytically based. As discussed in chapter 3, the Army has identified a potential to reduce its TDA force by 4,000 active military positions as a result of its initial Force XXI streamlining initiatives. Ongoing streamlining initiatives could further reduce TDA requirements for active military personnel. As a separate initiative, OSD reviewed TAA 2003's methodology and results, but did not examine the issue of active Army end strength. OSD questioned whether the Army's 672,000-position TOE requirement was too high, based on its analysis of selected TAA assumptions and model inputs and its comparison of Army support requirements based on TAA with those used in a 1995 DOD war game known as Nimble Dancer. OSD's assessment was limited to an analysis of TOE forces, both active and reserve, and did not consider the question of availability of reserve forces during the first 30 days of a conflict, as did the Army's TAA analysis. Nor did OSD assess another risk factor the Army deemed important: the availability of active forces for the second MRC. The OSD study did not recommend a smaller Army but did ask the Army to study some issues that affect the size of its TOE force. The Army did not agree that its support force requirements were too high. However, at the direction of the Deputy Secretary of Defense, the Army did agree to review model inputs and assumptions that OSD questioned and to determine the impact of any changes on the size of the Army's support forces. The Army also responded that it would make adjustments to TAA 2003 results if any errors were identified. Among OSD's principal concerns were the following: Casualty estimates. OSD questioned whether the TAA models produced valid casualty estimates because of variances between Army casualty estimates and actual casualties experienced in battles dating back to World War II. Army casualty estimates are not used to size the Army medical force, but do influence support requirements in the theater of operations, such as for the quartermaster and engineer branches. Fuel consumption. OSD questioned whether Army fuel consumption rates were too high, based on a review of actual fuel issued to units during the Gulf War. Host nation support. OSD believed the Army could reduce its active support requirements by placing greater reliance on support from host nations. Currently, the Army reduces its unmet requirements by the amount of host nation support it expects to receive, based on signed agreements. (See chapter 2 for a discussion of material weaknesses in DOD's host nation support program.)
The Army has arranged for an independent analysis of its casualty estimation methodology and has asked the Director of the Joint Staff to query the CINCs concerning the availability of additional host nation support. The Army is conducting its own detailed analysis of its fuel consumption rates. OSD used the 1995 DOD war game Nimble Dancer to evaluate the reasonableness of the Army's TOE requirements. By comparing the Nimble Dancer Army force requirement of 457,000 TOE spaces with the Army-generated TAA 2003 war-fighting requirement of 672,000 TOE spaces (195,000 combat and 477,000 support positions), OSD identified a potential overstatement of 215,000 spaces. After adjusting for different assumptions used in TAA 2003 and Nimble Dancer, OSD concluded that the Army's TAA 2003 requirements were too high. While there may be insights to be gained by analyzing some aspects of the Nimble Dancer war game, we believe comparing the Army's TAA 2003 force requirements against the Nimble Dancer force is problematic. In Nimble Dancer, DOD identified the availability of sufficient support forces as critical to the outcome of the conflict and determined that shortages could delay the start of the counterattack in the second MRC. However, as we noted in our June 1996 report on Nimble Dancer, DOD did not model or analyze in detail the sufficiency of support forces during the war game. For purposes of its baseline modeling, DOD assumed that support forces would accompany combat units when they deployed. Game participants held discussions concerning the impact of support force shortfalls, but deferred further analysis to the Army's TAA 2003. The 457,000-space force OSD used as a baseline for comparison with TAA 2003 was a notional Army force based on TAA 2001, and its purpose was to assess mobility requirements, not end strength. Only the combat forces were played in the war game itself. Given the limited consideration of support forces in Nimble Dancer, we do not believe comparisons with the Army's TAA 2003 are meaningful. Although OSD asserts that Army support requirements are too high, it endorsed the concept of converting reserve positions from combat to support to fill the Army's unmet requirements. Such conversions have been recommended by us in past reports, by the Commission on Roles and Missions, and most recently in a National Guard Division Redesign Study. In addition to the studies previously mentioned, the Deputy Secretary of Defense directed OSD analysts to assess whether DOD has sufficient mobility assets to move (1) the Army's full TOE requirement of 672,000 positions and (2) the force actually planned in the Army's fiscal years 1998-2003 Program Objective Memorandum. In particular, the Deputy Secretary is interested in how scenario timelines would be affected if mobility assets are constrained to those actually planned. During TAA 2003, the Army relied on the Mobility Requirements Study Bottom-Up Review Update to establish available lift to move forces to theater. This was consistent with Secretary of Defense guidance. The National Defense Authorization Act for Fiscal Year 1997 requires DOD to conduct a Quadrennial Defense Review by May 15, 1997. An independent panel of defense experts will submit a comprehensive assessment of DOD's report and an assessment of alternative force structures by December 1, 1997.
In conducting its review, DOD must assess a wide range of issues, including the defense strategy of the United States, the force structure best suited to implement the strategy, the roles and missions of reserve forces, the appropriate ratio of combat forces to support forces, and the effect of OOTWs on force structure. The number of Army divisions or the mix of heavy and light divisions may change as a result of this study, particularly if a new strategy is adopted. For example, a strategy that places more emphasis on OOTWs might result in an active Army that has fewer heavy divisions and assigns a higher percentage of its active forces to support units. The review will also provide an opportunity to reassess the role of the Army's reserve forces. For example, as a result of the BUR and the Army's experience in the Persian Gulf War, the Army discontinued its reliance on reserve component "round-up" and "round-out" brigades to bring the active divisions to full combat strength during wartime. However, options may exist to adopt some variant of this concept, such as integrating reserve forces at the battalion level or assigning reserve forces a role in later-deploying active divisions. Options to expand the role of the reserves would have the effect of reducing requirements for active combat forces. OSD did not support with detailed analysis its plan to reduce the Army's active end strength. OSD's assessment of TAA 2003 identified issues worthy of further analysis, but did not draw conclusions about the size of the active Army. Future active Army end strength will likely be affected by several ongoing Army streamlining initiatives, and by potential changes to military strategy and the role of reserve forces resulting from the upcoming Quadrennial Defense Review. TDA streamlining may identify additional opportunities to reduce active TDA personnel by reducing the number of major commands and adopting broader use of workload analysis. Force XXI's emphasis on digital technology and just-in-time logistics may result in smaller combat divisions in the future. Other options for restructuring combat forces include reassessing the mix of heavy and light divisions and assigning reserve forces a role in later-deploying active divisions. However, given the risks the Army has accepted in its active support forces, we do not believe it is feasible for the Army to reduce its active support forces at this time. In addition to DOD's official agency comments (see app. V), the Army provided technical comments on a draft of this report concerning the role of reserve forces in any new strategy proposed by the Quadrennial Defense Review. The Army believes that the use of round-up/round-out brigades is a Cold War concept not viable for an early-response power projection force. However, the Army says it is currently studying options to employ "multi-component" units, that is, combining an active unit with an associated reserve unit that is organized with fully trained personnel and minimal equipment. Upon mobilization, associate units would deploy and augment the active component unit, or earlier-deploying reserve component units, increasing their capability by adding qualified personnel. Our report does not recommend a return to the round-up and round-out concept used in the past. Rather, our intention was to suggest that there may be a variant of this concept that would allow the Army to make greater use of its reserve forces.
The Quadrennial Defense Review provides an opportunity for such new concepts to be considered. We have not reviewed the multi-component concept currently being analyzed by the Army, but agree that new approaches that better integrate the Army’s active and reserve forces and optimize the use of available equipment should be explored.
Pursuant to a legislative requirement, GAO reviewed how the Army determines its support force requirements, and the results of its most recent process for allocating support forces, known as Total Army Analysis (TAA) 2003. GAO found that: (1) it does not appear feasible to have a smaller active Army support force at this time, but a smaller active combat force and institutional force may be possible in the future; (2) a smaller active support force today would certainly increase the Army's risk of carrying out current defense policy; (3) current initiatives being explored by the Army regarding its institutional force could lead to greater efficiencies and thus a smaller active force; (4) improvements in the requirements determination process for both support forces and institutional forces could provide greater assurance that the size and composition of the Army is appropriate to meet war-fighting needs; (5) on the basis of TAA 2003 results, the Army believes it can deploy sufficient support forces to meet the requirements of two nearly simultaneous major regional conflicts (MRC) with moderate risk; (6) because it lacks adequate active support forces and must rely on reserve forces that take more time to be readied to deploy, an estimated 79,000 support forces needed in the first 30 days would arrive late; (7) support forces needed for the second conflict would consist of only 12 percent active forces; (8) high reliance on reserves for use in the second MRC may entail risk if the second MRC occurs without warning, or if mobilization is delayed; (9) existing active support units are short another 19,200 required positions and some required support units exist only on paper; (10) TAA 2003 had some limitations and the Army's risk assessment depends largely on the assumptions and model inputs that were adopted for TAA 2003; (11) the Army used many favorable assumptions that, although consistent with defense guidance, understated risk; (12) the Army's recent efforts to streamline the institutional active Army by identifying better ways to organize and adopt more efficient business practices have identified up to 4,000 military positions that the Army plans to use to offset active support shortfalls; (13) the Army may reduce the number of major commands, which could result in some additional force savings in the future; (14) however, the Army's efforts to make its institutional force more efficient and potentially smaller are hampered by long-standing weaknesses in its process to determine institutional force requirements; (15) GAO's analysis indicates that the Department of Defense (DOD) has not supported its proposal to reduce the active Army to 475,000 by 1999 with sound analysis; and (16) DOD has an opportunity to explore these and other alternatives during its Quadrennial Defense Review.
To be eligible for the Job Corps program, an individual must generally be 16 to 24 years old at the time of enrollment; be low income; and have an additional barrier to education and employment, such as being homeless, a school dropout, or in foster care. Once enrolled in the program, youth are assigned to a specific Job Corps center, usually one located nearest their home and which offers a job training program of interest. The vast majority of students live at Job Corps centers in a residential setting, while the remaining students commute daily from their homes to their respective centers. This residential structure is unique among federal youth programs and enables Job Corps to provide a comprehensive array of services, including housing, meals, clothing, academic instruction, and job training. ETA administers Job Corps’ 125 centers through its national Office of Job Corps under the leadership of a national director and a field network of six regional offices located in Atlanta, Boston, Chicago, Dallas, Philadelphia, and San Francisco. Job Corps is operated primarily through contracts, which according to ETA officials, is unique among ETA’s employment and training programs (other such programs are generally operated through grants to states). Among the 125 centers, 99 are operated under contracts with large and small businesses, nonprofit organizations, and Native American tribes. The remaining 26 centers (called Civilian Conservation Centers) are operated by the U.S. Department of Agriculture’s (USDA) Forest Service through an interagency agreement with DOL. Job Corps center contractors and the USDA Forest Service employ center staff who provide program services to students. According to ETA officials, the primary responsibility for ensuring safety and security at Job Corps centers resides with center operators. Also, according to ETA officials, the Office of Job Corps has oversight and monitoring responsibility to ensure that contract operators are in full compliance with their contract and that both contract centers and USDA-operated Civilian Conservation Centers follow Job Corps’ Policy and Requirements Handbook. In September 2015, as part of its overall effort to improve safety and security for students, ETA established the Division of Regional Operations and Program Integrity within the national Office of Job Corps. This division is responsible for coordinating regional operations and activities, including efforts to strengthen communications between the national and regional offices, strengthen quality assurance, and promote continuous improvement. The division is also responsible for reviewing the results of all risk management data, center safety and culture assessments, and responses to safety and security deficiencies at individual centers. For example, this division is to monitor the safety and security of Job Corps centers through ongoing oversight by regional offices, including daily monitoring of SIRS data. Job Corps’ Policy and Requirements Handbook requires centers to report certain significant incidents to the national Office of Job Corps and to regional offices in SIRS within 6 or 24 hours of becoming aware of them, depending on the incident. Specifically, centers are required to report numerous categories of incidents, including deaths, assaults, alcohol and drug-related incidents, serious illnesses and injuries, and hospitalizations (see appendix I for definitions of these categories of incidents). 
Centers must report incidents involving both Job Corps students and staff, and incidents that occur onsite at centers as well as those that occur at offsite locations. Offsite incidents include those that occur while students are participating in program-related activities, such as off-center training and field trips. Offsite incidents also include those that occur while students are not participating in program-related activities, such as when they are at home during breaks. In some cases, the incident categories in SIRS are related to the specific infractions defined in the Policy and Requirements Handbook, which are classified according to their level of severity. Level I infractions are the most serious, and include such infractions as arrest for a felony or violent misdemeanor or possession of a weapon, and are required to be reported in SIRS. Level II infractions include such infractions as possession of a potentially dangerous item like a box cutter, or arrest for a non-violent misdemeanor. The majority of these infractions are required to be reported in SIRS. Minor infractions—the lowest level of infractions— include failure to follow center rules, and are not required to be reported in SIRS. Within the Policy and Requirements Handbook, ETA establishes a Zero Tolerance Policy, which specifies actions that centers must take in response to certain incidents. ETA implemented changes to this policy effective on July 1, 2016, which impacted the categorization and number of reportable incidents. Under the prior Zero Tolerance Policy, there were fewer infractions categorized as Level I, which are the most severe and result in termination from the program. The July 2016 policy changes broadened the types of infractions categorized as Level I. For example, ETA elevated several infractions previously classified as Level II to Level I, and added several new categories of reportable incidents. According to ETA officials, they made these changes to reflect a heightened emphasis on student safety. ETA currently surveys all students enrolled in Job Corps in March and September each year to collect information on a variety of topics, including their perceptions of safety at Job Corps centers. The current student survey contains 49 questions on various aspects of the Job Corps program, including career development services, interactions between students and staff, access to alcohol and drugs, and overall satisfaction with the program. The survey includes 12 questions on students’ perceptions of safety at centers. ETA has been conducting this survey since 2002, and in recent years has administered it twice a year. ETA officials told us they plan to survey students more frequently beginning in July 2017. Specifically, they plan to survey students on a monthly basis regarding their perceptions of safety, and on a quarterly basis regarding their overall satisfaction with the program. ETA uses the responses to the safety-related survey questions to calculate a center safety rating, which represents the percentage of Job Corps students who report feeling safe at each center, as well as a national safety rating, which represents the percentage of Job Corps students who report feeling safe nationwide. Our preliminary analysis of ETA’s SIRS data shows that Job Corps centers reported 49,836 safety and security incidents, including those that occurred both onsite and offsite, from January 1, 2007 through June 30, 2016. 
During this time period, approximately 539,000 students were enrolled in the program, according to ETA officials. Three types of incidents represented 60 percent of all reported incidents: serious illnesses or injuries (28 percent), assaults (19 percent), and drug-related incidents (13 percent). The remaining 40 percent of reported incidents included theft or damage to center, staff, or student property (12 percent), breaches of security or safety (6 percent), and all other types of incidents (22 percent). During this time period, Job Corps centers reported 265 deaths, including 61 deaths that occurred onsite and 204 that occurred offsite. The most common causes of these reported deaths were homicide (25 percent), medical causes (23 percent), and accidents (22 percent). In figure 1 below, 246 of these deaths are captured in the “Other” category, and 19 are captured in the “Assault” category. Our preliminary analysis showed that from January 1, 2007 through June 30, 2016, 76 percent of the reported safety and security incidents occurred onsite at Job Corps centers, and 24 percent occurred at offsite locations (see fig. 2). While most reported incidents occurred onsite, our preliminary analysis showed that the majority of reported deaths occurred offsite. During this time period, of the 265 reported deaths, 77 percent occurred offsite, and 23 percent occurred onsite. The vast majority of homicides reported during this time period occurred offsite, and very few occurred onsite. Of 65 reported homicides, 61 occurred at offsite locations and 4 occurred onsite. During this time period, the most common types of reported onsite incidents were generally different from the most common types of reported offsite incidents, although reported assaults were common in both locations. The most common types of reported onsite incidents were the same as the most common types of incidents overall: serious illnesses or injuries (33 percent), assaults (20 percent), and drug-related incidents (16 percent). Of all reported offsite incidents, the most common types were thefts or damage to center, staff, or student property (23 percent), motor vehicle accidents (15 percent), assaults (14 percent), and serious illnesses or injuries (14 percent) (see fig. 3). Our preliminary analysis showed that from January 1, 2007 through June 30, 2016, most reported violent incidents—specifically assaults, homicides, and sexual assaults that occurred both onsite and offsite—involved Job Corps students, and considerably fewer of these incidents involved program staff. During this time period, Job Corps centers reported 10,531 violent incidents, which represented 21 percent of all reported onsite and offsite incidents. Students were victims in 72 percent of these reported violent incidents, while staff were victims in 8 percent of these incidents. Similarly, students were perpetrators in 85 percent of these reported violent incidents, while staff were perpetrators in 1 percent of these incidents (see table 1). Each of these reported violent incidents involved at least one victim or perpetrator who was a Job Corps student or staff member, but some of these incidents also involved victims or perpetrators who were not associated with the Job Corps program. Our preliminary analysis of ETA's student satisfaction survey data from March 2007 to March 2017 showed that while students generally reported feeling safe at Job Corps centers, they reported feeling less safe on certain safety and security issues.
Overall, across all 12 of the safety-related survey questions, an average of 72 percent of students reported feeling safe during this time period. However, the average percentage of students who reported feeling safe on each individual survey question ranged from 44 percent to 91 percent. For 7 of the 12 questions, student responses were above the 72 percent average, which indicates that students felt safer on those issues; however, for 5 of the questions, student responses were below the average, which indicates that students felt less safe (see table 2). For example, an average of 44 percent of students reported that they had never heard students threaten each other, or had not heard such threats within the last month. The remaining 56 percent of students, on average, reported hearing such threats at least once in the last month. ETA uses students' responses to the safety-related survey questions to calculate a safety rating for each Job Corps center and a national safety rating for the program overall. According to ETA officials, the center safety rating represents the percentage of students who report feeling safe at a center, and the national safety rating represents the percentage of students who report feeling safe nationwide. Throughout the period of March 2007 through March 2017, the national safety rating remained above 82 percent, according to ETA data. ETA officials said they use these ratings as management tools to assess students' perceptions of safety at individual centers and nationwide, and to determine whether ETA needs to act upon these results to better address students' safety and security concerns. Chairwoman Foxx, Ranking Member Scott, and Members of the Committee, this concludes my prepared remarks. I look forward to answering any questions you may have. For further information regarding this testimony, please contact Cindy Brown Barnes at (202) 512-7215 or brownbarnesc@gao.gov. Contact points of our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Mary Crenshaw (Assistant Director), Caitlin Croake (Analyst in Charge), David Chrisinger, Alexander Galuten, LaToya Jeanita King, Rebecca Kuhlmann Taylor, Grant Mallie, Sheila McCoy, Meredith Moore, Mimi Nguyen, Lorin Obler, Matthew Saradjian, Monica Savoy, Almeta Spencer, Amy Sweet, Walter Vance, Kathleen van Gelder, and Ashanta Williams.

Appendix I. Categories of Incidents in the Significant Incident Reporting System (SIRS)

ETA's definitions of the incident categories, where its SIRS Technical Guide provides them, are as follows.

An incident involving the discovery of alcohol on center, or involving any student found in possession of alcohol or charged by local law enforcement agencies with illegal alcohol consumption or possession. Incidents which require medical treatment due to the physical effects of drug use (alcohol poisoning, etc.) should be reported under the "Medical Incident" Primary Incident Code.

This code applies when a student is arrested for an incident that occurred prior to his/her enrollment in Job Corps.

These are acts that are commonly known as assault, battery, or mugging; any assault with a weapon or object; or any altercation resulting in medical treatment for injuries. Mugging (robbery) is included in this category because it pertains more to an assault upon a person than on property. Homicide has been removed as a Primary Incident Code and is now listed under Assault as a Secondary Incident Code.
This code applies to any incidents that threaten the security and safety of center students, staff, and property which may result in injury, illness, fatality, and/or property damage. Examples include arson, bomb threat, gang-related incidents, possession of a gun, possession of an illegal weapon, unauthorized access to center buildings, grounds, or restricted areas, and verbal threats.

Attempted suicide is a deliberate action by a student to self-inflict bodily harm in an attempt to kill oneself. Centers need only report a suicide threat (suicidal ideation) if it results in evaluation by a physician or mental health consultant.

Centers must report the death of any student who is enrolled in Job Corps regardless of his/her duty status. Centers are only required to report the death of a staff member if the death occurs while on duty, either on center or off center.

Incidents involving any student or staff found in possession of or charged by local law enforcement agencies with a drug offense (e.g., the illegal use, possession, or distribution of a controlled substance), or the discovery of drugs on center. Incidents which require medical treatment due to the physical effects of drug use (overdose, etc.) should be reported under the "Medical Incident" Primary Incident Code.

Sexual misconduct includes the intentional touching, mauling, or feeling of the body or private parts of any person without the consent of that person. Sexual harassment or unsolicited offensive behavior such as unwelcome sexual advances, requests for sexual favors, and other verbal or physical contact of a sexual nature is also included.

Motor vehicle accidents involving any Job Corps student, on-duty staff member, and/or center-owned vehicle should be reported using this code. Incidents in which a pedestrian is struck by a motor vehicle should be reported under the "Medical Incident" Primary Incident Code.

Safety/Hazmat incidents involve hazardous materials/chemicals in any solid, liquid, or gas form that can cause harm to humans, plants, animals, property, or the environment. A hazardous material can be radiological, explosive, toxic, corrosive, a biohazard, an oxidizer, or an asphyxiant, or have other characteristics that render it hazardous in specific circumstances. Examples of toxic hazardous materials include mercury, gasoline, asbestos, lead, used syringes, and blood; non-toxic materials such as water and oxygen can become hazardous under specific circumstances.

Medical incidents include any diagnosis of injury, illness, or disease which is serious or widespread among students and/or staff (e.g., communicable disease outbreak, reaction to medication/immunization, emergency surgery, hospitalization, emergency room treatment, etc.). Incidents which require medical treatment due to the physical effects of drug and/or alcohol use (drug overdose, alcohol poisoning, etc.) should be included in this category.

Sexual assault includes any alleged non-consenting sexual act involving forceful physical contact, including attempted rape, rape, sodomy, and others. If forceful physical contact is not used, the incident should be reported as Sexual Misconduct.

Property incidents are any incident by students or staff that involves the destruction, theft, or attempted theft of property; this includes but is not limited to automobile theft, burglary, vandalism, and shoplifting. If any type of force is used against another person, the incident is to be reported under the "Assault" Primary Incident Code. Property incidents also include natural occurrences/disasters or any other incident threatening to close down the center or disrupting the center's operation (e.g., hurricane, flooding, earthquake, water main break, power failure, fire, etc.).

For several other incident categories, ETA's Significant Incident Reporting System (SIRS) Technical Guide does not provide a definition.

Certain incident categories were added to SIRS in June 2016. Some of these previously existed in SIRS but were renamed in June 2016; others were entirely new categories as of June 2016. Centers were not required to officially report data in these new categories until July 1, 2016.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The deaths of two Job Corps students in 2015 raised concerns about the safety and security of students in this program. The Job Corps program serves approximately 50,000 students each year at 125 centers nationwide. Multiple DOL Office of Inspector General (OIG) audits have found deficiencies in the Office of Job Corps' efforts to oversee student safety. ETA and the Office of Job Corps have taken steps to address these concerns, but in March 2017, the DOL OIG raised new safety and security concerns, including some underreporting of incident data, and made related recommendations. This testimony is based on GAO's ongoing work on these issues and provides preliminary observations on (1) the number and types of reported safety and security incidents involving Job Corps students, and (2) student perceptions of safety at Job Corps centers. GAO analyzed ETA's reported incident data from January 1, 2007 through June 30, 2016. GAO's preliminary analysis summarizes reported incidents in the aggregate over this time period but the actual number is likely greater. GAO also analyzed student survey data from March 2007 through March 2017, reviewed relevant documentation, and interviewed ETA officials and DOL OIG officials. GAO's preliminary analysis of the Department of Labor's (DOL) Employment and Training Administration's (ETA) incident data found that Job Corps centers reported 49,836 safety and security incidents of various types that occurred both onsite and offsite between January 1, 2007 and June 30, 2016. During this time period, approximately 539,000 students were enrolled in the program, according to ETA officials. ETA's Office of Job Corps is responsible for administering the Job Corps program, which is the nation's largest residential, educational, and career and technical training program for low-income youth generally between the ages of 16 and 24. As shown in the figure, the three most common types of reported incidents were serious illnesses or injuries, assaults, and drug-related incidents. More than three-quarters of the reported incidents occurred onsite at Job Corps centers, and the rest occurred offsite. Most reported violent incidents—specifically assaults, homicides, and sexual assaults that occurred onsite and offsite—involved Job Corps students. For example, students were victims in 72 percent of these reported incidents, while staff were victims in 8 percent, and the remaining incidents involved victims who were not associated with Job Corps. GAO's preliminary analysis of ETA's student survey data from March 2007 through March 2017 found that students generally reported feeling safe, but they reported feeling less safe with respect to certain issues. The student survey contains 49 questions about students' experiences in the Job Corps program, including 12 questions related to safety at centers. Across all 12 of these safety-related survey questions, an average of 72 percent of students reported feeling safe over this 10-year period. However, the average percentage of students who reported feeling safe on each individual survey question ranged from 44 percent to 91 percent. For example, an average of 44 percent of students reported that they had never heard students threaten each other, or had not heard such threats within the last month. The remaining 56 percent of students, on average, reported hearing such threats at least once in the last month. 
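To make the rating arithmetic described above concrete, the short sketch below shows one way an average percent-safe figure and its range across questions might be computed; the question labels and percentages are hypothetical placeholders, not ETA's actual survey data or its exact calculation method, and the sketch is only an illustration of the kind of averaging GAO describes.

```python
# Illustrative only: hypothetical per-question results, not ETA's actual survey data.
# Each value is the percentage of students whose response to that safety-related
# question was counted as "feeling safe."

percent_safe_by_question = {
    "q01_feel_safe_on_center": 91.0,   # hypothetical values
    "q02_feel_safe_in_dorms": 85.0,
    "q03_never_heard_threats": 44.0,
    # ...remaining safety-related questions would be listed here...
}

values = list(percent_safe_by_question.values())

average = sum(values) / len(values)          # overall average percent reporting safe
lowest, highest = min(values), max(values)   # range across individual questions

print(f"Average percent reporting safe: {average:.0f}%")
print(f"Range across questions: {lowest:.0f}% to {highest:.0f}%")
```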
GAO is not making recommendations in this testimony but will consider recommendations, as appropriate, when ongoing work is finished. GAO incorporated comments from ETA as appropriate.
DOE manages the largest laboratory system of its kind in the world. Since the early days of the World War II Manhattan Project, DOE’s laboratories have played a major role in maintaining U.S. leadership in research and development. DOE is responsible for ensuring that the laboratory system—with 22 laboratories in 14 states, a combined budget of over $10 billion a year, and a staff of about 60,000—is managed in an effective, efficient, and economical manner. DOE contracts with educational institutions and private sector organizations for the management and operation of 18 of its laboratories. (App. I lists DOE’s national laboratories.) The remaining four laboratories are staffed by federal employees. DOE pays its laboratory contractors all allowable costs. DOE can also pay contractors a separate fee, or profit, as compensation for operating the laboratories. Fees are based on the contract value and the technical complexity of the work to be performed at a laboratory, but also on the degree of financial liability or risk that a contractor is willing to assume. Under performance-based contracting principles, fees can include both a fixed amount and an amount that is linked to achieving performance objectives. One of DOE’s major goals in performance-based contracting is to develop performance objectives for each contractor that are specific, results-oriented, measurable, and reflect the most critical activities. DOE’s implementation of performance-based contracting for its laboratories is in a state of transition. While most of its laboratory contracts contain some performance-based features, the contracts negotiated by DOE vary from contract to contract. For example, DOE is incorporating performance-based features in all of its laboratory contracts, although measures vary substantially in number, ranging from a low of 7 in one laboratory contract to about 250 in another. Also, DOE has negotiated performance fees in only 9 of its 18 laboratory contracts because the remaining laboratories are still operating under DOE’s traditional approach in which fees are not linked to performance. We found that similar laboratories managed by similar contractors have different contracts. The wide diversity of contract features reflects DOE’s philosophy of relying on DOE field units to tailor contracts to local conditions and contractors’ preferences. Since introducing performance-based contracting in 1994, DOE and its laboratory contractors have struggled to find the right mix of measures that accurately and reliably capture the contractors’ performance. According to DOE field staff, in the early years of contract reform, DOE encouraged its field units to construct as many measures as they could, but provided limited guidance on how to accomplish this task. As a result, early attempts led to large numbers of performance measures. A large number of measures diminishes the importance of any single measure, whereas a small number results in measures that are too broad to be meaningful. For example, a DOE field official told us, “The original guidance from DOE Headquarters was to [develop performance measures] as much as possible. Unfortunately, there was inadequate guidance on how to do this. . . . The number of performance measures . . . is too large. However, if we fail to cover an activity [with a measure] the contractor may not give the attention needed to the activity.” DOE and its laboratories are still attempting to develop the right number of measures. 
For example, we found that the number of performance measures in the laboratory contracts we examined ranged from a low of 7 measures at the Ames Laboratory in Iowa to about 250 at the Idaho National Engineering and Environmental Laboratory in Idaho. DOE and its contractors are also working to develop measures that reliably address the most important activities of the laboratories. According to a field official, DOE’s early attempts at developing performance measures resulted in contractors focusing only on those activities that were tied to performance fees, while neglecting other important activities. Another DOE site official stated, “[P]erformance-based contracting tends to focus too much on the monetary reward . . . and less on an analysis of performance. The incentive at the labs should be good science, not more dollars.” Developing the right number and type of performance measures is an evolving process between DOE and its contractors. Most DOE and contractor representatives told us that they are making progress in finding measures that accurately and reliably reflect performance, particularly in management and operations activities. Measuring a contractor’s performance in science and technology is more difficult. Science and technology measures are broader in scope and typically rely on peer reviews and a contractor’s self-assessment for evaluating performance. Although performance fees are a major feature of performance-based contracting, only 9 of the Department’s 18 laboratory contracts have them. Eight of the remaining laboratory contracts operate under DOE’s traditional fixed-fee arrangement, and one laboratory contract has no fee. Fixed fees are earned regardless of performance and were commonly used before DOE adopted performance-based contracting as its normal business practice. Appendixes I and II summarize laboratory fee arrangements and illustrate the wide variety in use. In commenting on a draft of this report, DOE said that by the end of calendar year 1999, the majority of laboratory contracts that provide fees will have performance-based fee structures. Performance fees were introduced as a way of encouraging superior performance and can include an incentive and an award fee. An incentive fee is usually applied to activities for which progress can be accurately measured, for example, cleaning up 40 barrels of toxic waste within a prescribed period of time. An award fee is usually applied to tasks that are harder to measure and require a more subjective judgment of performance, for example, assessing a contractor’s attention to community relations. Performance fees represent the amount of a contractor’s total fee placed “at risk” since the fee that could be earned is determined by how well the contractor performs. As the following examples show, some laboratory contracts include both types of performance fees, while others rely solely on an incentive fee or an award fee. Still others have neither and use only fixed fees. At the Sandia National Laboratories in New Mexico and the Oak Ridge National Laboratory in Tennessee, DOE negotiated fixed-fee contracts. Both of these laboratories are operated by subsidiaries of the Lockheed Martin Corporation—a for-profit company. DOE officials told us they were confident that incentive fees were not needed for these laboratories because the existing Lockheed Martin contractors’ performance is superior and introducing incentive fees might distract the contractors from performing all essential work.
At the Idaho National Engineering and Environmental Laboratory in Idaho, operated by Lockheed Martin Idaho Technologies Company, DOE uses a combination of fixed, incentive, and award fees. DOE officials told us that incentive fees were used because of the many different tasks that could be identified and measured, but that award fees were also needed to assess activities that required more subjective judgments. At the Stanford Linear Accelerator Center in California, operated by Stanford University, DOE negotiated a no-fee contract, the only such arrangement in the laboratory system. According to DOE, the laboratory contractor does not want a fee for operating this laboratory because a fee would not motivate performance and may be a detriment to the conduct of outstanding science, which is the primary mission of this laboratory. The Lawrence Berkeley National Laboratory and Lawrence Livermore National Laboratory in California and the Los Alamos National Laboratory in New Mexico are operated by the University of California. The contracts contain a fixed fee and an incentive fee for meeting expectations, plus another amount for exceeding expectations. A senior DOE official acknowledged the variability in laboratory contracts but said that imposing uniform practices throughout the laboratory system would not necessarily improve the overall performance and accountability of the contractors. According to DOE and laboratory officials, there are several reasons for the variability in the contracts. First, the laboratories engage in different activities with different levels of technical complexities. Second, some contractors are willing to assume greater financial risk or liability and thus expect a higher or different fee arrangement. Finally, DOE field officials who negotiate the contracts employ features that they believe are best suited for their particular circumstances. However, we found that similar laboratories operated by similar contractors have different fee arrangements. For example, both the Lawrence Berkeley and Argonne national laboratories have similar research missions and are both managed by university contractors. However, Lawrence Berkeley’s contractor, the University of California, works under a fixed-fee plus performance fee arrangement, while Argonne’s contractor, the University of Chicago, works under a performance fee arrangement only. We also found substantial variations in contracting philosophy among DOE field officials. DOE relies on field units to negotiate its contracts, including whether to use performance-based fees, and how performance objectives and measures will be accomplished. Some of these officials told us that performance fees are important motivators, while others said performance fees can distract the contractor from other important work. In commenting on a draft of this report, DOE provided us with additional reasons for the variability in contracts, including the timing of when contractors first converted to performance-based contracting, the nature of the proposals received in competitive awards, and the negotiated terms in contract extensions. In addition, DOE cited other motivations for laboratory contractors, such as their reputations in the scientific community and contract extensions. DOE’s guidance states that the purpose of performance-based contracting is to obtain better performance or lower costs or both. DOE has not analyzed the impact of performance-based contracting on its laboratory contractors. 
As a result, it has not determined whether performance-based contracting is achieving the intended objectives of reducing costs and improving performance. DOE officials told us that the amounts of fees paid to laboratory contractors have generally increased with the implementation of performance-based contracting but that it is difficult to determine the return on this investment since contractors are also assuming more risk or liability for costs previously paid by DOE. Increased liabilities include costs due to a failure to exercise prudent business judgment on the part of the contractor’s managerial personnel. DOE has not analyzed the relative costs and benefits to the government of using higher fees in performance-based contracts. We previously recommended that DOE ensure that the fees paid to contractors for incurring increased financial risks are cost-effective by developing criteria for measuring the costs and benefits to the government of this approach. DOE officials told us that while they have not conducted a comprehensive cost-benefit analysis of fees, they try to negotiate fees that make sense for individual contracts, taking into account the financial risks and incentives needed to motivate performance. Without such an overall analysis, however, it is difficult to determine the value to the government of the over $100 million spent on contractor fees for fiscal year 1998. Although DOE has not assessed the impact of performance-based contracting, limited reviews have found both progress and problems, as these examples show: Since 1997, DOE’s Office of Inspector General has issued three reports on problems the Department had in implementing performance-based incentives at three facilities (one of which was a laboratory). Problems reported by the Inspector General included contracts with poorly developed performance measures and fees that were paid to contractors before agreement was reached on the performance incentives. In 1997, DOE’s Office of Procurement issued a report on the use of performance-based incentives. The report noted that the use of incentives has been effective in directing contractors’ attention to performance outcomes and has improved communications concerning performance expectations. The report also noted that DOE field units are improving the quality of their contracts. However, the report pointed out that implementation was sometimes inconsistent and that performance objectives sometimes were overly focused on process milestones rather than on outcomes. DOE’s laboratories were not the focus of this review, however. Our July 1998 report on DOE’s performance-based incentive contracts noted that the Department had taken steps to correct many of the problems cited in the Inspector General’s reports, including issuing guidance, conducting training, and incorporating lessons learned into fiscal year 1998 contract incentives. We noted that although DOE maintained that its performance-based incentives have been effective in achieving the desired end results, it had not been clear whether these successes were due to performance-based incentives or to an increased emphasis on program management. None of these assessments focused exclusively on laboratory contracts. In our discussions, DOE field staff generally credited performance-based contracting with improving their ability to set expectations for the Department’s laboratories, and several laboratory contractors concurred that this was a benefit. 
In addition, both DOE and laboratory officials cited improved communication as a benefit of performance-based contracting. Laboratory contractors also credited DOE for focusing its oversight on evaluating results rather than dwelling on strict compliance with DOE’s rules and regulations. In addition, contractors told us they have increased productivity and lowered costs, especially for the support and overhead functions. However, most of these officials also said that these advances were more the result of other initiatives, such as internal streamlining actions, than of performance-based contracting. DOE and its laboratory contractors told us that they are committed to making performance-based contracting work effectively and that the contracts are including more specific and reliable performance measures. However, since DOE has not evaluated the impact of performance-based contracting on its laboratories—owing in part to the wide variance in fee arrangements—there is limited evidence on how performance fees ensure a high level of performance by contractors at lower cost. As a result, DOE cannot show how the higher fees it is paying to contractors under performance-based contracting are of value to the government and to the taxpayers. We previously recommended that the Secretary of Energy ensure that the fees paid to contractors for incurring increased financial risk are cost-effective by developing criteria for measuring the costs and benefits to the government of this approach. DOE did not implement our recommendation and has no plans to measure the overall costs and benefits of performance-based contracting for its laboratories. DOE officials maintain that performance-based contracting is working, but this is based on anecdotal evidence. Moreover, the fees DOE negotiates are based on its best judgment of what is needed to motivate contractors and to compensate them for increased risk, but DOE’s evidence is based primarily on non-laboratory contractors, and DOE has not quantified the value of the increased risk assumed by contractors under performance-based conditions. Because DOE does not know whether performance-based contracting is improving performance at lower cost at its national laboratories and because our previous recommendation to develop criteria for measuring the costs and benefits of paying fees to contractors for incurring increased financial risk was not implemented, we recommend that the Secretary of Energy evaluate the costs and benefits of using performance-based contracting at the national laboratories. While we recognize that each laboratory contract is individually negotiated, DOE should nevertheless ensure that the fees it provides to motivate contractors and to compensate them for increased financial risk are based on an analysis of costs and benefits. The need for this type of evaluation is consistent with the principles of the Government Performance and Results Act of 1993 that require agencies to measure outcomes against their goals. We provided a draft of this report to DOE for review and comment. DOE disagreed with our conclusion on the need for determining the costs and benefits of the fees it has negotiated with its laboratory contractors. DOE noted that its performance-based contracting experience is in transition but that its evaluations show that performance-based contracting is working.
We acknowledge in our report that DOE’s evaluations of performance-based contracting show promise, but we also point out that these evaluations did not focus on the laboratories’ experiences with performance-based contracting. Because of this limitation and because of the higher fees being negotiated with the laboratories, we continue to believe it is desirable for DOE to determine if its performance-based contracting is improving performance at lower cost. DOE also commented that the variability we found in performance-based laboratory contracts reflects many different factors, including differences in the scope of work, the type of contractor, and the experiences the laboratories have with performance-based contracting features. Our report described the reasons for the variability in laboratory contracts, and we have included the additional reasons provided in DOE’s comments. We also agree that DOE’s use of performance-based contracting is evolving and that the variability we found in laboratory contracts (principally in performance measures and fee arrangements) is in part due to an ongoing learning process associated with the transition to performance-based contracting. DOE also raised a number of issues regarding the use of fees in its laboratory contracts and strongly defended its use of performance fees. We agree with many of DOE’s observations on the use of performance fees, and we are not suggesting that DOE should abandon its performance-based approach or that it should eliminate performance-based fees in its laboratory contracts. It is also not our intent to show that performance-based contracting should be abandoned if its impacts on the laboratories cannot be measured. We do believe, however, that effective implementation of performance-based contracting provisions is dependent on the ability to support the fee amounts paid through a cost and benefit analysis. DOE also provided a number of clarifications that we have incorporated in our report as appropriate. Appendix III includes the full text of DOE’s comments and our response. Our review was performed from September 1998 through April 1999 in accordance with generally accepted government auditing standards. See appendix IV for a description of our scope and methodology. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to Bill Richardson, Secretary of Energy, and Jacob J. Lew, Director, Office of Management and Budget. We will make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-3841. Major contributors to this report were Gary R. Boss and Tom Kingham.
The notes below accompany the contract and fee amounts (in millions) presented in appendix I. The contract and fee amounts shown are for the entire Savannah River Site, including the Savannah River Technology Center.
Incentive-fee contract. DOE plans to extend this contract 5 years, but is renegotiating it to make it consistent with the federal acquisition regulations format and to incorporate all contract reform features, including performance-based provisions.
Fixed-fee contract. The new contract was signed in January 1998 with a fixed fee through September 1998. DOE is still negotiating the contract for fiscal year 1999. DOE plans to negotiate a performance fee.
Incentive-fee contract. DOE plans to recompete this contract in fiscal year 1999. The current contractor, Lockheed Martin, announced it will not bid.
Incentive-fee contract. The amount of the annual available fee remains the same for each year of the 5-year contract.
Incentive-fee contract. The amount of the annual available fee remains the same for each year of the 5-year contract.
Incentive-fee contract. The amount of the annual available fee remains the same for each year of the 5-year contract.
Incentive-fee contract. DOE converted this contract from a fixed-fee to an incentive-fee type and made available $7.1 million in potential fees geared to incentives in four areas—science and technology excellence, operational excellence, leadership and management, and community relations.
Sandia National Laboratories/Sandia Corp. (Lockheed Martin): Fixed-fee contract. The contract that expired in September 1998 was renegotiated and extended noncompetitively for 5 years. The new contract remains a fixed-fee arrangement but now includes performance objectives, measures, and criteria. DOE decided that the contractor’s superior performance could be sustained with a fixed fee.
Incentive-fee contract. DOE is renegotiating this contract and plans to extend it noncompetitively for 5 years. DOE plans to make the contract consistent with the federal acquisition regulations format and to incorporate all contract reform conditions, including performance-based provisions.
Fermi National Accelerator Laboratory/University Research Associates, Inc.: Fixed-fee contract. DOE has not announced whether it will recompete or extend this contract. DOE rates the contractor’s performance as outstanding.
Award-fee contract. DOE recompeted this contract in 1998. The new contract was effective on Oct. 1, 1998, and is fixed fee until March 1999, at which time DOE intends to include an award fee for the remainder of the contract period.
Fixed-fee contract. The contractor did not want any fee, but DOE negotiated a small fee of $10,000.
No-fee contract. The contract term ended on March 31, 1998, and was extended noncompetitively on a month-by-month basis during negotiations to incorporate performance-based incentives. The contract was then extended noncompetitively for 5 years in January 1999. The contract includes performance measures and expectations, but no fee.
No-fee contract (with management allowance). The contract is currently being renegotiated so that it can be extended noncompetitively for 5 years. DOE plans the new contract to be a fixed-fee arrangement. Objectives of the negotiations are to structure the contract to be consistent with the federal acquisition regulations format and to incorporate all contract reform conditions, including performance-based features.
Fixed-fee contract. The contract was recompeted in 1998. The new contractor was selected (the Bechtel Group), but the incumbent contractor, Westinghouse Electric Corporation, protested the award. The existing contract was extended noncompetitively pending the result of a bid protest to GAO. The bid protest was denied by GAO. The new contract with Bechtel was effective February 1, 1999.
Knolls Atomic Power Laboratory/KAPL, Inc. (Lockheed Martin): Fixed-fee contract.
Savannah River Technology Center/Westinghouse Savannah River Co.: Incentive-fee contract.
The following are GAO’s comments on the Department of Energy’s letter dated April 22, 1999. 1. We have made changes to the report as appropriate in response to DOE’s comments. 2.
Our wording is drawn from DOE’s guidance on performance-based contracting, and we have made changes to our report to reflect DOE’s comments. DOE recommends that its laboratory contracts contain performance-based features, which include clear expectations described in terms of results, not how the work is to be accomplished. 3. As we stated in our report, DOE’s evaluations did not focus on the laboratory contractors, nor did these evaluations focus on the costs and benefits of performance-based contracting features, including the impact of fees. 4. We recognize that one of the purposes of providing fees is to reflect the financial risk associated with work performance, and we make this point in our report. Our 1994 recommendation questioned the cost-benefit of the increased fees, regardless of whether they were related to performance or financial risk. We continue to believe that our recommendation is relevant because DOE has not evaluated the cost and benefit of the fees it is providing to laboratory contractors. 5. We believe our wording adequately reflects the conditions discussed. Information on the laboratory fees and total contract costs is presented in appendix I. 6. We have made changes to the report as appropriate in response to DOE’s comments on contract type. We stated in our report that DOE’s performance-based contracting is in a state of transition. We also stated that there are wide variations in performance measures and fee arrangements negotiated by DOE and its laboratory contractors. This material is presented as facts describing the conditions that presently exist. Our report also describes the reasons for the variability in laboratory contracts and includes most of the reasons given in DOE’s comments. We have made changes in the report to reflect these additional reasons for the variability in DOE’s laboratory contracts. 7. Our statement that contract differences are the product of DOE’s relying on its field units to tailor contracts to local conditions is based on interviews with numerous DOE field officials. This statement is not an implied criticism of how DOE negotiates contracts. Also, we disagree with DOE’s characterization that contractors’ preferences are “generally irrelevant” when accounting for the variations that exist among laboratory contractors. As DOE noted, contractors’ preferences are reflected in the negotiation process. In our discussions with DOE field officials responsible for negotiating contracts, laboratory contractors’ preferences on fees were cited as a critical factor in determining fee structures. 8. Our report recognizes that developing the optimum number of performance measures is a challenge, as reflected in the wide range of performance measures in use even among similar laboratories. We are not suggesting that any two contractors should have the same measures or the same number of measures. Our point is that DOE continues to struggle with finding the right number of measures. To further illustrate, the University of California’s fiscal year 1998 contracts for its two weapons laboratories—Lawrence Livermore and Los Alamos—contain 83 and 120 performance measures, respectively, even though these laboratories are very similar in budget and scope. They are, however, managed by different DOE field units. 9. Our purpose in including comments we received from DOE field units is to illustrate the wide differences in philosophy about the use of fees to motivate laboratory contractors. 
Several DOE field staff, as well as contractors, told us that they strongly believe that providing fees does not motivate contractors, including both for-profit and not-for-profit contractors. Moreover, our statement that performance-based contracting has tended to focus in some instances on monetary rewards at the expense of good science was a frequent comment from both DOE field officials and laboratory contractors. Thus, it is very important to identify the need for monetary incentives where they are appropriate. Other motivations that DOE cited for laboratory contractors, such as their reputations in the scientific community and desire for contract extensions, were added to our report. These differences in philosophy account for some of the variation in contracts. 10. Our report reflects information provided directly from DOE field staff, who we were advised by DOE headquarters were the proper source for this information. The data in DOE’s comments are reflected in the appendixes to our report. We have also revised our report to show that there are now 18 laboratory contractors, reflecting a recent change in how DOE defines its laboratories. 11. DOE field officials told us that performance fees are used to encourage superior performance. Asserting that fees are used to link performance to financial reward is self-evident in this context. 12. We agree with DOE that no single approach in contracting has proven to be optimum, and we reflected this view in our report. Regarding the wide variability in fee arrangements, we stated that there was very little consistency among the contracts of similar laboratory contractors conducting similar work. We also stated that local conditions influence the variability in laboratory contracts. 13. Our wording was taken from DOE’s guidance on performance-based contracting. As we state in our report, prior assessments of performance-based contracting have not focused on laboratory contractors. We also stated in our report that DOE believes that the results from its assessments of performance-based contracting have been positive. We believe it is a logical and desirable step for DOE to determine whether performance-based contracting is improving performance at lower cost in its national laboratories. Also, we are not suggesting that DOE should abandon its performance-based approach or that it should eliminate performance-based fees in its laboratory contracts. It is also not our intent to show that performance-based contracting should be abandoned if its impacts on the laboratories cannot be measured. We believe that effective implementation of performance-based contracting provisions is dependent on the ability to support the fee amounts paid through a cost and benefit analysis. While it may appear intuitively obvious that defining performance expectations and measuring results are effective management tools, it is not intuitively obvious that the government is receiving a reasonable return on its investments in fee amounts for laboratory contractors. Likewise, while DOE commented that increases in fees reflect, in part, the increased financial risks being borne by contractors, no cost-benefit analysis quantifying this increased financial risk has been completed; thus it is not possible to determine if the proper level of fee is appropriate for the risk assumed. 14. We recognize that laboratory contractor fees are relatively small percentages of the total contract amounts. 
However, these percentages, which translated into $100 million in fees for fiscal year 1998, must be considered in light of the fact that DOE’s laboratories are government owned and that a laboratory contractor’s financial risk is limited. To obtain information on the national laboratories’ contracts, we interviewed officials from the following laboratories: Sandia National Laboratories and Los Alamos National Laboratory in New Mexico; Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and the Stanford Linear Accelerator in California; the National Renewable Energy Laboratory in Colorado; the Idaho National Engineering and Environmental Laboratory in Idaho; the Oak Ridge National Laboratory in Tennessee; and the Argonne National Laboratory in Illinois. We also spoke with laboratory officials in other locations to obtain cost and status information. We asked officials at these laboratories to comment on the impact of performance-based contracting on their operations. We also interviewed Department of Energy (DOE) officials responsible for overseeing these laboratories. These officials were from DOE’s operations offices in Albuquerque, New Mexico; Oakland, California; Oak Ridge, Tennessee; and Chicago, Illinois. We also interviewed DOE area and site office staff located at each of the operations offices we visited. To obtain a broader perspective, we interviewed DOE headquarters officials responsible for developing contracting policy. We conducted our review from September 1998 through April 1999 in accordance with generally accepted government auditing standards.
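To illustrate how the fee arrangements described in this report combine a fixed fee with incentive and award fees placed at risk, the following sketch shows one simplified way an earned fee could be computed. It is an illustration only: the function, dollar amounts, and scoring scheme are hypothetical and are not drawn from any actual DOE laboratory contract.

# Simplified sketch of a performance-based fee calculation (Python).
# All names, amounts, and scores below are hypothetical illustrations.

def earned_fee(fixed_fee, incentive_pool, incentive_score, award_pool, award_score):
    """Return the total fee earned for one performance period.

    fixed_fee       -- amount paid regardless of performance
    incentive_pool  -- at-risk amount tied to objectively measurable targets
    incentive_score -- fraction of measurable targets met (0.0 to 1.0)
    award_pool      -- at-risk amount tied to subjectively judged work
    award_score     -- evaluator's rating of that work (0.0 to 1.0)
    """
    return fixed_fee + incentive_pool * incentive_score + award_pool * award_score

# Hypothetical contract: $2.0 million fixed, $4.0 million at risk as an
# incentive fee, and $1.0 million at risk as an award fee.
total = earned_fee(fixed_fee=2.0e6,
                   incentive_pool=4.0e6, incentive_score=0.90,
                   award_pool=1.0e6, award_score=0.75)
print(f"Fee earned this period: ${total:,.0f}")  # $6,350,000 of a possible $7,000,000

Under an arrangement of this general shape, only the fixed portion is guaranteed; the $5.0 million placed at risk in the example is earned to the degree that performance objectives are met, which is the link between fee and performance that the report describes. A purely fixed-fee contract corresponds to setting both at-risk pools to zero.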
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) progress in implementing performance-based contracting at its national laboratories, focusing on: (1) the status of performance-based contracting in DOE's national laboratory contracts; and (2) DOE's efforts to determine the impact of performance-based contracting. GAO noted that: (1) DOE's use of performance-based contracting for its laboratories is in a state of transition; (2) while all laboratory contracts GAO examined had some performance-based features, GAO found wide variance in the number of performance measures and the types of fees negotiated; (3) about half of the 18 laboratory contracts have performance fees to encourage superior performance--a major goal of performance-based contracting; (4) most of the remaining laboratory contracts are still based on DOE's traditional fixed-fee arrangement in which the fees are paid regardless of performance; (5) DOE has not evaluated the impact of performance-based contracting on its laboratory contractors and, as a result, does not know if this new form of contracting is achieving the intended results of improved performance and lower costs; (6) specifically, DOE has not determined whether giving higher fees to encourage superior performance by its laboratory contractors is advantageous to the government, although GAO recommended in 1994 that DOE develop criteria for measuring the costs and benefits to the government of using higher fees; (7) fees for the laboratories totaled over $100 million for fiscal year 1998; (8) while the contractors were unable to cite measurable benefits achieved by switching to performance-based contracting, they support its goals; and (9) the main benefits from performance-based contracting cited by laboratory contractors were that it has helped DOE clarify what it expects from the contractors and that it has improved communication.
The child welfare system encompasses a broad range of activities, including child protective services (CPS), which investigates reports of child abuse and neglect; services to support and preserve families; and foster care for children who cannot live safely at home. Federal funding for the maintenance expenses of foster care was estimated at about $3.6 billion in 1997. Additional federal funds are provided to states for a wide range of other child welfare and family preservation and support services, and these were estimated at about $500 million in 1997. As an integral part of the child welfare system, foster care is designed to ensure the safety and well-being of children whose families are not caring for them adequately. Beyond food and housing, foster care agencies provide services to children and their parents that are intended to address the problems that brought the children into the system. Agencies are also required to develop a permanency plan for foster children to make sure they do not remain in the system longer than necessary. Usually, the initial plan is to work toward returning the children to their parents. If attempts to reunify the family fail, the agency is to develop a plan to place the children in some other safe, permanent living arrangement, such as adoption or guardianship. According to federal statute, the court must hold a permanency planning hearing no later than 18 months after a child enters foster care. Proposed federal legislation would shorten this time frame to 12 months, in the hope of reducing the time a child spends in foster care. Some states have already adopted this shorter time frame. Children come to the attention of the child welfare system in two ways—either shortly after birth because they were exposed to drugs or alcohol in-utero or sometime later because they have been abused or neglected. Children with substance abusing parents enter foster care in either way. Many state statutes require that drug- or alcohol-exposed infants be reported, and some of these children are subsequently removed from the custody of their parents if an investigation determines that they have been abused or neglected. In some states, prenatal substance exposure itself constitutes neglect and is grounds for removing children from the custody of their parents. Large numbers of children in foster care are known to have been prenatally substance exposed. In an earlier study, we estimated that close to two-thirds of young foster children in selected locations in 1991 had been prenatally exposed to drugs and alcohol, up from about one-quarter in 1986. In both years, cocaine was the most prevalent substance that young foster children were known to have been exposed to, and the incidence of this exposure increased from about 17 percent of young foster children in 1986 to 55 percent in 1991. Moreover, among those in foster care in 1991 who had been prenatally exposed, about one-quarter had been exposed to more than one substance. The actual number of young foster children who had been exposed to drugs or alcohol in-utero may have been much higher because we relied on the mother’s self-reporting of drug or alcohol use or toxicology test results of the mother or infant to document prenatal exposure. Yet, not all children or mothers are tested at birth for drugs, and even when they are tested, only recent drug or alcohol use can be confirmed.
Older children of substance abusing parents also may enter foster care because they have been abused or neglected as a result of their parents’ diminished ability to properly care for them. Abuse and neglect of children of all ages, as reported to CPS agencies, more than doubled from 1.1 million to over 2.9 million between 1980 and 1994, and a Department of Health and Human Services (HHS) report found that the proportion of CPS cases involving substance abuse can range from 20 to 90 percent, depending on the area of the country. For example, we recently found that about 75 percent of confirmed cases of child abuse and neglect in New York City involved substance abuse by at least one parent or caregiver. Many of these parents live in drug-infested and poor neighborhoods that intensify family problems. Neglect is most frequently cited as the primary reason children are removed from the custody of their parents and placed in foster care. According to the Office of Child Abuse and Neglect, the children of parents who are substance abusers are often neglected because their parents are physically or psychologically absent while they seek, or are under the influence of, alcohol and other harmful drugs. Sixty-eight percent of young children in foster care in California and New York in 1991 were removed from their parents as a result of neglect or caretaker absence or incapacity. No other reasons for removal accounted for a large portion of entries of young children into foster care. Physical, sexual, and emotional abuse combined accounted for only about 7 percent of removals of these young children. Parental substance abuse not only adversely affects the well-being of children, it also places additional strain on the child welfare system. The foster care population increased dramatically between 1985 and 1995 and is estimated to have reached about 494,000 by the end of 1995. As a consequence, foster care expenditures have risen dramatically. Between 1985 and 1995, federal foster care expenditures under title IV-E of the Social Security Act increased from $546 million to about $3 billion. We found that a greater portion of foster care expenditures in some locations shifted to the federal government between 1986 and 1991 because much of the growth in the population of young foster children involved poor families who were eligible for federal funding. Parental substance abuse is involved in a large number of cases. We have previously reported that an estimated 78 percent of young foster children in 1991 in selected locations had at least one parent who was abusing drugs or alcohol. Our recent interviews with child welfare officials in Los Angeles County, California, and Cook County, Illinois, have confirmed that the majority of foster care cases in these counties for children of all ages involve parental substance abuse. Officials in these locations stated not only that cocaine use among parents of foster children is still pervasive but that the use of other highly addictive and debilitating drugs, such as heroin and methamphetamines, appears to be on the rise. In addition, officials confirmed that use of multiple substances is common. In addition to the large number of foster care cases involving parental substance abuse, the complexities of these family situations place greater demands on the child welfare system. Most of the families of the young foster children in selected locations whose case files we reviewed had additional children in foster care, and at least one parent was absent.
About one-third of the families were homeless or lacked a stable residence. Some had at least one parent who had a criminal record or was incarcerated, and in some families domestic violence was a problem. In addition, child welfare officials in Los Angeles and Cook Counties recently told us that dual diagnosis of substance addiction and mental illness is common among the parents of foster children. The National Institute of Mental Health reported in 1990 that most cocaine abusers had at least one serious mental disorder such as schizophrenia, depression, or antisocial personality disorder. One case from our file reviews illustrates how complex these family situations can be. It involved a woman with four children, all of whom were removed from her custody as a result of neglect related to her cocaine abuse. The youngest child entered foster care shortly after his birth. By that time, the three older children had already been removed from their mother’s custody. All four of the children were placed with their grandmother. The mother had a long history of cocaine abuse that interfered with her ability to parent. At least two of her four children were known to have been prenatally exposed to cocaine. She also had been convicted of felony drug possession and prostitution, lacked a stable residence, and was unemployed. The father was never located, although it was discovered that he had a criminal record for felony drug possession and sales. Despite the mother’s long history of drug use and related criminal activity, she eventually completed a residential drug treatment program that lasted about 1 year, participated in follow-up drug treatment support groups, and tested clean for over 6 months. In addition, she completed other requirements for family reunification, such as attending parenting and human immunodeficiency virus (HIV) education classes, and she was also able to obtain suitable housing. Although the mother was ultimately reunified with her youngest child, it took a considerable amount of time and an array of social services to resolve this case. The child was returned to his mother on a trial basis about 18 months after he entered foster care. The child welfare system retained jurisdiction for about another year, during which family maintenance services were provided. In addition, many foster children have serious health problems, some of which are associated with prenatal substance exposure, which further add to the complexity of addressing the service needs of these families. We found that over half of young foster children in 1991 had serious health problems, and medical research has shown that many of the health problems that these children had, such as fetal alcohol syndrome, developmental delays, and HIV, may have been caused or compounded by prenatal exposure to drugs or alcohol. Caring for children with these health problems places added demands on parents, who are at the same time recovering from drug or alcohol addictions. Some caseworkers find it difficult to manage the high caseloads involving families with increasingly complex service needs. Some states have experienced resource constraints, including problems recruiting and retaining caseworkers, shortages of available foster parents, and difficulties obtaining needed services, such as drug treatment, that are generally outside the control of the child welfare system. Caseworkers are also experiencing difficulties resolving cases. Once children are removed from the custody of their parents, they sometimes remain in foster care for extended periods.
The problem of children “languishing” or remaining in foster care for many years has become a great concern to federal and state policymakers. While most children are reunified with their parents, adopted, or placed with a guardian, others remain in foster care, often with relatives, until they age out of the system. The circuitous and burdensome route out of foster care—court hearings and sometimes more than one foster care placement—can take years, be extremely costly, and have serious emotional consequences for children. Yet, making timely decisions about children exiting foster care can be difficult to reconcile with the time a parent needs to recover from a substance abuse problem. Current federal and state foster care laws emphasize both timely exits from foster care and reunifying children with their parents. However, even for those who are able to recover from drug and alcohol addictions, recovery can be a difficult process that generally involves periods of relapse as a result of the chronic nature of addiction. Achieving timely exits from foster care may sometimes conflict with the realities of recovering from drug and alcohol addictions. The current emphasis on speeding up permanency decisions will further challenge child welfare agencies. Proposed federal legislation would shorten the time allowed before holding a permanency planning hearing from 18 to 12 months. As of early 1996, 23 states had already enacted shorter time frames for holding a permanency planning hearing than required under federal law. In two of these states, the shorter time frames apply only to younger children. It should be emphasized, however, that while a permanency planning hearing must be held within these specified time frames, the law does not require that a final decision be made at this hearing as to whether family reunification efforts should be continued or terminated. Some drug treatment administrators and child welfare officials in these same locations believe that shorter time frames might help motivate a parent who abuses drugs to recover. However, expedited time frames may require that permanency decisions be made before it is known whether the parent is likely to succeed in drug treatment. While one prominent national study found that a large proportion of cocaine addicts failed when they attempted to stay off the drug, we previously reported that certain forms of treatment do hold promise. In addition, progress has been made in the treatment of heroin addiction through traditional methadone maintenance programs and experimental treatments. However, even when the parent is engaged in drug treatment, treatment may last up to 1 or 2 years, and recovery is often characterized as a lifelong process with the potential for recurring relapses. Some drug treatment administrators in Los Angeles and Cook Counties believe that treatment is more likely to succeed if the full range of needs of the mother is addressed, including child care and parenting classes as well as assistance with housing and employment, which help the transition to a drug-free lifestyle. These drug treatment administrators also stressed how important it was for parents who are reunited with their children to receive supportive services to continue their recovery process and help them care for their children. Some caseworkers in Los Angeles and Cook Counties said that shorter time frames for holding a permanency planning hearing may be appropriate in terms of the foster child’s need for a permanent living arrangement.
However, they also said that the likelihood of reunifying these children with their parents when permanency decisions must be made earlier may be significantly reduced when substance abuse is involved. In their view, the prospects of reunifying these families may be even worse if the level of services currently provided to them is not enhanced. In our ongoing work, we have found that states and localities are responding to the need for timely permanency for foster children through programmatic initiatives and changes to permanency laws. Most of these initiatives and changes to permanency laws are very new, so there is little experience to draw upon to determine whether they will help achieve timely exits from foster care for cases involving parental substance abuse. Furthermore, some of these initiatives and changes are controversial and reflect the challenge of balancing the rights of parents with what is in the best interest of the child, within the context of a severely strained child welfare system. For example, California and Illinois have enacted statutory changes that specifically address permanency for foster care cases involving parental substance abuse. The Illinois legislature recently enacted new grounds for terminating parental rights. Under this statute, a mother who has had two or more infants who were prenatally exposed to drugs or alcohol can be declared an unfit parent if she had been given the opportunity to participate in treatment when the first child was prenatally exposed. California has enacted new statutory grounds for terminating family reunification services if the parent has had a history of “extensive, abusive, and chronic” use of drugs or alcohol and has resisted treatment during the 3-year period before the child entered foster care or has failed or refused to comply with a program of drug or alcohol treatment described in the case plan on at least two prior occasions, even though the programs were available and accessible. While such laws may help judges make permanency decisions when the prospects for a parent’s recovery from drug abuse seem particularly poor, these changes are not without controversy. Some caseworkers and dependency court attorneys in Los Angeles and Cook Counties expressed concerns that a judge may closely adhere to the exact language in the statutes without considering the individual situation, and may disregard the extent to which progress has been made toward recovery during the current foster care episode. States and localities are undertaking programmatic initiatives that may also help to reconcile the goals of family reunification and timely exits from foster care, which may conflict, particularly when parental substance abuse is involved. New permanency options are being explored as are new ways to prevent children from entering foster care in the first place. We previously reported on Tennessee’s concurrent planning program that allows caseworkers to work toward reunifying families, while at the same time developing an alternate permanency plan for the child if family reunification efforts do not succeed. Under a concurrent planning approach, caseworkers emphasize to the parents that if they do not adhere to the requirements set forth in their case plan, parental rights can be terminated. Tennessee officials attributed their achieving quicker exits from foster care for some children in part to parents making more concerted efforts to make the changes needed in order to be reunified with their children. 
In addition, both California and Illinois have federal waivers for subsidized guardianship, under which custody is transferred from the child welfare agency to a legal guardian. In Illinois, CPS cases involving prenatally substance exposed infants can be closed by the child welfare agency without removing the child from the mother’s custody if the mother can demonstrate sufficient parental capacity and is willing to participate in drug treatment and receive other supportive services. One jurisdiction is developing an approach to deliver what its officials describe as enriched services to the parent. Illinois’ new performance contracting initiative provides an incentive for private agencies to achieve timely foster care exits for children by compensating these agencies on the basis of their maintaining a prescribed caseload per caseworker. This necessitates that an agency find permanent living arrangements for a certain number of children per caseworker per year, or the agency absorbs the cost associated with managing higher caseloads. A component of this initiative is the provision of additional resources for improved case management and aftercare services in order to better facilitate family reunification and reduce the likelihood of reentry. Providing enriched services may make it less likely that judges will rule that the child welfare agency has failed to make reasonable efforts to reunify parents with their children and thereby reduce delays in permanency decisionmaking. In summary, children of substance abusing parents come to the attention of the child welfare system either shortly after birth as a result of prenatal substance exposure or later in life when they are found to have been abused or neglected. The families of these children have increasingly complex service needs. Many parents are dually diagnosed with drug or alcohol addictions and mental illnesses, some are involved in criminal activities, some are homeless, and most have additional children in foster care. Burgeoning foster care caseloads entailing these complex family situations have placed enormous strains on the child welfare system. In seeking to achieve what is in the best interest of children, foster care laws emphasize both family reunification and achieving timely exits from foster care for children. Given the time it often takes a person to recover from drug and alcohol addictions, and the current emphasis on speeding up permanency decisions for foster children, these goals may conflict. Reconciling these goals for children whose parents have a substance abuse problem presents a tremendous challenge to the entire child welfare system in determining how to balance the rights of parents with what is truly in the best interest of children. New state and local initiatives may help address this challenge. Through our ongoing work, we are continuing to explore the impact of parental substance abuse on foster care, by, for example, examining parents’ substance abuse histories and their drug treatment experiences, as well as exploring initiatives that might help achieve timely foster care exits for cases involving parental substance abuse. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions from you or other Members of the Subcommittee.
Child Protective Services: Complex Challenges Require New Strategies (GAO/HEHS-97-115, July 21, 1997).
Foster Care: State Efforts to Improve the Permanency Planning Process Show Some Promise (GAO/HEHS-97-73, May 7, 1997).
Cocaine Treatment: Early Results From Various Approaches (GAO/HEHS-96-80, June 7, 1996).
Child Welfare: Complex Needs Strain Capacity to Provide Services (GAO/HEHS-95-208, Sept. 26, 1995).
Foster Care: Health Needs of Many Young Children Are Unknown and Unmet (GAO/HEHS-95-114, May 26, 1995).
Foster Care: Parental Drug Abuse Has Alarming Impact on Young Children (GAO/HEHS-94-89, Apr. 4, 1994).
Drug Abuse: The Crack Cocaine Epidemic: Health Consequences and Treatment (GAO/HRD-91-55FS, Jan. 30, 1991).
Drug-Exposed Infants: A Generation at Risk (GAO/HRD-90-138, June 28, 1990).
Pursuant to a congressional request, GAO discussed the implications of parental substance abuse for children and the child welfare system, and permanency planning for foster care cases involving parental substance abuse, focusing on reviews of the substance abuse histories and drug treatment experiences of parents, as well as initiatives that might help achieve timely exits from foster care for cases involving parental substance abuse. GAO noted that: (1) for many children, it is parental substance abuse that brings them to the attention of the child welfare system; (2) when a newborn has been found to have been prenatally exposed to drugs or alcohol, this often triggers an investigation of suspected child abuse and neglect; (3) in some states, prenatal substance exposure itself constitutes neglect and is grounds for removing a child from its parents; (4) substance abuse can damage a parent's ability to care for older children as well, and can lead to child abuse or neglect; (5) as a result, some of these children are removed from the custody of their parents and placed in foster care; (6) once a child is in the system, parental substance abuse is a significant hurdle in their path out of the system--a hurdle that requires drug or alcohol treatment for the parent in addition to other services for the family; (7) the nature of drug and alcohol addiction means a parent's recovery can take a considerable amount of time; (8) other problems these parents face, such as mental illness and homelessness, further complicate these cases; (9) foster care cases that involve parental substance abuse, therefore, place an additional strain on a child welfare system already overburdened by the sheer number of foster care cases; (10) child welfare agencies are charged with ensuring that foster care cases are resolved in a timely manner and with making reasonable efforts to reunite children with their parents; (11) ideally, both of these goals are to be achieved; (12) however, even for parents who are able to recover from drug or alcohol abuse problems, recovery can be a long process; (13) child welfare officials may have difficulties making permanency decisions within shorter time frames before they know whether the parent is likely to succeed in drug treatment; (14) so, when parental substance abuse is an issue in a foster care case, it may be difficult to reconcile these two goals; and (15) the foster care initiatives and laws that some states and localities are instituting may help reconcile the goals of family reunification and timely exits from foster care for the cases involving parental substance abuse.
The United States has participated in a number of contingency operations since the end of the Persian Gulf War. The Department of Defense (DOD) describes contingency operations as military operations that go beyond the routine deployment or stationing of U.S. forces abroad but fall short of large-scale theater warfare. They include smaller-scale combat operations, peace operations, and other missions, such as humanitarian assistance. Contingency operations involving U.S. military forces have included (1) operations in support of U.N. peace operations in the former Yugoslavia, Haiti, Somalia, and Southwest Asia; (2) increased deployment of military capability to Southwest Asia and South Korea in response to heightened tensions; and (3) other key missions, including humanitarian and refugee assistance, such as support for Rwandan refugees. Two broad cost categories are associated with contingency operations—incremental and total. DOD reports the incremental costs of its participation in contingency operations. As used in this report, “incremental costs” means those costs that would not have been incurred except for the operation. This is the same definition contained in the Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508) as well as in the fiscal year 1996 DOD Authorization Act. Examples of incremental costs are (1) special payments to participating military personnel, such as imminent danger pay; (2) transportation costs to deploy personnel to the area of operations; (3) contractor support for deployed forces; and (4) reconstitution of equipment used in the operation. In addition to incremental costs, DOD incurs costs to maintain a standing military force. These costs include (1) basic military pay; (2) fuel, spare parts, and maintenance to train and sustain military forces; and (3) procurement of military equipment, such as aircraft, ships, and tanks. These costs would be incurred regardless of whether military forces deployed to a contingency operation or remained at their home station, and they are not to be included in contingency cost reporting. The costs to participate in contingency operations have been substantial. Since fiscal year 1992, DOD has reported more than $7 billion in incremental costs. Table 1.1 provides detail on the reported incremental costs. Through fiscal year 1996, DOD has not budgeted for the cost of military operations or contingencies. It has budgeted to be ready to conduct such operations. When the services have had to conduct contingency operations, they initially have had to shift funds within existing appropriations. This is done primarily by using funds appropriated for the same purpose but scheduled for obligation later in the fiscal year. Subsequently, DOD has often sought supplemental funding or reprogramming of appropriated funds to cover its costs. In fiscal year 1993, to pay for the cost of operations in Somalia, DOD asked for and received a supplemental appropriation that also rescinded $750 million from other areas within its budget. Congress provided no new funds. In fiscal year 1994, DOD received two supplementals to fund ongoing contingency operations: one for $1.2 billion in February 1994 and one for $299 million in September 1994. The February 1994 supplemental was funded with a mix of new budget authority and rescission of existing authority.
The September 1994 supplemental provided funding through the Defense Emergency Relief Fund, which was designed to reimburse other appropriation accounts for costs incurred in responding to emergencies. In fiscal year 1995, DOD received a supplemental appropriation that included $2.2 billion intended for contingency operations. This supplemental also contained a mix of new budget authority and rescissions. For fiscal year 1996, the administration is seeking $620 million in supplemental funding and the reprogramming of $991 million in previously appropriated funds for DOD’s incremental costs associated with the deployment of U.S. forces to implement the Bosnia peace agreement. The administration proposes to fully offset the supplemental with a corresponding rescission. DOD’s estimate of supplemental funding needs is based on cost estimates developed before or during the operation. Developing the estimate involves (1) making assumptions about a number of factors, including the number of military personnel needed to conduct the operation, its expected duration, and the logistical requirements to support operations and (2) costing out the assumed force, using standard cost factors that are in turn based on historical costs and, where there are no cost factors, military judgment. For existing operations, DOD projects costs based on costs incurred in the previous year unless it expects an operation to change in size, scope, or duration. The estimate of required supplemental funding is prepared and reviewed within DOD and then reviewed by the Office of Management and Budget (OMB). The reviews can result in some parts of the request being deleted and other costs being added to the request. Overall, the cost estimates tend to increase as the estimates move through the review process. Within DOD, the Office of the Secretary of Defense (OSD) Comptroller reviews service submissions and prepares a program budget decision that serves as the basis for senior level DOD review. In the last 2 years, the estimates we have reviewed have tended to increase during final review within DOD and OMB. For example, in developing the cost estimate for fiscal year 1995 contingency operations, the overall estimate increased by $104 million. Within this amount, the OSD Comptroller reduced proposed funding for part of one operation, Cuba migrant operations, by $3 million and increased funding by $107 million for several other operations, including operations in Haiti. DOD’s estimated costs as expressed in the approved program budget decision are also reviewed within OMB when the administration seeks supplemental funding. OMB, for example, increased the estimate of required fiscal year 1995 funding by $126 million over the amount proposed by DOD. OMB increased many of the estimates for required funding to support contingency operations. The biggest single increase was for operations in Cuba—an increase of $16 million to cover additional operating and maintenance costs and personnel costs. In developing the cost estimate for operations in and around Bosnia for fiscal years 1996 and 1997, the estimated cost increased by $164 million between the preliminary program budget decision and the final one submitted to the Deputy Secretary of Defense. Within the total increase, some estimated costs increased while others decreased. For example, the estimate for support costs for personnel in Bosnia decreased by $20 million, and the estimate for contractor support increased by $32 million. 
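To make the estimating approach described above concrete, the short sketch below costs out a hypothetical assumed force using notional standard cost factors. The force size, duration, and all dollar factors are illustrative assumptions, not DOD cost factors or actual estimates.

```python
# Illustrative sketch of the estimating approach described above: assumptions about
# the force and its duration are costed out with standard (historical) cost factors.
# All names and figures are hypothetical, not DOD's actual factors or estimates.

def estimate_operation_cost(assumptions, cost_factors):
    """Return a rough incremental cost estimate (in dollars) for a contingency operation."""
    troop_months = assumptions["troops"] * assumptions["duration_months"]
    cost = troop_months * cost_factors["special_pay_per_troop_month"]          # imminent danger, family separation, etc.
    cost += assumptions["troops"] * cost_factors["transport_per_troop"]        # lift to the area of operations
    cost += assumptions["duration_months"] * cost_factors["contractor_support_per_month"]
    cost += assumptions["equipment_items"] * cost_factors["reconstitution_per_item"]
    return cost

assumptions = {"troops": 18_000, "duration_months": 12, "equipment_items": 2_500}
cost_factors = {
    "special_pay_per_troop_month": 300,
    "transport_per_troop": 4_000,
    "contractor_support_per_month": 5_000_000,
    "reconstitution_per_item": 10_000,
}

print(f"Estimated incremental cost: ${estimate_operation_cost(assumptions, cost_factors):,.0f}")
```

In practice, each review layer described above revisits both the assumptions and the factors, which is one reason estimates tend to change as they move through DOD and OMB.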
In approving the final decision document, the Deputy Secretary added $96 million to the amount proposed in the final program budget decision to reflect December 1995 special pay for military personnel deployed in and around Bosnia and command and control augmentation initiatives, which had not been included in the estimate submitted to him. In developing the plan for financing the operation, DOD further increased the estimated cost by almost $82 million. Actual costs can vary from the estimate as changes occur to the operation. For example, DOD originally estimated that Vigilant Warrior, the deployment of military forces to Southwest Asia in response to Iraqi troop movements, would cost $462 million in fiscal year 1995. However, the operation ended in December 1994 and cost only $258 million. This was about $204 million below the estimate because the operation concluded sooner than expected as the Iraqis pulled their troops back. On the other hand, DOD estimated that the operation in Haiti would cost $465 million in fiscal year 1995, but through the end of fiscal year 1995, DOD had reported costs of $569 million. This was in part due to unforeseen requirements associated with Operation Uphold Democracy and the related U.N. effort. As previously mentioned, DOD has reported more than $7 billion in incremental costs for its participation in contingency operations since the end of the Gulf War. The number, size, and scope of these operations have increased over the past several years as the United States has responded to events in a number of locations, including Somalia, Haiti, Bosnia, Rwanda, and Iraq. The U.S. participation in implementing the Dayton Peace Accords, which began in December 1995 and included deploying over 18,000 troops to Bosnia-Herzegovina, and the April 1996 evacuation of American civilians from Liberia are the latest examples of the kinds of operations in which the United States has become engaged. Contingency operations such as these begin when the President decides to commit U.S. military forces to respond to developing world conditions that he judges affect U.S. interests. For new operations, DOD develops cost estimates before the actual deployment of military forces or early in the deployment. These estimates are commonly used as the basis for requests for additional funds to cover operation costs. As the operation progresses, the incremental costs that are incurred are reported. Contingency cost reports consequently are important for monitoring the adequacy of funding for such operations as well as for a variety of other purposes. They help DOD monitor the resources necessary to support contingency operations, which enables DOD to determine the implications for readiness when drawing from previously appropriated operation and maintenance funds to cover contingency costs. They also aid DOD in developing requests for supplemental appropriations and reprogrammings. DOD uses them to respond to congressional and public interest in the incremental cost of specific operations and of contingency operations as a whole. Also, they facilitate congressional oversight of the expenditure of appropriated funds and the assessment of the financial impact of contingency operations on DOD’s spending plans. Cost reporting is also important for reimbursement. DOD’s financial management regulation for contingency operations discusses proper identification and reporting of costs to support billings.
This is important in instances where the United States is due reimbursement from the United Nations and for the distribution of reimbursements to applicable organizations. DOD financial guidance requires that the services use existing systems to record their costs of contingency operations. The services record costs as they are incurred during the course of the mission. Therefore, unlike the estimates of costs developed before and during the mission, cost reporting is historical in nature. Some costs are initially recorded by the military unit that incurs the cost. For example, personnel in an Army unit that purchases spare parts to prepare equipment for deployment record the purchase against the financial management code assigned for the operation. Individual costs are consolidated at Army headquarters from the Army’s financial management system. Other costs are determined centrally. For example, Air Force units report their flying hours, but Air Force headquarters translates those flying hours into costs. DOD consolidates the services’ reports and publishes a monthly contingency cost report. In response to a request from the Chairmen of the House Committees on International Relations and National Security, we reviewed (1) the accuracy of DOD’s reported incremental costs for contingency operations and (2) the adequacy of DOD guidance and accounting systems to ensure accurate cost reporting. To review the accuracy of reported costs, we examined the methodology for reporting incremental costs and contingency operation cost reports for fiscal years 1994 and 1995 throughout DOD. We used auditor’s judgment as prescribed by the American Institute of Certified Public Accountants to determine the materiality of reporting inaccuracies. We conducted our work at the OSD (Comptroller); all service headquarters; and major commands within each service that were heavily involved in determining incremental costs, including the U.S. Army Forces Command and U.S. Army, Europe; the Air Force’s Air Combat Command, Air Mobility Command, and U.S. Air Forces in Europe; the Navy’s Commanders in Chief, Atlantic and Pacific Fleets; the Commander, Marine Forces Atlantic; and the U.S. Transportation Command. We also conducted work at individual military units that participated in contingency operations in fiscal years 1994 and 1995. Individual units visited included the XVIII Airborne Corps in Fort Bragg, North Carolina; the 101st Airborne Division (Air Assault) in Fort Campbell, Kentucky; the 25th Infantry Division in Schofield Barracks, Hawaii; the 48th Fighter Wing in Lakenheath, England; the 1st Fighter Wing in Langley, Virginia; and the II Marine Expeditionary Force in Camp Lejeune, North Carolina. At the locations we visited, we discussed how the incremental costs of contingency operations are captured, and we reviewed the costs reported at each location. We also examined the guidance that the locations had received from higher command levels regarding contingency cost reporting and the views of cognizant officials at those locations regarding the adequacy of the available guidance. To assess the adequacy of DOD’s accounting systems and internal controls, we drew from our past work and the work of the DOD Inspector General (IG) and the military audit agencies on the financial audits required under the Chief Financial Officers Act. We also examined DOD’s annual Federal Managers’ Financial Integrity Act (FMFIA) Statements of Assurance, which detail the significant internal control problems of DOD and the military services.
We consulted with DOD’s IG and the military audit agencies regarding planned work to assess the extent of audits of contingency cost reporting. We discussed the extent to which cost reports are reviewed for accuracy at locations visited. We conducted our review between May and December 1995 in accordance with generally accepted government auditing standards. In fiscal years 1994 and 1995, we found inaccuracies in DOD’s reported contingency operation costs that represented about 7 percent of reported incremental costs. The American Institute of Certified Public Accountants states that a financial auditor must consider materiality in the scope of work to be performed and leaves the threshold of what constitutes a material weakness to the auditor’s judgment. In our judgment, a 7 percent known error, as well as potential errors in the unauditable amounts discussed below, constitutes a material weakness in the accounting systems. A material weakness raises questions about the reliability of reported costs. DOD reported incremental costs of about $4.1 billion for contingency operations that occurred in fiscal years 1994 and 1995. We identified about $104 million in overstated costs and about $171 million in understated costs between fiscal years 1994 and 1995. Table 2.1 summarizes these over- and understatements by appropriation. We also found instances where the accuracy of some reported costs could not be determined. It was not feasible to examine all reported cost data, and our results are not statistically projectable. Consequently, we are not able to conclude whether, on balance, the sum of reported incremental costs is overstated or understated. The services do not have financial management systems that capture actual incremental costs. Therefore, the services use various financial management systems to identify obligations and modify them to arrive at their incremental costs. In addition, with the exception of the Army, the services receive input from their subordinate commands. As required by DOD guidance, the services are then to offset their incremental costs by those costs for which funds were appropriated but not spent because of participation in the contingency operation. For example, reported costs should be adjusted for such functions as training not conducted and base operations not provided. However, we found that this was not always being done. We found instances where reported incremental costs were not offset by normal operating costs, such as base operations, that were saved due to participation in contingency operations. For example, Army Headquarters did not apply these offsets when calculating its incremental costs. In fiscal year 1995, the U.S. Army Forces Command, which was responsible for supporting most of the Army forces involved in contingency operations in that year, estimated that military units did not incur about $11 million in funded normal operating costs as a result of deploying to contingency operations. Therefore, these funds were recoverable. For example, U.S. Army Forces Command estimated that Fort Drum, home of the 10th Mountain Division (Light), had not incurred almost $3 million in base operations cost because a large part of the division was deployed to Haiti and so was not incurring some normal day-to-day costs. Although Forces Command officials made this information available to senior command officials, no action was taken to offset reported costs. In another instance, the 25th Infantry Division (Light), part of the U.S.
Army, Pacific saved $1 million from normal operating costs while part of the division was deployed to Haiti because its operating costs during deployment were paid by another Army command. Thus, the division was able to use the $1 million it saved from normal operating costs not incurred to acquire equipment it could not otherwise afford within its annual budget. Neither the division nor higher command levels in the Army adjusted these reported contingency costs to reflect these savings. The Army chose to allow units that had savings to retain and use them for otherwise unfunded needs. In fiscal year 1996, at least one major command, U.S. Army, Europe, has identified offsets and adjusted its funding estimate accordingly. U.S. Army, Europe, which has the lead responsibility for tracking U.S. Army costs resulting from participation in implementing the Dayton Peace Accords in Bosnia, anticipates that it will not incur about $113 million in fiscal year 1996 normal operating costs as a result of its participation in Bosnia. In reporting fiscal year 1996 costs, the Army should offset its incremental costs by this amount. We identified two cases where the services did not adjust reported costs to offset training not conducted due to participation in contingency operations. In fiscal year 1995, the Army’s 25th Infantry Division (Light) reported about $2 million in costs associated with deploying to Haiti. The division was scheduled to participate in a training exercise at the Joint Readiness Training Center. U.S. Army, Pacific, the command that funds the division, had budgeted more than $6 million for the cost of this exercise. Because elements of the division’s brigades were deployed to Haiti, the division did not participate in this exercise. The 25th Infantry Division’s (Light) cost to participate in the Haiti operation was not adjusted to reflect the savings from the missed training exercise. The Department of the Army directed the U.S. Army, Pacific to use $2 million of the $6 million to fund other Army requirements and allowed the command to retain the remaining $4 million, which it used to cover all of the 25th Infantry Division’s (Light) $2 million cost associated with Haiti and to meet otherwise unfunded needs. Consequently, we believe that the division’s $2 million in incremental costs were fully offset by the training costs that were not incurred and there were no incremental costs. Another example of reported incremental costs not offset by training not conducted was found in the Air Force. In fiscal year 1994, the 48th Fighter Wing had to cancel its participation in several training exercises because of contingency operations. U.S. Air Forces, Europe budgeted $1 million for this training. Although the training was canceled, the command made no adjustment to its reported incremental costs. In these examples, neither the Army nor the Air Force adjusted its reported costs to reflect the savings from canceled training. For the Army, this was because Army headquarters did not apply any offsets, including those related to canceled training, to its reported costs for fiscal year 1995. The Air Force example was due to U.S. Air Forces, Europe not offsetting its reported costs that were forwarded to Air Force headquarters, who prepared the final cost report. We found instances where the services overstated flying hour costs. For example, the Air Force overstated fiscal years 1994 and 1995 incremental flying hour costs by $67 million. 
The reasons for the overstatement were that it (1) did not use actual cost factors to report costs in fiscal years 1994 and 1995 and (2) did not offset its flying hour costs by the value of free fuel received from the Kingdom of Saudi Arabia in fiscal year 1994. According to Air Force officials, the Air Force calculated its incremental flying hour costs by applying budgeted cost factors for supplies, repair parts, maintenance, and fuel to the additional hours flown per aircraft above budgeted flying hours. These cost factors reflect the average historical costs to operate each aircraft type, not actual costs. In fiscal year 1994, we determined that U.S. Air Forces, Europe’s actual costs for contingency flying hours were about $19 million below the amount funded by the Air Force. According to command officials, when actual costs are lower than the budgeted factors for an aircraft, the command is allowed to retain the excess. Conversely, when costs are above these factors, the command must absorb the increased cost within its budget. For example, we calculated that Air Combat Command had to absorb $4 million for fiscal year 1994 because its actual costs were higher than the budgeted factors. In the same year, Saudi Arabia provided the Air Force free fuel valued at $45 million to support contingency operations in Southwest Asia. The Air Force failed to deduct the value of this free fuel when computing its incremental flying hour costs for Operation Southern Watch, thereby overstating its costs. In fiscal year 1995, Air Force headquarters revised its methodology for calculating incremental flying hour costs to reflect the receipt of free fuel. However, it continued to compute flying hour costs using budgeted rather than actual cost factors. Consequently, Air Force headquarters computed almost $74 million for Air Combat Command’s incremental flying hour costs, while the command’s actual costs were about $7 million lower because actual costs were less than the budgeted cost factors. We did not have comparable data to calculate U.S. Air Forces, Europe’s costs. We also found that the Navy used either budgeted or actual cost factors when determining its incremental flying hour cost. The Navy Atlantic Fleet used budgeted rates reflecting average historical cost applied to its additional flying hours, while the Pacific Fleet applied actual rates for the most part when determining its costs. The Navy Pacific Fleet provided an example where the actual rates were higher than the budgeted costs allowed for fiscal year 1995. For its F/A-18A, F/A-18C, and P-3C aircraft involved in contingencies, the actual costs per hour for these aircraft were higher than the budgeted costs by $2,632, $126, and $41, respectively. Furthermore, Navy officials also stated that the Navy’s fiscal year 1994 and 1995 flying costs were not adjusted for free aviation fuel. In fiscal year 1994, the Navy reported almost $6 million for aviation fuel used in Operation Southern Watch but did not adjust this amount by the $2 million of free fuel received. Therefore, the Navy overstated its reported fuel costs by 33 percent. Similarly, in fiscal year 1995, the Navy did not adjust its reported costs to reflect $1 million of free fuel received for operations in Southwest Asia. Navy headquarters officials stated that the value of free fuel received for operations was accounted for in its flying hour cost factors, which were used to determine flying hour costs. However, the Navy did not provide documentation to support its position.
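A brief sketch illustrates the flying-hour issues discussed above: computing incremental costs with budgeted rather than actual cost factors, and omitting the value of donated fuel, both change the reported amount. The hours, cost factors, and fuel value below are hypothetical, not DOD data.

```python
# Hypothetical illustration of the flying-hour cost issues discussed above.
# Incremental hours are hours flown above the budgeted program; costs can be
# computed with either budgeted (average historical) or actual cost factors,
# and should be offset by the value of fuel provided at no cost.

def incremental_flying_hour_cost(actual_hours, budgeted_hours, cost_per_hour, free_fuel_value=0.0):
    """Incremental flying-hour cost for hours flown above the budgeted program."""
    incremental_hours = max(actual_hours - budgeted_hours, 0)
    return incremental_hours * cost_per_hour - free_fuel_value

# Notional figures (not DOD data)
actual_hours, budgeted_hours = 12_000, 9_000
budgeted_factor, actual_factor = 3_400, 3_150      # dollars per flying hour
free_fuel = 2_000_000                              # value of donated fuel

reported = incremental_flying_hour_cost(actual_hours, budgeted_hours, budgeted_factor)
corrected = incremental_flying_hour_cost(actual_hours, budgeted_hours, actual_factor, free_fuel)
print(f"Using budgeted factors, no fuel offset: ${reported:,.0f}")
print(f"Using actual factors and fuel offset:   ${corrected:,.0f}")
print(f"Overstatement: ${reported - corrected:,.0f}")
```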
The value of items that have been purchased for contingency operations but not used and either retained by the unit or returned to the DOD supply system for credit is not deducted from reported incremental costs. For example, the U.S. Army, Europe purchased parachutes and rigging supplies to support the airdrop mission over Bosnia during 1994. The cost of this material, over $23 million, was reported as an incremental cost of that contingency operation. In July 1994, the Army Audit Agency found that $12 million of this equipment was excess to the command’s needs and recommended that the command turn in the supplies to receive a credit. However, U.S. Army, Europe officials did not concur with this recommendation and retained the material. They believed that the equipment was necessary for immediate future contingency airdrop operations. Also, they did not want to turn in the inventory because they believed that it made little sense to turn in equipment, receive a limited credit for it, and then buy it back at full price for anticipated operations. Regardless of whether the material was returned, the reported incremental cost should have been adjusted by the $12 million of material not used in the contingency operation. However, neither the command nor Army headquarters adjusted the reported cost. In fiscal year 1995, the Army’s 101st Airborne Division was told that it would deploy to a contingency operation in Southwest Asia. Although the division incurred some costs to prepare to deploy, it ultimately did not deploy. However, it reported incremental predeployment costs of $14 million. The reported costs were not adjusted by Army headquarters to offset the value of items not used. Division officials estimated that about $7 million or more of the reported costs were incremental; the balance was eventually used for normal operating costs. When items that are purchased for use in a contingency are ultimately not used and are returned to the supply system, they are credited to a general operation and maintenance fund, but the reported incremental costs of the contingency are not reduced. According to Army officials, there is no system in place to ascertain which of the supplies turned in were originally purchased for a contingency operation. In addition, the DOD guidance on reporting contingency costs does not specifically require the services to turn in these items as a means to credit the reported costs of the contingency operation or adjust reported costs to reflect the value of items purchased for contingency operations but not used in them. The reported incremental military personnel costs for reservists volunteering for or called to active duty are not adjusted to reflect regular monthly reserve pay that is not being incurred. When federalized, reservists receive active duty base pay plus allowances and special pays, some of which are based on where they are deployed. According to Army and Air Force officials, they do not adjust reported incremental military pay for reservists on active duty in contingencies by the monthly reserve pay they would have received had they not been activated. We believe that because this regular reserve pay is not incurred as a result of the contingency operations, the reported incremental personnel costs should be offset by these amounts. We did not determine the amount of this overstatement. We found instances where services did not report certain incremental contingency costs. The unreported costs included military personnel pay, aviation parts, and procurement.
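A short sketch illustrates the adjustment argued for above: netting the value of items purchased for a contingency but not used in it out of reported incremental costs. The dollar figures are taken from the airdrop-supply and 101st Airborne Division examples; the helper function itself is only an illustration.

```python
# Adjusting reported incremental costs for items purchased for a contingency but not
# used in it, as argued above. Figures come from the examples in the text; the helper
# is illustrative only.

def adjusted_incremental_cost(reported_cost, unused_items_value):
    """Reported incremental cost net of the value of purchased-but-unused items."""
    return reported_cost - unused_items_value

# U.S. Army, Europe airdrop supplies: $23 million reported, $12 million found excess
print(f"Airdrop supplies, adjusted: ${adjusted_incremental_cost(23_000_000, 12_000_000):,.0f}")

# 101st Airborne Division predeployment: $14 million reported, roughly $7 million
# judged incremental (the balance went to normal operating costs)
print(f"101st Airborne, adjusted:   ${adjusted_incremental_cost(14_000_000, 7_000_000):,.0f}")
```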
Military personnel who deploy to contingency operations become eligible for special pay and allowances such as imminent danger pay, certain places pay (formerly known as foreign duty pay), and family separation pay. We found that the Air Force did not report about $81 million of the almost $100 million it estimated as incremental personnel costs for fiscal year 1994. According to Air Force officials, they were not aware that they were required to track and report these costs. Table 2.2 compares the Air Force’s reported incremental personnel costs for fiscal year 1994 with the estimated costs that should have been reported. Air Force headquarters officials said that, in fiscal year 1994, they reported the amount of supplemental funding received for military personnel as their military personnel incremental costs. However, beginning in fiscal year 1995, the Air Force tracked and reported these costs. We found one instance where an Air Force command did not report the value of aviation spare parts used in contingency operations at one of its bases. The cost was instead recorded in the base’s account for normal operations. According to a 1st Fighter Wing official, if parts are not available to support deployed aircraft during contingency operations they are removed or cannibalized from base aircraft and replaced when needed parts are received. Between May 1994 and October 1995, the command’s 1st Fighter Wing estimated that it had used approximately $10 million in spare parts to support operations in Southwest Asia. However, the Wing official stated that this cost was charged to the base maintenance account when replacement parts were cannibalized for contingency operations rather than reported as contingency cost because in their view the base accounting system only allows one account for maintenance. Thus, the contingency-related maintenance costs were commingled with normal base maintenance costs. We found that the Air Force did not report $12 million to replace mobility equipment (tents, field kitchens, water systems, and warehouse and maintenance facilities) that were reported as lost during contingency operations in fiscal year 1995. An Air Force official stated that replacement costs for these items were not considered incremental costs because funding for these items was requested through the budget process and additional expenses were not incurred. However, we believe that these replacement costs should have been reported as incremental costs because the equipment was used in support of contingency operations and in all likelihood would not have been rendered unusable or destroyed were it not for the operation. The Navy and the Air Force are not tracking and reporting the cost of munitions used in contingency operations. During the 1995 bombing campaign in Bosnia, the two services consumed almost $64 million worth of munitions—$48 million for the Navy and $16 million for the Air Force. According to an Air Force official, Air Force munitions were not drawn from excess stock levels. A Navy official said that Navy munitions will have to be replaced. DOD officials told us that services are not reporting the costs of munitions consumed in contingency operations because they absorbed munitions procurement costs in normal budgets and do not consider the value of munitions consumed in contingencies to be incremental costs. However, since these are costs that would not have been incurred were it not for the operation, we believe that they should be included in reported costs. 
The accuracy of reported costs could not be determined for some cost categories relating to active and reserve military personnel and transporting personnel and equipment. Military personnel deployed to contingency operations become eligible for several types of special pays and allowances. These can include imminent danger pay, certain places pay, and family separation pay. These types of special pays and allowances would not be paid were it not for the contingency operation, so they are considered incremental costs. Because the military services cannot readily extract the amount of the special pays and allowances from their military pay system, they estimate the incremental pay costs. This makes it difficult to ascertain the accuracy of these costs, which may be over- or understated. In addition, the military services do not all use the same estimating methodology. In fiscal year 1995, the Air Force applied estimated cost factors to the actual number of deployed active military personnel to derive special pays and allowances. The Army and the Marine Corps also applied estimated cost factors but instead used the estimated number of deployed active personnel. The Navy, on the other hand, reported actual costs from its military pay center but included only imminent danger pay. This is because the Navy’s normal deployment schedules already include special pays and allowances, such as family separation pay. Therefore, those pays are not characterized by the Navy as incremental. Officials from the other services told us that they do not reconcile reported incremental personnel costs to actual payroll cost reports to determine the accuracy of their reported estimated contingency figures. Although the services’ payroll systems calculate how much the personnel deployed should receive in special pays and allowances and are able to distinguish between geographic locations, they are not configured to distinguish between contingency operations and other deployments. On the other hand, the personnel systems are capable of determining who is deployed and the location of this deployment. However, the payroll and personnel systems are not configured or linked to provide the incremental contingency costs. DOD has plans underway to link the payroll and personnel systems. On the basis of discussions with Defense Finance and Accounting Service officials (who operate the military payroll system) and OSD officials in the military personnel arena, we believe that linking the payroll and personnel systems to allow the extraction of actual special pay and allowance costs may only involve providing one additional space in computer records to allow for a code indicating that a service person is deployed to a contingency operation. Until the two systems are linked, it is not possible to test the accuracy of estimated costs. The services also use estimates rather than actual data to derive incremental personnel costs for reservists on active duty in contingency operations. The Air Force and the Army multiply actual numbers of reservists participating in contingency operations by estimated base salary as well as by the special pays and allowances. According to Navy officials, the Navy did not report any incremental personnel costs for the reservists who supported contingency operations in fiscal years 1994 and 1995.
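A minimal sketch, assuming hypothetical record layouts, shows the kind of linkage discussed above: if pay records carried a contingency code, actual special pays and allowances could be summed by operation rather than estimated.

```python
# Hypothetical sketch of linking personnel deployment data to pay records with a
# contingency code so that actual special pays and allowances can be extracted by
# operation rather than estimated. Record layouts and amounts are illustrative only.

from collections import defaultdict

# Personnel system: who is deployed and under which contingency code
deployments = [
    {"member_id": 1, "location": "Southwest Asia", "contingency_code": "SOUTHERN WATCH"},
    {"member_id": 2, "location": "Bosnia",         "contingency_code": "JOINT ENDEAVOR"},
]

# Pay system: special pays and allowances actually paid
pay_records = [
    {"member_id": 1, "pay_type": "imminent danger pay",   "amount": 150},
    {"member_id": 1, "pay_type": "family separation pay", "amount": 75},
    {"member_id": 2, "pay_type": "imminent danger pay",   "amount": 150},
    {"member_id": 3, "pay_type": "family separation pay", "amount": 75},  # not a contingency deployment
]

code_by_member = {d["member_id"]: d["contingency_code"] for d in deployments}

incremental_pay = defaultdict(float)
for rec in pay_records:
    code = code_by_member.get(rec["member_id"])
    if code:                                   # count only pays tied to a contingency deployment
        incremental_pay[code] += rec["amount"]

for code, total in incremental_pay.items():
    print(f"{code}: ${total:,.2f} in actual special pays and allowances")
```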
The Army and the Air Force question some of the bills they paid and reported as incremental contingency costs for transporting personnel and equipment, which may have resulted in overstated costs. Allowable transportation costs include moving personnel, material, equipment, and supplies to the contingency area. The services pay the Military Traffic Management Command for port handling, the Military Sealift Command for sealift, and the Air Mobility Command for airlift. If billing errors exist, reported incremental transportation costs are also inaccurate. The U.S. Army Forces Command identified $11 million in transportation charges it believed to be invalid for airlift, sealift, and port handling services in fiscal year 1995. This represented 16 percent of the command’s total contingency transportation charges for that year. In the review of bills, command officials found problems such as unidentified customers, duplicate charges, and missing data that prevented the validation of charges. For example, officials estimated that over 50 percent of fiscal year 1995 contingency port handling bills had disputed charges. The command asked transportation providers to respond to the disputed charges; however, responses have been minimal and few credits have been applied. Air Force officials also told us that they have noticed problems with transportation bills but do not have the resources to research and validate the bills. In fiscal year 1995, the Air Combat Command incurred $17 million in airlift contingency charges. Command officials believe that the airlift bills may include errors and reported incremental costs may be inaccurate. For example, command officials told us that some of their bills may include other services’ airlift mission charges. We found a number of inaccuracies in DOD’s fiscal years 1994 and 1995 reported incremental costs, which resulted in costs being overstated, understated, or unable to be determined. In our opinion, the magnitude of these inaccuracies is material to the reported costs. With regard to incremental military pay costs, we believe DOD’s plan to link the military pay and military personnel systems will be helpful in capturing actual military pay incremental costs. The DOD Comptroller and the service secretaries have not developed sufficiently specific guidance for identifying contingency operation costs and the methodology for calculating them. DOD and the services also have not taken steps to ensure that cost development methodologies are consistent and that key officials involved in accounting for incremental cost reporting are made aware of guidance. In addition, DOD and the services have not adequately ensured that cost reports are complete, accurate, timely, and appropriately reviewed. This would include ensuring that costs are properly recorded and classified and that supporting documentation is maintained. Further, DOD accounting systems are classified as high risk and cannot reliably determine incremental costs. In February 1995, DOD added a chapter on contingency operations to its financial management regulations. Prior to the addition of this chapter, contingency cost guidance consisted of a patchwork of messages and directives issued during and since the Gulf War. The new chapter directs the services to provide monthly incremental cost reports in accordance with DOD policy and makes the services responsible for accurate cost reporting.
The chapter also requires that controls, accounting systems, and procedures identify and record costs incurred in support of contingency operations; directs that the services use the project code established for an operation; makes service Comptrollers responsible for determining incremental costs; sets out cost reporting requirements; provides broad guidelines for determining costs; and requires that reported costs be adjusted for offsets. DOD guidance is vague about what costs to include as contingency costs and what methodology to use in calculating these costs. Because DOD’s guidance is vague, the services and some commands within the same service calculate costs differently. The guidance is also vague on applying internal control standards to test the accuracy and completeness of reported costs. Several service officials at the reporting level stated that more specific guidance was needed. DOD officials stated that they are willing to clarify the guidance to the services if this will assist them in their cost reporting. To date, Army guidance has consisted of a series of messages regarding reporting procedures for specific contingency operations, funding responsibilities, and reimbursement from the United Nations. The guidance also notes broad cost categories but does not contain specific guidance as to what costs to report and how to calculate them. The Army has since drafted some implementing guidance that discusses how to calculate offsets and requires major commands to certify costs, but as of April 1996, this guidance had not been approved. Air Force guidance has been limited to describing reporting formats and deadlines. Neither the Navy nor the Marine Corps has developed any formal instructions to guide its subordinate commands. They have relied, for the most part, on oral instructions and electronic mail messages, which have also been limited to reporting procedures. Lack of specific guidance has resulted in the services’ not offsetting or reporting certain costs. For example, while the guidance requires reported costs to be adjusted for offsets, such as training not conducted, it does not specify that services’ offsets should include the value of items that have been purchased for contingency operations but not used. U.S. Army, Europe and the 101st Airborne Division did not adjust their reported costs to reflect the value of supplies that were unused in the contingency. DOD’s February 1995 guidance also is not specific about reporting costs for training conducted to prepare for a contingency. As a result, the Army did not report these costs. In fiscal year 1994, aviation elements from the XVIII Airborne Corps incurred almost $1 million in training costs for Haiti that were not included in the Army’s contingency cost report. Also in fiscal year 1994, U.S. Army, Europe officials stated that they did not report predeployment training costs incurred for one division to prepare for Operation Able Sentry in Macedonia. Nonetheless, the Army is now tracking and reporting predeployment training costs associated with the preparation for operations in the former Yugoslavia. The DOD guidance does not specify how the services should calculate incremental personnel costs. It only provides examples of allowable categories of personnel costs such as family separation allowance and imminent danger pay. Consequently, the services calculate their incremental personnel costs differently.
The Navy does not include all the special pays and allowances, the Air Force uses actual numbers of deployed personnel to estimate these costs, and the Army and the Marine Corps use an estimate of the number of persons deployed. In some cases, commands within the services also calculate some costs differently. For example, the Navy’s Atlantic and Pacific fleets used different methods to calculate flying hour costs. According to Navy officials, the Atlantic Fleet computed its incremental flying hour costs by determining its incremental hours and multiplying them by budgeted cost factors, while the Pacific Fleet used mostly actual costs. This is because Pacific Fleet officials believe that the difference between actual and budgeted cost factors allows them to fund unforeseen maintenance costs that do not appear until later fiscal years due to increased flying hours in support of contingency operations. Again, using different methods results in inconsistent cost reporting. Our Standards for Internal Controls identify controls that are essential for providing the greatest assurance that the objective of accurate reporting will be achieved. These include maintaining supporting documentation, properly recording and classifying costs, and having adequate supervision and review of cost reporting. We found that these generally accepted internal control standards were not always followed. For example, Navy Pacific Fleet officials were unable to support about $2 million of their reported $48 million in incremental costs for Operation Southern Watch because supporting documentation was unavailable. Further, one U.S. Army, Europe division did not capture and report its predeployment training costs for Macedonia and Rwanda. It also recorded some supply costs to the wrong operation. We also found that adequate rigor was not always applied in the review of reported incremental costs. For example, in fiscal year 1994, the Navy inappropriately reported an $8 million reimbursable cost as an incremental cost. The Navy was reimbursed for this cost by the Army. The Army also reported this cost, so the same cost was reported twice. For costs to be reported accurately and in accordance with guidance, service officials involved in the cost reporting need to be aware of this guidance. However, during our visits, we found that some officials involved in accounting for incremental cost reporting at the service, major command, and unit levels were unaware of the February 1995 DOD guidance on contingency cost reporting. This included officials at various U.S. Army, Europe subordinate units; the Navy Surface Forces, Pacific Fleet; and the U.S. Marine Corps Headquarters and Commander in Chief, Marine Forces Atlantic. DOD’s Financial Management Regulation notes that data from existing systems shall be used as applicable to determine contingency operation costs and that cost accounting systems will not be established solely to determine incremental costs. Consequently, DOD must develop its cost reports within its existing systems. However, problems exist with DOD’s accounting systems and the reliability of its data. The systems do not provide a firm foundation for DOD’s managers to use in determining incremental costs for contingency operations. The problems we identified in contingency cost reporting stem, in part, from the long-standing and pervasive problems that plague DOD’s accounting systems.
DOD’s 1994 and 1995 Federal Managers’ Financial Integrity Act (FMFIA) Statements of Assurance admit long-standing weaknesses in DOD’s financial accounting process and systems. Under the FMFIA and implementing OMB guidance, the Secretary of Defense is required to provide this annual Statement of Assurance to the President and the Congress on whether the Department’s system of internal controls, taken as a whole, complies with the act’s requirements. DOD’s Statements of Assurance cite its financial accounting process and systems as a high-risk area. The Statements note that DOD’s operating accounting systems are not always in compliance with generally accepted government accounting standards or with internal management control objectives. As a result, the quality of financial information is not always reliable, and financial management practices are sometimes inadequate. Additionally, compilation of accurate financial statements is impeded, in part, by the lack of reliable information. Some broad categories of systemic problems reported include inadequate financial property records, unreliable accounting and payroll information, inaccurate or incomplete cost accounting information, improper or incomplete accrual accounting, improper reporting of the results of financial operations, and lack of financial system integration. DOD has made numerous efforts to improve its financial management activities. A significant action was the establishment of a single DOD finance and accounting organization, the Defense Finance and Accounting Service, in January 1991. Its mission is to implement standard accounting policies and procedures throughout DOD. In May 1994, DOD announced the consolidation of over 300 finance and accounting sites into 26 locations. The Defense Finance and Accounting Service also has responsibility for consolidating service incremental cost reports for contingency operations. We stated in a November 1995 testimony that DOD does not yet have adequate financial management processes in place to produce the information it needs to support its decision-making process. We further stated that no military service or major DOD component has been able to withstand the scrutiny of an independent financial statements audit, a requirement established by the Chief Financial Officers Act. DOD’s financial systems are not integrated and do not provide reliable information. Our review of the Navy’s financial reports, the Army Audit Agency’s review of the Army’s financial statements, and the Air Force Audit Agency’s review of the Air Force’s financial statements identified significant accounting and reporting problems resulting in unauditable financial statements and reports for fiscal year 1994. As an example, control practices used in the Navy’s financial operations were fundamentally deficient: accounts and records were not routinely reconciled; periodic physical inventories were not always conducted; undocumented adjustments were common; and the reasonableness of account balances, adjustments, and data presented in financial reports was not regularly reviewed. DOD has acknowledged that its financial management systems are antiquated and cannot be relied upon to provide DOD management and the Congress with accurate and reliable financial information for use in decision-making. DOD’s FMFIA Statements of Assurance also noted that financial data in DOD is inadequately maintained within current accounting systems. 
In turn, the financial information and statements do not always adequately assist the management functions of budget formulation, budget execution, and proprietary and financial reporting with a high degree of reliability and confidence. An important deficiency cited was the lack of flexibility of most finance and accounting systems to rapidly respond to changing customer bases, legislative changes, contingency operations, management initiatives, requirements from other government central agencies, and other changes. Accordingly, DOD systems are classified as high risk by the fiscal year 1994 and 1995 Statements of Assurance and by OMB. Our February 1995 High Risk report also cited DOD’s serious and long-standing financial management problems as a high-risk area especially vulnerable to waste, fraud, abuse, and mismanagement. Besides the unreliability of its basic accounting data, DOD does not have cost accounting systems that can reliably state what was actually expended in support of a contingency operation. Existing systems report what was obligated to be spent. Previous audits have shown that such obligations can differ significantly from actual disbursements, resulting in DOD’s paying billions of dollars in the course of normal business without being able to validate payments. For example, vendors were paid $29 billion that cannot be matched to supporting documentation to determine if payments were proper. Such errors can affect the accuracy of the reported contingency costs extracted from these systems. DOD IG and service audits have also cited material weaknesses that exist in accounting and related systems. For example, billing data was not always available, reliable, or accurate enough to determine the authenticity of claims. To improve the accuracy of cost reporting, we recommend that the Secretary of Defense direct the DOD Comptroller to clarify existing guidance to specify what costs to include in contingency operations cost reporting. At a minimum, the guidance should specify the methodology to (1) calculate these costs and (2) adjust costs to reflect offsets. We further recommend that the Secretary of Defense direct the service secretaries to develop comprehensive implementing instructions from the DOD Comptroller’s guidance. This guidance should specify how the services expect reporting units to apply internal control standards so that incremental costs are adequately supported, recorded, and reviewed. DOD generally concurred with a draft of this report as being a reasonable portrayal of the problems inherent in accounting and reporting for contingency operations. DOD agreed with our recommendations and stated that it will attempt to clarify existing guidance to specify which costs to include in the cost reports and to better explain methodologies to be used and offsets to be incorporated. Regarding our finding that DOD’s guidance is general and incomplete and excludes discussion of offsetting certain costs, DOD commented that chapter 23, volume 12, of the DOD Financial Management Regulation discusses the intent that reported costs be adjusted for offsets, and while the regulation does not list every example of when offsets should be taken, it provides several examples of such offsets. Further, DOD noted that its regulation is intended to impart general guidance for use by the components, not detailed instructions. Such instructions, in DOD’s view, are best left to the components to formulate to meet their individual requirements and circumstances.
DOD’s Financial Management Regulation does require that reported costs be adjusted for offsets, and it provides some examples of types of offsets. However, DOD’s guidance remains vague about what costs to include as contingency costs and what methodology to use in calculating these costs. In addition, the services have not issued comprehensive implementing instructions. We have clarified our discussion in the executive summary of this report to reflect our overall concern with DOD’s existing guidance.
Pursuant to a congressional request, GAO reviewed the reliability of the Department of Defense's (DOD) reported costs for contingency operations, focusing on the: (1) accuracy of reported incremental costs; and (2) adequacy of DOD guidance and accounting systems to ensure accurate cost reporting. GAO found that: (1) DOD overstated about $104 million in incremental costs and understated or failed to report about $171 million in incremental costs; (2) DOD overstated costs primarily because the services failed to adjust reported incremental costs for normal costs they did not incur; (3) DOD understated costs primarily because the services failed to report certain incremental personnel and munitions costs; (4) GAO could not determine the accuracy of some reported costs because the services could not readily identify special pay and allowances related to contingency operations; (5) DOD revised its guidance for developing and reporting incremental costs, but the guidance generally remains vague and incomplete; (6) the financial management systems DOD uses to develop incremental cost data are deficient, unreliable, and a high-risk area; and (7) the problems in DOD's reporting of incremental costs are indicative of a material weakness in DOD accounting systems.
Ex-Im is an independent agency operating under the Export-Import Bank Act of 1945, as amended. Its mission is to support the export of U.S. goods and services, thereby supporting U.S. jobs. Ex-Im’s charter states that it should not compete with the private sector. Rather, Ex-Im’s role is to assume the credit and other risks that the private sector is unable or unwilling to accept, while still maintaining a reasonable assurance of repayment. When private-sector lenders reduced the availability of their financing after the 2007 to 2009 financial crisis, demand for Ex-Im products correspondingly increased. According to Ex-Im data, the amount of financing Ex-Im authorized increased from $12.2 billion in fiscal year 2006 to $35.8 billion in fiscal year 2012, before declining to $27.3 billion in fiscal year 2013 and $20.5 billion in fiscal year 2014. Though smaller than the fiscal year 2012 peak, Ex-Im’s fiscal year 2014 total authorizations are a 68 percent increase in nominal terms over its total authorizations in fiscal year 2006. Over the same period, Ex-Im’s financial exposure (outstanding financial commitments) increased from $57.8 billion to $112 billion, or by 94 percent in nominal terms. According to U.S. budget documents, Ex-Im’s number of full-time equivalent employees grew from 380 to 397 from fiscal year 2006 through fiscal year 2014, an increase of about 4.5 percent. Ex-Im offers export financing through direct loans, loan guarantees, and insurance. Ex-Im’s loan guarantees cover the repayment risk on the foreign buyer’s loan obligations incurred to purchase U.S. exports. Loan guarantees are classified as short, medium, or long term. Although the number of Ex-Im’s short-term (working capital) guarantees greatly exceeds the number of its medium- and long-term loan guarantees, long- term loan guarantees account for the greatest dollar value of Ex-Im loan guarantees. Ex-Im is one of several ECAs worldwide that provide export financing support. Other countries’ ECAs range from government agencies to private companies contracted by governments. Most, including Ex-Im, are expected to supplement, not compete with, the private market. An international agreement, the Organisation for Economic Co-operation and Development (OECD) Arrangement on Officially Supported Export Credits, governs various aspects of U.S. and other member countries’ ECAs, but increasing activity of nonmembers threatens its ability to provide a level playing field for exporters. Several agreements have been made that decrease subsidies and increase transparency among ECAs. However, these agreements apply to participant ECAs, and important emerging countries, including China, are not part of the OECD arrangement. Ex-Im faces multiple risks when it extends export credit financing, including: Credit risk: the risk that an obligor may not have sufficient funds to service its debt or be willing to service its debt even if sufficient funds are available. Political risk: the risk of nonrepayment resulting from expropriation of the obligor’s property, war, or inconvertibility of the obligor’s currency into U.S. dollars. Market risk: the risk of loss from declining prices or volatility of prices in the financial markets. Concentration risk: risk stemming from the composition of a credit portfolio, for example through an uneven distribution of credits within a portfolio. Foreign-currency risk: the risk of loss as a result of appreciation or depreciation in the value of a foreign currency in relation to the U.S. 
dollar in Ex-Im transactions denominated in that foreign currency. Operational risk: the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events. During underwriting, Ex-Im reviews a transaction and assigns it a risk rating based on its assessment of the creditworthiness of the obligors and to establish whether there is a reasonable assurance of repayment. Ex-Im also manages risks through (1) monitoring and restructuring—updating risk ratings and restructuring individual transactions with credit weaknesses to help prevent defaults and increase recoveries and (2) recovery of claims—collecting on the assets of the obligors or the collateral for a transaction that defaults. While demand for its services generally drives Ex-Im’s business, Congress has mandated that Ex-Im support specific objectives and operate within certain parameters. For example, since the 1980s, Congress has required that Ex-Im make available a percentage of its total export financing each year for small business. In 2002, this requirement increased from 10 percent to 20 percent of total authorizations. Congress further instructed that Ex-Im promote the expansion of its financial commitments in sub-Saharan Africa. In annual appropriation acts, Congress has directed that “not less than 10 percent of the aggregate loan, guarantee, and insurance authority available to …should be used for renewable energy technologies or end-use energy efficiency technologies”—which we refer to as the renewable energy mandate. Congress has also imposed a limit, currently $140 billion, on Ex-Im’s total aggregate outstanding amount of financing, referred to as the exposure limit. In addition, Ex-Im must provide financing on a competitive basis with other export credit agencies, minimize competition in government- supported export financing, and submit annual reports to Congress on its actions. In six reports on Ex-Im issued since March 2013, we presented findings and made 16 recommendations to improve Ex-Im’s operations, summarized in this testimony in three broad areas: (1) portfolio risk management, (2) underwriting and fraud prevention processes, and (3) exposure forecasting and reporting on estimates of its impact on U.S. jobs. Our recent work has produced several findings and recommendations about how Ex-Im manages risks related to the overall size and composition of its portfolio. Our March 2013 report on risk management and our May 2013 report on exposure, risk, and resources made a total of six recommendations in this area. Ex-Im agreed with all of these recommendations and has taken action to implement them. Ex-Im calculates credit subsidy costs and loss reserves and allowances with a loss estimation model that uses historical data and takes credit, political, and other risks into account. Consistent with industry practices, Ex-Im added qualitative factors to the model in 2012—including a factor to account for changes in global economic conditions— to adjust for circumstances that may cause estimated credit losses to differ from historical experience. However, in March 2013, we concluded that the short-term forecast Ex-Im used to account for global economic changes might not be appropriate for adjusting estimated defaults for longer-term products and could lead to underestimation of credit subsidy costs and loss reserves and allowances. We recommended that Ex-Im assess whether it was using the best available data for adjusting its loss estimates. 
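To illustrate the type of calculation at issue, the following is a minimal sketch of a ratings-based loss estimate with a qualitative adjustment factor, using hypothetical ratings, rates, and exposures; it is not Ex-Im’s actual model.

```python
# Illustrative ratings-based loss estimate with a qualitative adjustment
# factor. The ratings, default rates, recovery rates, exposures, and the
# adjustment factor are hypothetical; this is not Ex-Im's actual model.

HISTORICAL_DEFAULT_RATE = {"low": 0.01, "medium": 0.04, "high": 0.12}
RECOVERY_RATE = {"low": 0.60, "medium": 0.50, "high": 0.35}

# A factor above 1.0 raises estimated losses when near-term global economic
# conditions are expected to be worse than the historical average.
QUALITATIVE_FACTOR = 1.15

def estimated_loss(exposure: float, rating: str) -> float:
    """Expected loss = exposure x default probability x loss given default,
    scaled by the qualitative adjustment."""
    loss_given_default = 1.0 - RECOVERY_RATE[rating]
    return exposure * HISTORICAL_DEFAULT_RATE[rating] * loss_given_default * QUALITATIVE_FACTOR

portfolio = [  # (exposure in dollars, risk rating) -- hypothetical transactions
    (50_000_000, "low"),
    (20_000_000, "medium"),
    (5_000_000, "high"),
]

reserve = sum(estimated_loss(amount, rating) for amount, rating in portfolio)
print(f"Estimated loss reserve: ${reserve:,.0f}")
```

In a calculation of this kind, the forecast used to set the qualitative factor scales estimated losses across the entire portfolio, which is why the choice of data for that adjustment matters for longer-term products.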
In November 2013, Ex-Im incorporated a longer-term forecast of global economic change into its loss estimation model. As a result, we consider this recommendation implemented and closed. In our March 2013 report, we also found that Ex-Im was not maintaining the data it needed to compare the performance of newer transactions with older transactions at comparable points in time, a type of analysis recommended by federal banking regulators. This analysis, known as vintage analysis, can help evaluate the credit quality of recent transactions by comparing their early performance with the early performance of older transactions. As such, it can provide early warning of potential performance problems in newer business. Ex-Im’s default rate declined steadily from about 1.6 percent as of September 30, 2006, to 0.29 percent as of September 30, 2012, and, more recently, Ex-Im reported a further decline to 0.17 percent as of the end of December 2014. However, we concluded that this downward trend should be viewed with caution because Ex-Im’s portfolio contained a large volume of recent transactions that had not reached their peak default periods. We recommended that Ex-Im retain point-in-time performance data to compare the performance of newer and older business and enhance loss modeling. Ex-Im began retaining such data in 2013. We therefore consider this recommendation implemented and closed.
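For illustration only, the sketch below shows the basic mechanics of a vintage analysis using hypothetical transaction records; it is not based on Ex-Im data and does not reproduce any specific regulator’s methodology.

```python
# Minimal sketch of a vintage analysis: transactions are grouped by the year
# they were authorized, and default rates are compared at the same age
# (months since authorization). The records below are hypothetical.

# (authorization_year, months_to_default) -- None means no default so far
records = [
    (2010, 18), (2010, None), (2010, None), (2010, 40),
    (2013, None), (2013, 11), (2013, None), (2013, None),
]

def default_rate_at_age(data, vintage, age_months):
    """Share of a vintage's transactions that had defaulted by a given age."""
    cohort = [months for year, months in data if year == vintage]
    defaults = sum(1 for months in cohort if months is not None and months <= age_months)
    return defaults / len(cohort)

# Comparing vintages at the same age avoids the bias that arises when a
# portfolio is dominated by young transactions that have not yet reached
# their peak default period.
for vintage in (2010, 2013):
    print(vintage, default_rate_at_age(records, vintage, age_months=24))
```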
Ex-Im is not bound by federal banking regulator guidance, but it faces risks similar to those faced by institutions that are subject to such guidance (see Office of the Inspector General, Export-Import Bank of the United States, Report on Portfolio Risk and Loss Reserve Allocation Policies, OIG-INS-12-02 (Washington, D.C.: September 2012)). We concluded that reporting stress test information to Congress would support congressional oversight and be consistent with federal internal control standards for effective external communication. We also found that Ex-Im had begun to implement stress testing and recommended that Ex-Im report its stress test scenarios and results to Congress. Ex-Im began reporting its scenarios and results in quarterly reports to Congress on default rates, beginning with the report for the fourth quarter of 2013. In that report, Ex-Im described the stress test scenarios and provided some information about results. Hence, we consider this recommendation implemented and closed. In our May 2013 report, we found that Ex-Im had not routinely reported the performance or risk ratings of its subportfolios for the congressional mandates on small business, sub-Saharan Africa, and renewable energy, though these transactions generally were more risky than Ex-Im’s overall portfolio. We recommended that Ex-Im routinely report to Congress the financial performance of subportfolios supporting congressional mandates. Ex-Im began reporting this information in its default rate report to Congress for the quarter ending June 30, 2013. As a result, we consider this recommendation implemented and closed. In our May 2013 report on exposure, risk, and resources, we also recommended that Ex-Im develop workload benchmarks, monitor workload against them, and develop control activities for mitigating risks when workloads approach or exceed those benchmarks. In response, Ex-Im engaged a contractor to assess its workload and has since reorganized to improve efficiency. Additionally, Ex-Im has agreed to implement, and in some cases has begun implementing, suggestions by the contractor to mitigate risks of future workload increases. As a result, we consider this recommendation implemented and closed. In our May 2013 report, we found that Ex-Im expected that administrative resource constraints might prevent it from meeting its congressionally mandated target for small business export financing. The target is tied to a percentage of the dollar value of Ex-Im’s total authorizations. Although Ex-Im has dedicated resources to support the mandate, as Ex-Im authorizations have grown, the corresponding growth in the value of the target has outpaced Ex-Im’s increasing support. According to Ex-Im officials, processing small business transactions and bringing in new small business customers is resource-intensive. We concluded that it was important for Ex-Im to communicate to Congress the effect of percentage-based mandates on its operations, as well as the potential impacts such mandates might have on Ex-Im’s resources and operations. We recommended that Ex-Im provide Congress with additional information on the resources associated with meeting its percentage-based mandates. Ex-Im agreed and told us it planned to provide information on resources associated with meeting such mandates in its fiscal year 2016 budget submission. Ex-Im’s fiscal year 2016 Congressional Budget Justification includes both information on the resources associated with these mandates and Ex-Im’s plans to hire additional staff to help meet them. As a result, we consider this recommendation implemented and closed. In our July 2014 report on aircraft financing (GAO-14-642R), we reported that Ex-Im supported deliveries of 789 Boeing large commercial aircraft, while European ECAs supported deliveries of 821 Airbus large commercial aircraft. Buyers of large commercial aircraft have also used a number of non-ECA financing options for procuring wide-body jets. From 2008 through 2013, Ex-Im and European ECAs supported 26 percent of large commercial aircraft deliveries. Our most recent mandated report, in September 2014, found that Ex-Im had implemented many key aspects of its underwriting process but identified weaknesses in certain procedures. We made six recommendations to Ex-Im to enhance its loan guarantee underwriting process and further document aspects of its underwriting and processes to detect, prevent, and investigate fraud. Our August 2014 report on Ex-Im’s monitoring of dual-use exports also found weaknesses in Ex-Im’s procedures. Our review of a statistical sample of loan guarantees indicated that Ex-Im had implemented many key aspects of the underwriting process as required by its Loan, Guarantee, and Insurance Manual. However, the manual did not (1) include certain procedures or sufficiently detailed instructions to verify compliance with Ex-Im’s requirements and consistency with federal guidance, such as a procedure to verify that applicants did not have delinquent federal debt; (2) include instructions for loan officers to use credit reports and for the inclusion of all required documents and analyses in the loan file prior to approval; and (3) call for assessments of collateral, as required by federal guidance, for certain loan guarantee transactions prior to approval. Furthermore, Ex-Im did not have mechanisms to verify compliance with certain established procedures, including documenting certain loan guarantee eligibility procedures. We recommended that Ex-Im take the following actions: Develop and implement procedures, prior to loan guarantee approval, for (1) verifying that transaction applicants are not delinquent on federal debt and (2) performing assessments of collateral for nonaircraft medium- and long-term loan guarantee transactions. Establish mechanisms to oversee compliance with Ex-Im’s existing procedures, prior to loan guarantee approval, for (1) obtaining credit reports for borrowers or documenting why they were not applicable, (2) documenting certain eligibility procedures, and (3) documenting the analysis of country exposure.
Develop and implement detailed instructions, prior to loan guarantee approval, for (1) preparing and including all required documents or analyses in the loan file and (2) using credit reports in the risk assessment and due diligence process. Update the Character, Reputational, and Transaction Integrity review process to include the search of databases to help identify transaction applicants with delinquent federal debt that would then not be eligible for loan guarantees. As of April 2015, Ex-Im has revised its Loan, Guarantee, and Insurance Manual in response to the first three recommendations from our September 2014 report. We consider the second and third of these recommendations to be implemented and are taking actions to close them. With respect to the first of these recommendations, we are continuing to review Ex-Im’s actions. In addition, Ex-Im officials have stated that they have been working with the Department of the Treasury on the fourth recommendation to determine the technical feasibility of an automated method to access a Treasury database to verify that applicants are not delinquent on federal debt. We are currently reviewing Ex-Im’s actions related to this recommendation. Our September 2014 report additionally found weaknesses in Ex-Im’s documentation of aspects of its underwriting and overall procedures related to fraud. We found that Ex-Im had not documented its risk-based approach for scheduling examinations to monitor lenders with delegated authority to approve guaranteed loans. In addition, while Ex-Im had processes to prevent, detect, and investigate fraud, it had not documented its overall fraud processes. Such documentation is recommended by several authoritative auditing and antifraud organizations. We therefore recommended that Ex-Im document: its risk-based approach for scheduling delegated authority lender examinations, and its overall fraud-prevention process, including the roles and responsibilities of Ex-Im divisions and officials that are key participants in Ex-Im’s process. As of April 2015, Ex-Im has revised its Loan, Guarantee, and Insurance Manual to further document its approach and has documented its overall processes related to fraud, including describing the roles and responsibilities of Ex-Im divisions and officials that are key participants in these processes. Therefore we consider these recommendations to be implemented and are taking actions to close them. Our August 2014 annual report on Ex-Im’s monitoring of dual-use exports also found weaknesses in Ex-Im’s documentation of required procedures. We found that Ex-Im had received some but not all of the information it required in its credit agreements regarding the three dual- use transactions it financed in fiscal year 2012, and that some of the information it had received was late. As a result, we found that Ex-Im did not have complete and timely information about whether the items were actually being used in accordance with the terms of the agreements and Ex-Im policy. We recommended that Ex-Im establish steps that staff should take in cases where borrowers do not submit required end-use documentation within the time frames specified in their financing agreements and ensure that these efforts are well documented. In response to our recommendation, Ex-Im revised its 1997 memorandum on the implementation of its dual-use policy for military applications to provide more specific guidance and disseminated the revised memo to relevant staff. 
During our current annual review of Ex-Im’s dual-use financing, we are following up with Ex-Im to see how this revised guidance is being implemented. In two May 2013 reports, we reported weaknesses in how Ex-Im estimated its future exposure and limitations in its calculations of the number of jobs its financing supports. We made two recommendations related to how Ex-Im prepares forecasts and one recommendation on its jobs impact reporting. Ex-Im agreed with all three recommendations and took actions to address them. In our May 2013 report on Ex-Im’s exposure and resources, we found weaknesses in the methodology Ex-Im used to forecast future financial exposure levels. Although Ex-Im’s forecast model is sensitive to key assumptions, Ex-Im had not reassessed these assumptions to reflect changing conditions, nor had it conducted sensitivity analyses to assess and report the range of potential outcomes. We made two recommendations to Ex-Im: (1) that Ex-Im compare previous forecasts and key assumptions to actual results and adjust its forecast models to incorporate previous experience and (2) that Ex-Im assess the sensitivity of the exposure forecast model to key assumptions and estimates and identify and report the range of forecasts based on this analysis. Ex-Im put in place new methodologies for its 2015 budget estimates. Specifically, Ex-Im compared the results of its existing authorization forecast method with actual results and enhanced its calculation of expected repayments and authorizations by incorporating historical experience into the methodology. Additionally, Ex-Im created statistical models to validate its forecasts and provide a range of estimates. Therefore, we consider these two recommendations implemented and closed. In our May 2013 report on the jobs Ex-Im’s financing supports, we found limitations in the data and methodology underlying Ex-Im’s estimates. For example, the employment data are a count of jobs that treats full-time, part-time, and seasonal jobs equally. In addition, Ex-Im’s calculations assume that the firm receiving Ex-Im support uses the same number of jobs as the industry-wide average, but Ex-Im’s clients could be different from the typical firm in the same industry. Ex-Im did not report these limitations or fully detail the assumptions related to its data or methodology. We recommended that Ex-Im improve reporting on the assumptions and limitations in the methodology and data used to calculate the number of jobs Ex-Im supports through its financing. Ex-Im’s 2013 and 2014 annual reports included greater detail on these issues; therefore, we consider this recommendation implemented and closed. In conclusion, our reviews of Ex-Im since the 2012 Reauthorization Act have identified a number of areas in which Ex-Im could improve its operations. Ex-Im has shown a willingness to reexamine its operations, agreeing with all of our recent recommendations and implementing a number of them. However, managing a large export financing portfolio with its wide variety of associated risks is challenging. Therefore, to sustain the improvements it has made and address emerging challenges, it will be important for Ex-Im to effectively implement remaining audit recommendations and carefully manage risks in the evolving global financial marketplace. Chairmen Jordan and Huizenga, Ranking Members Cartwright and Moore, and Members of the Subcommittees, this concludes my statement. I would be pleased to respond to any questions you may have. For further information about this statement, please contact me at 202-512-8612 or gianopoulosk@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Celia Thomas, Assistant Director; Kathryn Bolduc; Marcia Carlsen; Michael Simon; and Steve Westley.

The Chairman of the Export-Import Bank of the United States should direct the appropriate officials to develop and implement procedures, prior to loan guarantee approval, for (1) verifying that transaction applicants are not delinquent on federal debt, including using credit reports to make such a determination, and (2) performing assessments of collateral for nonaircraft medium- and long-term loan guarantee transactions.

The Chairman of the Export-Import Bank of the United States should direct the appropriate officials to establish mechanisms to oversee compliance with Ex-Im’s existing procedures, prior to loan guarantee approval, for (1) obtaining credit reports for transaction borrowers or documenting why they were not applicable; (2) documenting certain eligibility procedures, including the Character, Reputational, and Transaction Integrity reviews for medium- and long-term loan guarantee transactions, export item eligibility, and country eligibility; and (3) documenting the analysis of country exposure.

The Chairman of the Export-Import Bank of the United States should direct the appropriate officials to develop and implement detailed instructions, prior to loan guarantee approval, for (1) preparing and including all required documents or analyses in the loan file and (2) using credit reports in the risk assessment and due diligence process.

The Chairman of the Export-Import Bank of the United States should direct the appropriate officials to update the Character, Reputational, and Transaction Integrity review process to include the search of databases to help identify transaction applicants with delinquent federal debt that would then not be eligible for loan guarantees.

The Chairman of the Export-Import Bank of the United States should direct the appropriate officials to document Ex-Im’s current risk-based approach for scheduling delegated authority lender examinations.

The Chairman of the Export-Import Bank of the United States should direct the appropriate officials to document Ex-Im’s overall fraud process, including describing the roles and responsibilities of Ex-Im divisions and officials that are key participants in Ex-Im’s fraud processes.

To ensure adequate and consistent oversight for monitoring the end use of dual-use items, the Chairman of the Export-Import Bank of the United States should strengthen Ex-Im guidance for monitoring end use. Specifically, Ex-Im should establish steps staff should take in cases where borrowers do not submit required end-use documentation within the time frames specified in their financing agreements and ensure that these efforts are well documented.

To provide Congress with the appropriate information necessary to make decisions on Ex-Im’s exposure limits and targets and to improve the accuracy of its forecasts of exposure and authorizations, the Chairman of the Export-Import Bank of the United States should compare previous forecasts and key assumptions to actual results and adjust its forecast models to incorporate previous experience.
To provide Congress with the appropriate information necessary to make decisions on Ex-Im’s exposure limits and targets and improve the accuracy of its forecasts of exposure and authorizations, the Chairman of the Export-Import Bank of the United States should assess the sensitivity of the exposure forecast model to key assumptions and authorization estimates and identify and report the range of forecasts based on this analysis.

To help Congress and Ex-Im management understand the performance and risk associated with its subportfolios of transactions supporting the small business, sub-Saharan Africa, and renewable energy mandates, Ex-Im should routinely report financial performance information, including the default rate and risk rating, of these transactions at the subportfolio level.

To better inform Congress of the issues associated with meeting each of the bank’s percentage-based mandated targets, Ex-Im should provide Congress with additional information on the resources associated with meeting the mandated targets.

To ensure better understanding of its jobs calculation methodology, the Chairman of Ex-Im Bank should increase transparency by improving reporting on the assumptions and limitations in the methodology and data used to calculate the number of jobs Ex-Im supports through its financing.

To help improve the reliability of its loss estimation model, the Chairman of the Export-Import Bank of the United States should assess whether it is using the best available data for adjusting loss estimates for longer-term transactions to account for global economic risk.

To conduct future analysis comparing the performance of newer and older business and to make future enhancements to its loss estimation model, the Chairman of the Export-Import Bank of the United States should retain point-in-time, historical data on credit performance.

To help Congress better understand the financial risks associated with Ex-Im’s portfolio, the Chairman of the Export-Import Bank of the United States should report its stress test scenarios and results to Congress when such information becomes available.

To help manage operational risks stemming from Ex-Im’s increased business volume, the Chairman of the Export-Import Bank of the United States should develop workload benchmarks at the agencywide and functional area levels, monitor workload against these benchmarks, and develop control activities for mitigating risks when workloads approach or exceed these benchmarks.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As the export credit agency of the United States, Ex-Im helps U.S. firms export goods and services by providing financing assistance, including direct loans, loan guarantees, and insurance. Following the 2007 to 2009 financial crisis, Ex-Im's authorizations and financial exposure both increased rapidly. To strengthen Ex-Im, Congress mandated several reform measures in the Export-Import Bank Reauthorization Act of 2012 and also required certain reviews and reports by GAO and others. Since March 2013, GAO has issued four reports mandated by the act (GAO-13-303, GAO-13-446, GAO-13-620, and GAO-14-574). In addition, in August and July 2014, GAO reported on Ex-Im's financing of exports with potential dual military and civilian uses and provided information on aircraft financing by Ex-Im and other countries' export credit agencies, respectively (GAO-14-719 and GAO-14-642R). This testimony summarizes the findings and recommendations in those six recent reports, and provides updated information on the status of Ex-Im's actions taken to address GAO's recommendations. To update the status of its recommendations, GAO reviewed Ex-Im's modified and updated procedures and documentation and interviewed Ex-Im officials. GAO is not making any new recommendations in this testimony. In six reports on the U.S. Export-Import Bank (Ex-Im) issued since March 2013, GAO presented findings and made 16 recommendations to improve Ex-Im's operations, summarized in this testimony in three broad areas: (1) risk management, (2) underwriting and fraud prevention, and (3) forecasting its exposure and reporting on its estimates of its impact on U.S. jobs. Six of GAO's recommendations focus on improving Ex-Im's management of risks related to its overall portfolio. For example, in March and May 2013, GAO recommended addressing weaknesses in Ex-Im's model for estimating losses, data retained to analyze default risks, reporting of portfolio stress testing, and analysis of staff resources. Ex-Im has implemented all 6 of these recommendations. In September 2014, GAO found that Ex-Im had implemented many key aspects of its underwriting process but identified weaknesses in the design, implementation, and documentation of some procedures. For example, GAO found that Ex-Im did not have mechanisms to verify compliance with certain loan guarantee eligibility procedures and had not documented its overall processes related to fraud. Ex-Im has implemented 4 of the 6 recommendations in this report. It has not fully implemented 2 recommendations concerning assessing collateral on certain transactions and verifying that applicants are not delinquent on federal debt. GAO's August 2014 report on Ex-Im's transactions involving exports with potential dual military and civilian uses also found documentation weaknesses and made one recommendation. GAO is reviewing the status of Ex-Im's actions in the context of GAO's ongoing dual use review. Finally, in May 2013, GAO found weaknesses in how Ex-Im forecasts its aggregate outstanding amount of financing (exposure) and how it reports estimates of its impact on U.S. jobs. GAO recommended that Ex-Im (1) adjust its exposure forecast model to incorporate previous experience and (2) assess and report the model's sensitivity to key assumptions. GAO also recommended that Ex-Im improve reporting on the assumptions and limitations in its methodology and data for calculating the number of jobs it supports through its financing. Ex-Im has implemented GAO's 3 recommendations.
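For illustration, the following is a minimal sketch of the ratio-based jobs estimate described above, in which export value is multiplied by an industry-average jobs-to-output ratio; the industries, ratios, and export value shown are hypothetical, not Ex-Im figures.

```python
# Illustrative sketch of a ratio-based jobs estimate: export value is
# multiplied by an industry-average jobs-to-output ratio. The industries,
# ratios, and export value are hypothetical, not Ex-Im figures.

JOBS_PER_MILLION_OF_OUTPUT = {
    "aerospace products": 5.2,
    "construction machinery": 6.8,
}

def jobs_supported(export_value_dollars: float, industry: str) -> float:
    """Estimate jobs supported using the industry-average ratio. Note the
    limitations discussed above: the result is a count of jobs rather than
    full-time equivalents, and the exporter is assumed to match the
    industry-wide average."""
    return (export_value_dollars / 1_000_000) * JOBS_PER_MILLION_OF_OUTPUT[industry]

print(round(jobs_supported(250_000_000, "aerospace products")))  # 1300
```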
PBL is a method of providing support for weapon systems by designating what system performance is required, such as a given level of system availability, and placing the responsibility for how it is accomplished on the support provider, which manages resources to achieve performance objectives. Logistics support for almost all of DOD’s weapon systems, such as materiel management, maintenance, and engineering, is provided by a combination of government and private-sector sources. In the past, under traditional support arrangements, the government generally managed the provision of weapon system support, using a combination of support providers from the government and the private sector. PBL support arrangements often use a private-sector support integrator to manage support providers from both the public and private sectors to meet specified performance requirements. PBL evolved from performance-based service contracting, which has been used in both the public and private sectors. The Federal Acquisition Regulation defines performance-based contracting as structuring all aspects of an acquisition around the purpose of the work to be performed. The Federal Acquisition Regulation further defines the statement of work for a performance-based acquisition as describing the required results in clear, specific, and objective terms with measurable outcomes. Performance-based service contracting has been referenced in regulation, guidance, and policy for more than two decades, and federal agencies have used it to varying degrees for acquiring a range of services. In 1991 the Office of Management and Budget issued a policy letter establishing the use of a performance-based approach for service contracting, and in 1994 it initiated a governmentwide pilot project to encourage the use of performance-based service contracts in federal agencies, including DOD. In October 1997, the Federal Acquisition Regulation was amended to incorporate the Office of Management and Budget’s 1991 policy. The Federal Acquisition Regulation currently establishes a policy that agencies use performance-based contracting methods to the maximum extent practicable for the acquisition of services, with certain exceptions. Using performance-based service contracts is intended to offer a number of potential benefits, such as encouraging contractors to be innovative and to find cost-effective ways of delivering services for a fixed level of funding. By shifting the focus from process to results, these contracts can potentially produce better outcomes and reduced costs. The Office of Management and Budget reported that the agencies participating in the pilot reduced contract prices and improved customer satisfaction with contractor work after introducing performance-based contracting. As an approach for supporting military weapon systems, PBL emerged from a 1999 DOD study to test logistics reengineering concepts that placed greater reliance on the private sector for providing weapon system support to reduce support costs and improve weapon system performance. The goal was for the military departments to reduce costs and improve efficiency by pursuing logistics support “reengineering” efforts using contractors. The fiscal years 2001-2005 Defense Planning Guidance advanced this cost reduction effort by establishing a goal that directed each military department to reduce the operation and support costs of its fielded systems by 20 percent by the year 2005. 
During this time, the Under Secretary of Defense (Acquisition and Technology) directed the services to use an existing pilot program containing 30 weapon systems to demonstrate the type of cost savings depicted in the fiscal years 2001-2005 Defense Planning Guidance. The areas identified for potential cost savings were reducing demand on the supply chain by improving the reliability and maintainability of the equipment, reducing supply chain response time, and increasing competitive sourcing of product support. Some of the 30 pilot programs involved performance- type arrangements that the services subsequently converted to, or designated as, PBL arrangements. This emphasis on reducing costs through PBL implementation was reiterated in DOD’s 2001 Quadrennial Defense Review Report, which advocated the implementation of PBL to compress the supply chain by removing steps in the warehousing, distribution, and order fulfillment processes; to reduce inventories; and to decrease overhead costs while improving the readiness of major weapon systems and commodities. In November 2001, DOD identified PBL as the preferred weapons system support strategy. In May 2003, DOD further strengthened this emphasis on PBL by stating in a DOD policy directive that acquisition managers shall use performance-based strategies for sustaining products and services whenever feasible and PBL strategies shall optimize total system availability while minimizing cost and the logistics footprint. In concept, a properly structured PBL arrangement is supposed to provide a level of performance and also reduce costs over time. According to the DOD/Defense Acquisition University PBL guide, a key aspect of PBL is the inclusion of an incentive for the support provider to reduce costs through increased reliability. Further, PBL arrangements can inherently motivate support providers to improve component and system reliability, since such improvements can provide the foundation for increased profit over the long term. In other words, the support provider should have the incentive to make reliability improvements to ensure that performance metrics are met and also to increase profit by earning a performance incentive tied to the metrics (an award fee or award term) and by reducing costs while still being paid the agreed-upon, fixed price for the remainder of the contract. The DOD/Defense Acquisition University PBL guide also states that a critical element of the PBL arrangement that facilitates this incentive and motivation is contract length. Further, long-term contracts provide the support provider with confidence in continuing cash flows and provide sufficient time for receiving an adequate return on any investments made to improve reliability. In 1995, before DOD identified PBL as the preferred weapons system support strategy, DOD’s economic analysis instruction recommended using an economic analysis for evaluating options for weapon system support. The instruction stressed the importance of considering in the analysis both qualitative and quantitative factors. With respect to quantitative factors, the instruction recommended that costs and benefits be expressed in terms of net present value to account for the time value of money. These were also to be expressed as life cycle costs and benefits that were to be calculated and compared for each feasible alternative for meeting a given weapon system support objective. 
Specifically, the economic analysis instruction identified and characterized the following seven elements that should be present in an economic analysis:

objectives—to clearly identify the function to be accomplished and not to assume a specific means of achieving a desired result;

assumptions—to incorporate both actual data and future projections;

alternatives—to comprise a comprehensive list of the feasible and infeasible options, followed by a discussion of the infeasible options and comparisons of only the feasible options;

costs and benefits—to compare the quantitative (expressed in terms of net present value) and qualitative factors for each option;

sensitivity and uncertainty (risk) analyses—to determine the effect of uncertainties on the results of the analysis and to provide a range of costs and benefits;

a summary of the results of the analysis; and

a summary of the analysis’s recommendations.

In DOD’s 2001 PBL guide, in which DOD identified PBL as the preferred weapon system support strategy, the department recommended that for all new systems and fielded acquisition category I and II systems, program offices use an analytical tool called a business case analysis to support the decision to use PBL arrangements for weapon system support. In 2004 and 2005, DOD guidance on conducting business case analyses described this tool in less specific terms than those used to describe the criteria laid out in DOD’s economic analysis instruction. However, there are some common themes in the guidance and instruction. For example, a January 2004 Under Secretary of Defense (Acquisition, Technology and Logistics) memorandum on PBL business case analysis calls for an assessment of “best value,” or the greatest overall benefit. The DOD/Defense Acquisition University PBL guide repeats the characterization of the business case analysis as a tool with the goal of determining a best-value solution and suggests that at a minimum a business case analysis should include an introduction outlining the purpose and objectives of the program, an explanation of the methods and assumptions used in the analysis, calculations of the relative costs and benefits of each weapon system support option, the financial and nonfinancial impacts of each, a risk assessment, and a section with conclusions and recommendations. Finally, both DOD’s economic analysis instruction and the DOD/Defense Acquisition University PBL guide recommend documenting the results of the analysis, including all calculations and sources of data, down to the most basic inputs, to provide an auditable and stand-alone document. According to the guidance, a business case analysis must stand on its own and be able to withstand rigorous analysis and review by independent agencies. DOD’s 2004 and 2005 guidance on conducting business case analyses also recommended that program offices update their business case analyses at key decision points, both to validate the approach taken and to support future plans, and that they use certain criteria, such as the capability of a PBL arrangement to reduce the cost per operational unit of performance (i.e., cost per flight hour), to assess all acquisition category I and II programs without plans for a PBL arrangement for the potential application of a PBL strategy at the system, subsystem, or major assembly level. If the assessment showed potential for a PBL arrangement, a business case analysis should be conducted and completed by the September 30, 2006, deadline required by DOD’s Strategic Planning Guidance.
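Because both the economic analysis instruction and the PBL guide call for comparing alternatives in net present value terms, the following is a minimal sketch of such a comparison, using hypothetical cost streams and a hypothetical discount rate rather than figures from any program we reviewed.

```python
# Minimal sketch of a net present value comparison of two support
# alternatives. The cost streams and discount rate are hypothetical, not
# figures from any program discussed in this report.

def npv(annual_costs, discount_rate):
    """Discount a stream of annual costs (years 1..n) to present value."""
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(annual_costs, start=1))

alt_a = [150, 120, 90, 70, 60]     # e.g., a PBL arrangement, $ millions per year
alt_b = [100, 100, 100, 100, 100]  # e.g., government-provided support, $ millions per year
rate = 0.05                        # hypothetical discount rate

print("Undiscounted totals:", sum(alt_a), sum(alt_b))  # 490 vs. 500
print("Net present values:", round(npv(alt_a, rate), 1), round(npv(alt_b, rate), 1))
# Alternative A looks cheaper on an undiscounted basis but is more expensive
# in present value terms because its costs fall earlier in the period -- the
# same kind of reversal discussed later for the F/A-18 E/F analysis.
```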
In our review, we looked at PBL arrangements initiated as early as 1996 (when performance-based contracting was encouraged governmentwide) and as recently as 2007 (by which time, at the DOD level, PBL arrangements had moved from being encouraged to being required whenever feasible). These PBL arrangements represent contract values totaling approximately $12.1 billion and range from a low of $10.5 million to a high of $4.9 billion. Table 1 lists, by service, the weapon system programs supported by the 29 PBL arrangements we reviewed. DOD has generally not used business case analyses consistently or effectively to influence decision making regarding the use of PBL. Although DOD guidance recommended that these analyses be used to guide decisions on the cost-effectiveness of weapon system support arrangements, about half of the programs we reviewed either did not conduct such an analysis or did not retain adequate supporting documentation. Further, most of the remaining programs in our sample used analyses that were not comprehensive. For example, some analyses did not evaluate alternative support options and most did not contain all of the elements recommended in DOD’s economic analysis instruction. Additionally, analyses were often not updated to support decision making after PBL implementation in accordance with service policies and guidance. The key reasons for DOD’s ineffective use of business case analyses to support PBL decision making are that (1) DOD has not required such analyses or provided specific criteria for conducting them and (2) the services’ internal controls have been inadequate to ensure that the analyses are performed and updated. As a result, DOD cannot ensure that decisions regarding weapon system support options are guided by comprehensive, consistent, and sound analytical tools. Further, the department cannot be sure that the support arrangements being implemented will result in the most cost-effective support program. For 9 of the 29 PBL arrangements we reviewed, a business case analysis had not been completed. Additionally, for 6 others, program officials could not provide supporting details of the business case analysis they told us that they had conducted. When program offices did not conduct business case analyses as part of their PBL decision making, as we found for many of the programs we reviewed, the department cannot be sure that the support arrangements implemented will result in the most cost-effective support program. Table 2 provides the number of PBL arrangements we reviewed by service that were initiated with and without the use of a business case analysis. Although both of the Marine Corps programs we reviewed conducted a business case analysis, about 50 percent of Air Force programs, 22 percent of Army programs, and 25 percent of Navy programs did not. In general, the Air Force programs only developed a source-of-repair analysis, which evaluated only the repair element of weapon system support. For the two PBL arrangements for which the Army did not conduct an analysis—the Apache sensor and airframe arrangements—the Deputy Assistant Secretary of the Army (Integrated Logistics Support) approved the Apache program office’s request for a waiver from Army business case analysis policy based on prior analyses and program decisions. However, the U.S. Army Audit Agency reported that the prior analyses did not consider all components included in the two PBL arrangements, other support strategies, performance metrics, and relative costs. 
According to an Army official, a business case analysis for the airframe components is being conducted and is expected to be completed in December 2008, and efforts to develop a business case analysis for the program’s sensors are expected to begin in November 2008. The F-22A Raptor and KC-130J are examples of programs where the Air Force and Navy did not conduct a business case analysis as part of the PBL decision-making process. When DOD recommended in 2001 that program offices fielding new systems base PBL arrangement decisions on business case analyses, the F-22 was beginning low-rate initial production. In 2002, the Assistant Secretary of the Air Force (Acquisition) and the Air Force Deputy Chief of Staff (Logistics, Installations and Mission Support) directed the F-22 program office to develop a long-term support strategy and manage life cycle product support through a PBL arrangement that includes government-contractor partnerships as necessary. From 2003 through 2007, the program office acquired support as part of the aircraft’s production contract, and in 2008 the office signed a separate PBL support contract with Lockheed Martin, one of the original equipment manufacturers, to support the aircraft from 2008 to 2012. In March 2008, program officials said that they did not conduct a business case analysis before awarding the 2008 contract because current program data, such as material usage rates and costs, are immature. Officials planned to conduct an analysis in 2010 or 2011 when it could be completed using more meaningful data. However, program officials subsequently decided that the available data were sufficient and in July 2008 awarded a contract to develop a business case analysis. Completion of the analysis is expected in late 2009. In 2002, the Navy contracted for a PBL arrangement to support the Marine Corps’ KC-130J engines without first preparing a business case analysis. Program officials explained that the decision was made not to develop an analysis because the technical data needed to repair the engines were not available and the Marine Corps did not have the infrastructure necessary to support the system. Officials also said that a market analysis was conducted prior to implementing the PBL arrangement, but they could not provide a copy of the analysis. Nonetheless, a business case analysis that documented the results of the market analysis and the program’s negotiated costs versus expected flight hours and anticipated repairs and maintenance could have been used to monitor the actual results and cost- effectiveness of the performance-based approach, especially since support for the engines is obtained under a commercial contract and the contractor does not provide detailed cost data to the government. Program officials from 20 of the PBL arrangements we reviewed told us that they had conducted business case analyses before implementing the arrangements; however, officials for 6 programs could not provide all or some of the data to support the analyses. According to DOD’s economic analysis instruction, the results of the analysis, including all calculations and sources of data, must be documented down to the most basic inputs to provide an auditable and stand-alone document. Table 3 lists by service the number of PBL arrangements for which all of the business case analysis documentation was retained and those for which it was not. 
In general, program officials for six programs said that they were unable to locate all the details of their analyses; however, the amount of data that was missing varied. For example: Although officials for the Army’s Common Ground Station said that an analysis was performed in 2002, they were unable to provide any details of the analysis or the results. While program officials for the Army’s Shadow Tactical Unmanned Aircraft System were able to provide the results of their 2002 analysis, they did not retain the details regarding the assumptions, data sources, or calculations used to develop the analysis. However, program officials said that the analysis was developed early in the life cycle of the program and was not based on historical cost and maintenance data, and therefore they did not consider it to be very accurate based on actual program results that have occurred since the analysis was developed. For the Army’s Javelin PBL arrangement, the DOD Office of the Inspector General reported in 2005 that it was unable to validate the program office’s 2001 analysis because the program office was not able to provide adequate documentation. The documentation has not been located, and program officials were only able to provide a summary of the results of the analysis. Service program officials could provide documentation of the business case analyses conducted for 14 of the PBL arrangements we reviewed, but all but 1 of the 14 analyses were missing one or more of the elements recommended in DOD’s economic analysis instruction. As a result, decisions regarding weapon system support options for many of the programs we reviewed were not guided by comprehensive, consistent, and sound economic analysis. Further, the department cannot be sure that the support arrangements implemented will result in the most cost-effective support programs. Figure 1 shows which elements were missing from the 14 business case analyses. For three PBL arrangements, the business case analyses did not compare alternative support options and either evaluated only a single option for weapon system support or evaluated contracting strategies instead of alternative support arrangements. For example, the 2007 business case analysis for the Joint Surveillance and Target Attack Radar System did not analyze the costs and benefits of alternative support strategies for the program. The business case analysis was developed in response to a 2006 recommendation from the DOD Office of the Inspector General after a review found that the program office had not evaluated alternative support strategies prior to implementing a PBL arrangement in 2000. However, the 2007 analysis covered only the potential impacts of changing the type of contract used to obtain support for the program from cost plus award fee to firm fixed price. The business case analysis for the B-2 also did not analyze alternative support strategies but focused on potential efficiencies available to the current program through funding consolidation, funding stability, and long-term contracting. According to program officials, the only assumption in the analysis that actually occurred to some extent after PBL implementation was funding consolidation. Finally, although the C-17 program office developed a business case analysis in 2003, the DOD Office of the Inspector General reported in 2006 that the analysis focused only on one support option and did not evaluate the costs and benefits of multiple support options. 
In 2007, a contract was awarded for development of another business case analysis planned for completion prior to awarding the next C-17 support contract. Two other important elements missing from some analyses were an evaluation of costs over the remaining life cycle of the program and the calculation of net present value to account for the time value of money. For example, the analysis for the Patriot PBL arrangement only evaluated the costs and benefits over a 3-year period. On the other hand, while the business case analysis for the Assault Breacher Vehicle evaluated costs over the 20-year life cycle of the program, the net present value of the alternatives was not calculated. Four other business case analyses—those prepared for the F/A-18 E/F, AV-8B Harrier, Close-In Weapon System, and Consolidated Automated Support System—did not include these two elements and several others, such as sensitivity or risk analysis. These analyses were prepared in a similar format by the Naval Inventory Control Point, an organization that provides supply support to the Navy, Marine Corps, and others. We conducted a net present value analysis on the amounts contained in the Naval Inventory Control Point’s business case analysis for the F/A-18 E/F PBL arrangement and found that the PBL option it chose was about $1.7 million more expensive than the alternative option. Its analysis, which did not use net present value, found that the PBL option was about $277,000 less expensive. The Naval Inventory Control Point’s philosophy is that if the costs of PBL are equal to or less than the costs for government-provided support, a PBL arrangement will be used. Therefore, if Naval Inventory Control Point officials had conducted a net present value analysis, based on this decision criterion, they would not have implemented the PBL arrangement. According to Naval Inventory Control Point officials, there is confusion in the Navy regarding the purpose of the analyses they prepare. Officials said that the analyses were conducted for internal decision making and were not intended to satisfy Navy PBL policy, which places responsibility for development of a life cycle business case analysis on the weapon system program office. However, program officials for the Close-In Weapon System, Harrier, and Consolidated Automated Support System did not develop business case analyses that evaluated PBL implementation over the remaining life cycle of the system. Several other factors affected the quality of the business case analyses we reviewed. Most of the analyses we reviewed did not identify and quantify the benefits that could be expected from contractor incentives to increase reliability or improve processes to reduce support costs. The only business case analysis that specifically identified cost savings based on contractor incentives to reduce costs was the 2005 life cycle business case analysis for the F/A-18 E/F. The life cycle analysis included an estimate of future cost savings associated with investments the contractor was willing to make under the arrangement to reduce supply chain costs. In addition, most of the analyses did not recognize or quantify the costs associated with the transfer of risk that occurs under a performance-based support arrangement. According to the DOD/Defense Acquisition University PBL guide, PBL arrangements transfer responsibility for making support decisions—and corresponding risk—to the support provider, and risk is one of the major cost drivers for contractors. 
Therefore, the use of performance metrics could introduce a large element of risk for the contractor that may be built into the costs of such an arrangement. In general, many of the business case analyses we reviewed simply estimated the costs for contract logistics support and government-provided support. One exception was the business case analysis for the Marine Corps’ Assault Breacher Vehicle, which attempted to address the costs of risk transfer and the benefits of contractor incentive to reduce costs by estimating the costs for a traditional contractor logistics support arrangement and a performance-based contractor logistics support arrangement, in addition to estimates for government-provided support and performance-based, government-managed support. Another business case analysis was based on questionable assumptions. The 2002 business case analysis for the Sentinel program estimated that the cost for the government depot to overhaul the system was 50 percent of the total cost of the system. The business case analysis estimated that for the alternative option—a partnership between the government and the contractor with the government depot performing enough work to meet the system’s core requirements—the cost for an overhaul was only 25 percent of the system’s cost. The analysis assumed that under the partnership option the overhaul cost less because, instead of complete disassembly and parts replacement, the system would be inspected and repaired only as necessary. However, according to an official at the Army depot that would perform the overhaul, the depot also could have used the inspect and repair concept as the basis for its maintenance work. Therefore, this assumption in the business case analysis may have overstated the costs for the government depot to perform the work. Lastly, the Naval Air Systems Command’s 2005 life cycle business case analysis for the F/A-18 E/F estimated that over the 28-year life cycle of the program, PBL support costs were $76 million more expensive than costs for government-provided support. However, the business case analysis estimated that Naval Inventory Control Point surcharges would be $325 million less for the PBL arrangement. The Naval Inventory Control Point adds a surcharge to the cost of goods sold to its customers (including weapon system program offices) to recoup its expenses, but according to officials, they do not adjust their resources based on PBL implementation and would still need to recoup their expenses through surcharges to other customers. Therefore, while the F/A-18 program office may realize a $325 million benefit from the reduced surcharge, the overall costs to the Navy may remain the same. Including this reduced amount in the analysis is inconsistent with DOD’s economic analysis instruction, which states that all measurable costs and benefits to the federal government should be included in the analysis. In addition, in the 2006 business case analysis prepared by the Naval Inventory Control Point, which estimated supply chain management costs for both PBL and government-provided support for a 5-year period from 2006 through 2011, the Naval Inventory Control Point’s costs were estimated to remain the same under either option. If the Naval Inventory Control Point’s costs had been the same in the life cycle business case analysis prepared by the Naval Air Systems Command for the F/A-18 E/F, the PBL arrangement would be $401 million more expensive than government support for the 28-year period. 
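The arithmetic behind the $401 million figure follows directly from the amounts above; the brief sketch below makes the adjustment explicit.

```python
# Arithmetic implied by the figures above (in millions of dollars). The $325
# million surcharge reduction benefits the F/A-18 program office, but if the
# Naval Inventory Control Point's own costs do not change, it is not a saving
# to the Navy as a whole.

pbl_minus_government_as_analyzed = 76    # PBL costlier by $76 million with the surcharge benefit counted
surcharge_benefit_counted_for_pbl = 325  # lower surcharges credited to the PBL option

# Removing the surcharge benefit widens the estimated gap between PBL and
# government-provided support from the Navy-wide perspective.
print(pbl_minus_government_as_analyzed + surcharge_benefit_counted_for_pbl)  # 401
```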
In 2004, DOD guidance recommended that business case analyses continue throughout the weapons system’s life cycle and be updated at key decision points both to validate the approach taken and to support future plans. The services, with the exception of the Air Force, have also issued policies on conducting such updates. However, even when business cases were prepared, we found that program offices often did not update them in accordance with DOD’s guidance and service policies, nor did the program offices validate them against actual support costs for decision making after PBL implementation. Neither DOD nor the services has issued guidance that specifies what should occur when updating or validating a business case analysis. Army policy states that PBL business case analyses shall be updated prior to the exercise of an option period when there are significant changes to the performance period/terms of the contract or evaluation period, or updated whenever there are major programmatic changes or at least every 5 years. Program offices for four of the Army PBL arrangements we reviewed had not updated their business case analyses as called for by Army policy. For example, the Tube-launched Optically-tracked Wire- guided missile – Improved Target Acquisition System program office developed a business case analysis in 1998 before awarding the original PBL contract and has not updated or validated the original analysis. Further, officials negotiated a follow-on PBL contract in 2007 after the terms of the original contract were complete. Although program officials for the Shadow Tactical Unmanned Aircraft System had planned to complete an update to their 2002 business case analysis by the end of 2007, the effort was delayed and the update is expected to be completed before the end of 2008. Although the Javelin program office implemented a PBL arrangement in January 2004, the business case analysis was developed in 2001, and program officials do not have plans to update the analysis. Additionally, although program office officials for the Army’s Sentinel weapon system had updated their 2002 business case analysis, Army policy calls for submission of the business case analysis to both the Office of the Deputy Assistant Secretary of the Army for Integrated Logistics Support and Army Materiel Command Headquarters for review and concurrence and then to the Program Executive Office for approval. In February 2008, before the new analysis had been reviewed and formally approved, another PBL contract was awarded. The Navy’s 2007 business case analysis guidance calls for updates every 3 to 5 years or when significant programmatic changes occur. Based on this policy, the T-45 program office should complete an update to its business case analysis for the PBL arrangement for support of the aircraft’s engines by 2008. Although program officials updated their 2003 analysis in 2006 with actual support costs and flying hours, they did not expand the analysis to account for the remaining life cycle of the engines. The analysis projected costs only through 2008, the original contract period. Program officials did not plan to further update the business case analysis or prepare another one because they believed that it was not required. Neither DOD’s nor the services’ policies clearly specify what should occur when a program office updates or validates a business case analysis. 
Although some programs are conducting another business case analysis, as mentioned earlier, program officials for the T-45 did not plan to conduct another analysis because they had updated their analysis with actual data. Program officials for the V-22 engine updated their 1998 analysis in 2004. The update focused on assessing if several of the ground rules, assumptions, and factors used in the original study were still valid and providing a preliminary recommendation on pursuing a follow-on PBL contract, from a cost standpoint. However, the entire analysis was not updated.

Business case analyses were inconsistently used for PBL decision making because DOD did not require that the analyses be conducted and updated or provide specific criteria to guide their development. Further, with the exception of the Army, the services have not established effective internal controls to ensure that the analyses are prepared in accordance with service policies and guidance. As a result, DOD cannot ensure that decisions regarding weapon system support options are consistently guided by comprehensive and sound analytical tools.

DOD guidance has not provided specific criteria for conducting and updating business case analyses for PBL decision making. Despite DOD's preexisting economic analysis instruction recommending the analysis of both quantitative and qualitative factors, in 2001 DOD recommended the development of a business case analysis prior to implementing a PBL arrangement but provided few criteria for conducting such an analysis. In 2003, a Defense Business Board study recommended that DOD issue standard guidance for the services to take a more consistent approach to PBL decision making and also require the use of business case analyses. In response, the January 2004 Under Secretary of Defense (Acquisition, Technology and Logistics) memorandum was issued containing "guiding principles" for business case analyses. The memorandum stated that business case analyses "will evaluate all services or activities needed to meet warfighter performance requirements using 'best value' assessments." This memorandum also listed several quantitative and qualitative factors for consideration when developing a business case analysis; however, it did not indicate how these factors were to be evaluated or their relative importance in the decision-making process. The memorandum also recommended that business case analyses be updated or repeated to validate the approach taken or to support future plans, but did not provide specific guidance as to when such updates should occur. According to the memorandum, a DOD PBL business case analysis handbook was supposed to be forthcoming. Later that year, the DOD/Defense Acquisition University PBL guide was published. It had two pages dedicated to the business case analysis concept—providing additional criteria and also incorporating the guiding business case analysis principles. However, a handbook specifically for PBL business case analyses was never issued. In 2003, when DOD incorporated PBL implementation into DOD Directive 5000.1, which provides mandatory policies for all acquisition programs, a requirement to conduct and update a business case analysis was not included. Specifically, the directive only stated that acquisition managers shall use performance-based strategies for sustaining products and services whenever feasible and that such PBL strategies shall optimize total system availability while minimizing cost and logistics footprint.
Also, despite the Defense Business Board’s recommendation later that same year to require the use of business case analyses, DOD subsequently neither required program managers to prepare the analyses prior to PBL implementation nor required them to update the analyses after implementation. In fact, although most of the services have issued some guidance and requirements for business case analyses, the current Defense Acquisition Guidebook no longer specifically refers to a business case analysis, but rather recommends the development of a “support strategy analysis” as part of the PBL implementation process. According to the guidebook, the support strategy analysis can be a business case analysis, economic analysis, decision-tree analysis, or other best-value-type assessment. Another reason for the inconsistent use of business case analyses is that the services’ policies and guidance for conducting the analyses were slow to develop and were generally not enforced because of a lack of effective internal controls. Moreover, we found inconsistencies among the services’ policies and guidance. In response to DOD’s recommendation that program offices conduct a business case analysis prior to implementing a PBL arrangement for weapon system support, the services issued their own policies and guidance. The time frames for these are summarized in table 4. Although DOD recommended the use of business case analyses in 2001, the services’ business case analysis policies and guidance have evolved over time. In some cases, guidance was not issued until years later. For example, Marine Corps policy did not call for PBL business case analyses until 2007. Further, although the Air Force included business case analyses among mandatory procedures in 2004, these procedures were not specific. The Air Force’s instruction states only that “the program manager is responsible for construction of a business case analysis to determine the best strategy for meeting the PBL goals.” Final Air Force guidance for business case analyses, including PBL business case analyses, was not issued until 2008. As another example, the Army’s early business case analysis guidance was general in nature, and more specific policy memorandums were issued in 2005 and 2006. In 2007, these policies were included in an Army regulation. Currently, service policies and guidance vary with respect to which programs should implement PBL arrangements, which of those programs shall conduct business case analyses, and how often program managers should update their business case analyses. Until 2007 the services’ policies and guidance varied significantly. However, the issuance of Navy guidance and Marine Corps policy in 2007 resulted in more consistency. Table 5 summarizes the services’ business case analysis policies and guidance. With the exception of the Army, the services have not established the internal controls, including a review and approval process, necessary to ensure that business case analyses are conducted prior to PBL implementation and updated after implementation. For example, the Navy’s 2003 guidance assigns responsibility for reviewing individual business cases analyses to the system commands’ cost departments. However, the review only occurs when requested. 
Although a recently issued Air Force instruction calls for a formal review of all business case analyses, including those conducted for PBL arrangements, that meet certain criteria, it is unclear how many PBL business case analyses will meet any of the criteria and be subject to this review. The 2008 Air Force instruction calls for a review of all business case analyses that will be (1) forwarded outside of the Air Force; (2) forwarded to senior Air Force officials, such as the Secretary of the Air Force; and (3) provided for weapon systems that require Defense Acquisition Board or Air Force Acquisition Board approval. In contrast, Army policy states that program managers shall report semiannually on the status of PBL implementation and that business case analyses for acquisition category I and II programs should be submitted for review and verification to multiple offices—including Army headquarters, the Army Materiel Command, and the Office of the Deputy Assistant Secretary of the Army (Cost and Economics)—and for approval to the Army Acquisition Executive. In addition, business case analyses for lower-level programs should be reviewed and approved but will not be verified by the Office of the Deputy Assistant Secretary of the Army (Cost and Economics), and the approval authority is the program executive officer or commander of the related life cycle management command. While the Army's policy first provided for these internal controls in 2005, Army officials said that no programs have yet passed the review and approval process completely.

The extent to which PBL arrangements are reducing costs for weapon system support is unclear and generally remains undocumented even after several years of PBL implementation. A major difficulty in assessing the cost impact of PBL arrangements is the lack of detailed and standardized cost data maintained by the program offices. Various other factors, such as the lack of systems that are supported by both PBL and non-PBL support arrangements, the lack of sound program baseline information, and changing operational and materiel conditions, also limited our ability to assess the impact of PBL implementation on support costs. While the overall cost impact was unclear because of a lack of data and these other factors, the limited evidence on cost impact that was available showed mixed results. We did find some evidence that a few PBL arrangements have reduced costs. However, we also found that characteristics of DOD's PBL support arrangements, such as short-term contracts and unstable program requirements and funding, may limit their potential to reduce costs. Further, DOD has not sufficiently emphasized the potential to reduce costs as a goal for PBL programs. As a result, DOD cannot be assured that PBL arrangements will reduce support costs and provide cost-effective support for DOD systems.

In 2004, a memorandum from the Under Secretary of Defense (Acquisition, Technology and Logistics) recognized the importance of cost data for contract management and future cost estimating and price analysis and stated that PBL contracts shall include cost reporting requirements. However, for the PBL arrangements we reviewed, program offices often did not have detailed cost data that would provide insights regarding what the program office was spending for various aspects of the support program—such as the cost of depot maintenance by subsystem and major component or the cost of engineering support, supply support, and transportation.
When cost data were available, the level of detail and format of cost data varied considerably. This condition significantly affected our ability to determine the impact of the implementation of PBL on the costs of supporting the systems in our sample, as many factors influence support costs. For PBL arrangements using fixed-price contracts or fixed-price contract line items—DOD's "ideal" type of PBL arrangement—we found that program offices generally did not receive detailed cost data for the program and only knew the overall amounts paid for support. Only two program offices in our sample obtained contractor support cost data for their fixed-price PBL arrangements, and the format and contents of the reports were very different. For example, the F/A-18 E/F program office obtained Boeing's report on fiscal year 2006 costs, including general/administrative costs and profit, in a detailed reporting structure approved by the Office of the Secretary of Defense (OSD), Cost Analysis Improvement Group. According to program officials, an annual cost reporting requirement was included in the 2005 fixed-price PBL contract to provide cost visibility into the program and was added at no additional cost to the government. In contrast, the B-2 program office receives a monthly funds utilization report that allocates the amount the Air Force pays the contractor into seven high-level categories, such as planned depot maintenance. Although the PBL arrangements that used cost-reimbursable contracts generally obtained more detailed cost data than those with fixed-price contracts, the format and level of detail also varied. For example, under the 1998 PBL contract, the F-117 program office did not receive cost data in a format that was detailed enough to report in OSD's standard support cost structure. The program office subsequently required more detailed cost data reporting from Lockheed Martin in the 2006 PBL contract. As another example, the 2003 C-17 PBL contract has both fixed-price and cost-reimbursable elements. According to program officials, Boeing did not report support cost data at the level of detail necessary to report in OSD's support cost structure under the contract. According to an Air Force Cost Analysis Agency official, a cost-reporting requirement was included in the contract's option years and more detailed cost reporting will begin in fiscal year 2009.

Although cost data were generally lacking, the limited available evidence on cost impact showed mixed results. Data we reviewed for the two systems that were managed by both PBL and non-PBL arrangements indicate that the PBL arrangements were more costly, but based on other assessments of available data, there are some indications that PBL arrangements can reduce costs. However, in seven out of eight programs we reviewed where follow-on, fixed-price PBL contracts had been negotiated, expected cost reductions either did not materialize or could not be determined. Finally, we noted that officials reported performance levels for some programs that were significantly higher than required under the PBL arrangement, but it is unknown whether the required levels could be achieved at a lower cost.

Of the 29 programs we reviewed, only the F100-PW-220 engine and the KC-130J/C-130J airframes are maintained by both PBL arrangements and traditional government support strategies. We found that the Air Force's traditional support arrangement for the F100-PW-220 engine costs slightly less than the Navy's PBL arrangement for the same engine.
The Navy uses the F100-PW-220 engines in its F-16A/B aircraft and sustains the engines through a PBL contract with Pratt & Whitney. The Air Force uses the same engines in its F-16 and F-15 aircraft and supports the engines at the Oklahoma City Air Logistics Center. The Air Force maintains an engine total ownership cost estimate that includes all costs incurred (depot-level repairables, general services division (expendable repair parts), depot programmed equipment maintenance, organizational-level maintenance, intermediate-level maintenance, and continuous improvement program). To compare the Navy's PBL costs with the Air Force's engine total ownership costs, we removed the costs associated with organizational-level maintenance from the Air Force's costs. As shown in figure 2, converted to costs per flight hour, the Navy's PBL costs were slightly higher than the Air Force's costs in constant fiscal year 2008 dollars. Although the cost difference appears to be decreasing, the Navy's 5-year contract ended in 2008 and a new PBL contract has not yet been negotiated. The engines are currently being supported under a 6-month extension to the original contract, and the fixed price paid per engine cycle is significantly higher than that paid during the previous 5 years. According to Navy officials, the decision to contract with Pratt & Whitney for the support of the Navy's engines was not solely based on costs but was also based on other factors, such as turnaround time for engine repairs. However, program officials could not provide the data on which they based their decision.

Elements of the Air Force's PBL arrangement to support the C-130J airframe are more expensive than the support for the KC-130J airframe provided by the Navy. According to Navy officials, an analysis was prepared in 2005 to compare costs for alternative repair arrangements to determine whether to continue using the Navy's repair capability or to transition to contractor-provided repair in 2006. The Navy's analysis concluded that the support provided by the Naval Surface Warfare Center, Crane Division, would cost 43 percent less than the support provided by the contractor. The analysis was based on anticipated 2006 flight hours, actual government support costs from 2005, and the costs to exercise an option for repair under a preexisting contract. Additionally, we independently compared overall costs for inventory management and repair of repairable components and found that the Air Force's PBL costs on a per flight hour basis for these elements were significantly higher than the Navy's costs—approximately 131 percent higher in 2006 and 164 percent higher in 2007. However, according to officials, several factors account for some of the difference. For example, the Air Force's PBL arrangement covers 36 percent more consumable and repair parts than the Navy's arrangement, as well as maintenance of support equipment and support for six locations, while the Navy's arrangement covers only three locations.

Only a few of the programs we reviewed were able to provide some indicators of reduced weapon system support costs that could be attributed to the use of a PBL arrangement. As mentioned earlier, some programs did not have a business case analysis demonstrating how current support costs compared to other support approaches. Of the nine PBL arrangements that had been implemented and had a business case analysis that looked at alternative support options, only four could be compared with PBL contract costs.
Based on this comparison, three of these four PBL arrangements indicate potential savings from PBL implementation, while the fourth is more expensive than estimated in the business case analysis. The remaining analyses could not be compared to actual program costs after PBL implementation because of programmatic changes that occurred after the analyses were conducted. The 2005 business case analysis for the Army's Patriot estimated a 3-year cost savings of $1.6 million from using a PBL arrangement to provide 107 high-demand parts. According to a program official, the contract is in its final year and total obligations are expected to be about $1 million less than estimated in the business case analysis. Additionally, two business case analyses prepared by the Naval Inventory Control Point estimated that supply chain management support costs were reduced by implementing a PBL arrangement. The business case analyses projected cost savings of about $2.2 million for the 5-year Close-In Weapon System PBL arrangement awarded in 2006 and $1.3 million for the 5-year Harrier PBL arrangement awarded in 2007. Based on actual contract costs—and if the contracts are not modified in the future—the total savings for these programs are projected to be $5.2 million and $5.8 million, respectively. Although the F/A-18 E/F business case analysis estimated a 5-year supply chain management savings of approximately $1.4 million, the actual contract cost is about $1.6 million more than the estimated amount in the analysis. Given the difference, the PBL arrangement has not reduced support costs for the program.

As previously noted, two of the PBL arrangements having evidence of reduced support costs are managed by the Naval Inventory Control Point. This activity has used PBL arrangements since fiscal year 2000 and has reported achieving cost reductions as a result, using the Navy working capital fund to issue longer-term, multiyear contracts that can extend up to 5 years in length to support aircraft or ship subsystems or components. According to agency officials, these longer-term agreements have enabled the Naval Inventory Control Point to guarantee the contractors a more stable business base, which provides contractors an incentive to make investments to reduce costs. Overall, as a result of using PBL arrangements, Naval Inventory Control Point officials estimate that they have reduced support costs by approximately $26.7 million and $63.8 million—or 2.8 and 5.8 percent—in fiscal years 2006 and 2007, respectively.

Although the V-22 program conducted a business case analysis in 1998 to estimate alternative costs for supporting the engines and projected savings of $249.5 million over the 53-year life cycle of the program, the analysis did not take into account the time value of money and calculate savings based on net present value. For this and other reasons, we cannot validate that the savings are reasonable. In addition to DOD's economic analysis instruction, guidance from the Office of Management and Budget also states that net present value is the standard criterion for deciding whether a government program can be justified on economic principles. In 2004, another analysis was prepared for the V-22 engine program to determine (1) if several assumptions used in the 1998 analysis were still valid and (2) the impact of any changes to those assumptions on the cost savings estimate for the PBL arrangement.
The later analysis concluded that differences in three of the original assumptions increased the projected PBL cost savings to $305.9 million—an increase of $56.4 million. The updated savings again were not calculated using net present value. Moreover, although limited actual data were available, the calculations generally made adjustments using assumptions that generated the maximum potential savings for the PBL alternative. For example, when adjusting the 1998 analysis to account for differences in the costs experienced for excluded repairs (repairs that were not covered by the PBL arrangement), the total potential PBL cost savings were increased by $48 million because the average excluded repair cost was lower than previously estimated. However, even though data showed that excluded repairs occurred at a higher frequency than projected in the original analysis, the later analysis did not adjust for the higher frequency of excluded repairs. Thus, the savings calculation is questionable, because the analysis noted that the frequency of these repairs could eliminate all of the estimated cost savings. Finally, the 10-year-old analysis has not been completely updated to estimate costs based on actual data.

The remainder of the analyses could not be compared to current PBL arrangement costs because of programmatic changes that occurred after the analyses were conducted. For example:

• According to an Air Force C-130J program official involved in the development of the 2004 business case analysis, the analysis was conducted while the aircraft was supported by a commercial contract; therefore, the program office did not have detailed cost data on which to base the estimate. The estimate was developed, in part, using cost data from other legacy programs and other assumptions that program officials said did not turn out to be accurate. Thus, though the business case analysis helped program officials develop the program's support strategy, the cost estimates contained within are not useful for monitoring current program costs.

• The 2002 business case analysis for the Army's Sentinel PBL arrangement estimated costs for a 26-year period beginning in 2003 using a fleet size ranging from 126 to 198 radars. According to program officials, since 2003 the fleet size has ranged from 140 to 143 radars and additional radars are not anticipated. Although a new business case analysis was prepared, it had not completed the Army's review and approval process at the time of our review.

Few of the remaining programs in our sample could document cost reductions attributable to the use of a PBL arrangement after negotiating a follow-on fixed-price contract. The PBL concept envisions that support providers are incentivized to improve reliability to ensure that performance metrics are met and reduce their costs to provide support to increase profits—especially under fixed-price arrangements. To the extent practicable, we examined how contract costs changed for eight programs in our sample that negotiated follow-on contracts or priced previously unpriced contract options after completing fixed-price PBL contracts. According to officials, a variety of factors affected the support costs negotiated in the PBL contracts that caused both cost increases and decreases. Only one program had decreasing support costs that program officials attributed to the use of a PBL arrangement. One additional program supported under a cost-plus-award-fee contract also reduced support costs by changing the metrics included in the contract.
However, these two programs did not have updated business case analyses that analyzed alternative support options over the remaining life cycle of the program. Finally, only one program office had developed a methodology for tracking and verifying reliability improvements made under the PBL arrangement, although this is necessary for quantifying the related cost savings.

Support costs for the Navy's Consolidated Automated Support System have decreased over the 8-year PBL arrangement that began in April 2000. Program officials attribute the cost reductions the program has experienced to the implementation of a PBL arrangement. Depending on the level of support chosen, the fixed price charged for the annual support of a test station decreased by 20 to 53 percent (constant 2008 dollars) from 2000 through 2008. Program officials said that they closely monitored maintenance data and failure rates in order to negotiate lower fixed prices where possible. In addition, officials said that they were able to increase the number of repair and consumable parts covered under the arrangement over the years. According to officials, prior to the implementation of the PBL strategy, support costs for the program were even higher, but officials were unable to locate the contracts.

Although support costs for a few of the other seven programs decreased, officials said that there were a number of other factors that influenced costs, such as changes in the scope of work or planned usage of the systems. For example, according to Tube-launched Optically-tracked Wire-guided missile – Improved Target Acquisition System program officials, a variety of factors affected the costs negotiated in the 2007 contract, and increased fleet size was one factor that allowed them to negotiate lower rates per system. In addition, when the first fixed-price PBL arrangement was implemented in 2001, the program was fairly new with very few systems, so the program office did not have an extensive amount of historical program data with which to negotiate. Since 2001, the program office has collected actual data that it used to negotiate lower rates in the latest contract. However, according to program officials, the contractor only recently started making changes to the system to improve reliability. These improvements were not included in negotiations for the 2007-2011 contract but have begun to improve failure rates and are expected to reduce costs in future contracts.

Although the Army's Shadow Tactical Unmanned Aircraft System is not supported by a firm-fixed-price PBL contract, program officials for the system said that they were able to reduce support costs by changing the performance metrics used in the PBL arrangement. The maximum amounts authorized in the annual cost-reimbursable PBL contract for the support of this system were reduced by 28 percent from fiscal years 2006 through 2007. According to program officials, a program office review of PBL processes in early fiscal year 2006 concluded that while the PBL arrangement was effective in terms of meeting the performance levels specified in the contract, it was not cost efficient and costs associated with the vehicle's high accident rate were an area of particular concern. In response, the program office changed the performance metrics in the contract to encourage the contractor to improve reliability and reduce the accident rate, and also to improve depot maintenance efficiency.
As the accident rate improved, the program office was able to negotiate for lower support costs in the 2007 PBL contract.

Finally, while the 2005 life cycle business case analysis for the F/A-18 E/F program office estimated that support provided under a PBL arrangement would be more expensive than government-provided support, program officials for the Navy's F/A-18 E/F PBL arrangement have developed a process to track and document support cost reductions attributed to contractor investments to improve reliability and reduce costs. Program officials said that both the Navy and Boeing have funded initiatives to improve F/A-18E/F component reliability, maintainability, and supportability as part of the Supportability Cost Reduction Initiatives program. Under the current fixed-price PBL arrangement, Boeing has invested approximately $11.39 million to fund initiatives that officials currently estimate will generate cost reductions of approximately $279 million over the remaining life cycle of the system. According to program officials, Naval Air Systems Command cost analysts have validated baseline estimates and will annually track the results of the initiatives in order to quantify actual support cost reductions attributed to the investments in the future.

According to program officials, eight of the PBL arrangements within our sample of 29 systems generally achieved a level of performance that significantly exceeded what is required under the contracts. According to the DOD/Defense Acquisition University PBL guide, PBL arrangements should be structured to meet the needs of the warfighter. Therefore, if actual performance exceeds what is called for in the PBL arrangement, it also exceeds the level of performance that is needed. According to program officials, for eight of the PBL arrangements we reviewed, the contractors significantly exceeded some of the contractual performance requirements. For example:

• Since 2002, Army officials said that the average annual operational readiness for the Tube-launched, Optically-tracked, Wire-guided missile – Improved Target Acquisition System has not been below 99 percent, and the system's operational readiness has averaged 100 percent since 2004. According to a program official, the Army's readiness standard for this system is 90 percent. Despite this standard, the Army continued to include a performance incentive that encouraged higher levels of performance when negotiating a follow-on PBL contract in 2007. The performance incentive includes payment of an award fee that encourages operational readiness rates from 91 to 100 percent, with the highest award fee paid for 100 percent average operational readiness.

• According to officials, since early 2005, monthly readiness rates for the Army's Javelin have generally been measured above 98 percent. However, the PBL contract for support of this system only requires 90 percent mission readiness.

• Although the contractual requirement for parts availability for the Navy's V-22 engine PBL arrangement has been 90 percent since 1998, according to program officials, actual parts availability has consistently averaged 98 to 100 percent.

• Similarly, with availability averaging 98 percent since 2004, Air Force program officials for the LITENING Advanced Airborne Targeting and Navigation Pod said that the contractor has consistently exceeded the contract requirement for 92 percent system availability.
For programs where performance significantly exceeded contractual requirements, it is unclear how much extra was paid to get the added performance. Because the government is paying for this excess performance, the arrangement, as structured, may not provide the best value to the government, particularly since there are other DOD programs that are not funded at levels that would be required to achieve their desired level of readiness.

Several characteristics of DOD's PBL arrangements may limit their potential to reduce costs. First, DOD's PBL contracts are limited to relatively short time periods, while proponents of the PBL concept believe that longer-term PBL arrangements are necessary to encourage support providers to make investments to improve reliability. Second, in DOD—where changing requirements and priorities can result in fluctuations in the funding for support of DOD's weapon systems—creating a stable level of funding is challenging. Third, many PBL arrangements only transfer responsibility for inventory management to the contractor and do not transfer inventory ownership, which reduces incentives for ensuring a correctly sized inventory level. Finally, many of DOD's PBL arrangements do not contain cost metrics or offer specific incentives to encourage cost reduction initiatives.

According to program officials, DOD support contracts, including PBL contracts, that are funded by operation and maintenance funds are generally limited to 1 year, and working-capital-funded contracts are generally limited to 5 years, with subsequent option years allowed up to a total of 10 years. However, according to the DOD/Defense Acquisition University PBL guide, longer-term PBL contracts are preferred because a key aspect of PBL is the provision of incentives for contractors to reduce costs over time through increased reliability while still making a profit. Further, contract length should be sufficient to allow for an adequate return on any investments made to improve reliability. Officials from several PBL arrangements cited instances in which reliability improvements were needed but contractors were hesitant to make investments while under annual support contracts. For example, Joint Primary Air Training System program officials said that during the original 10-year PBL arrangement that began in 1996, the contractor did not make any investments to improve unreliable components. Although officials were expecting the fixed-price performance contract to motivate the contractor to invest in improvements to increase reliability and maximize profit, they found that the contractor minimized its own costs during the contract period and passed on the costs to improve the reliability of components with high failures to the government when the contract was renegotiated. Our prior work found that the private sector sometimes used PBL contracts of 10 to 12 years.

Stable requirements and funding, like longer-term contracts, could enable contractors to make reliability improvements and other business decisions, such as long-term supplier arrangements, that could improve performance and reduce future support costs because they have reasonable assurance of future needs. For example, officials representing one of the PBL arrangements we reviewed credited stable funding for much of the program's cost savings.
The F-117 program office estimated that its arrangement would have cost over $80 million more if the Air Force had not agreed to stabilize the program’s support budget and provide the necessary support funding each year of the contract. However, DOD’s requirements and priorities, and related funding, for weapon system support are not always stable. For example, according to Army officials, the Tactical Airspace Integration System’s PBL arrangement was affected by a significant reduction of the program’s support budget. The Army subsequently requested that the Defense Acquisition University study the implications of funding on PBL arrangements and prepare a case study based on this example. In addition, for the last several years some of the Army’s PBL arrangements we reviewed did not receive all of their support funds at the beginning of the fiscal year but rather in increments throughout the year. Program officials for one Army system said that at one point during fiscal year 2005, they almost had to discontinue some of the support provided under the contract because they did not have adequate support funds. Additional funding was eventually made available after the program office notified its major command of the situation. Army program officials said that this funding instability further exacerbates the impact of having short-term contracts, since all of the funds are not available to the contractor to make business arrangements or investments for reliability improvements. Many of the PBL arrangements we reviewed only transferred responsibility for inventory management, not ownership, to the contractor. An analysis by Sang-Hyun Kim, Morris A. Cohen, and Serguei Netessine of the Wharton School, University of Pennsylvania, suggests that the efficiency of a PBL arrangement depends heavily on the asset ownership structure: with higher ownership responsibility, the supplier is more likely to spend more to increase reliability and less for maintaining inventory levels. According to this study, under an arrangement in which the contractor owns and manages the inventory, reliability improvements and inventory costs will both be evaluated in terms of their ability to meet performance metrics and minimize costs. If the PBL arrangement only includes inventory management, higher inventory levels may be used, instead of investments to improve reliability, to meet performance metrics—particularly those that measure availability—since inventory holding costs are not incurred by the contractor. Consequently, under DOD’s PBL arrangements, contractors may choose to make fewer reliability improvements. Finally, many of DOD’s PBL arrangements do not contain cost metrics or offer specific incentives to encourage reduced costs. According to an August 2004 memorandum from the Under Secretary of Defense (Acquisition, Technology and Logistics) regarding performance-based criteria, PBL should be constructed to purchase performance, which is defined in terms of operational availability, operational reliability, cost per unit usage, logistics footprint, and logistics response time. The guidance recommended that PBL metrics be tailored to reflect the unique circumstance of the arrangement, but still support desired outcomes in terms of the five performance criteria. A subsequent 2005 memorandum from the Under Secretary of Defense (Acquisition, Technology and Logistics) directed the use of these metrics as the standard set of metrics for evaluating overall total life cycle systems management. 
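For aviation systems, the cost per unit usage criterion is commonly expressed as a cost per flight hour. As a generic illustration (a simplified formulation, not the specific calculation prescribed by the memorandums or used in the comparisons discussed in this report):

\[
\text{CPFH}_{t} = \frac{C_{t}}{H_{t}},
\]

where \(C_{t}\) is the support cost incurred in year \(t\) (converted to constant-year dollars for year-to-year comparisons) and \(H_{t}\) is the number of flight hours flown in that year. Tracking such a metric over time is one way a program office could tie the cost of a PBL arrangement to the level of usage it supports.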
Some of the aviation PBL arrangements we reviewed negotiated their support on a cost per flight hour basis. For those that did not, cost per flight hour was generally not included as part of the contract performance plan, with the exception of the F/A-18 E/F PBL arrangement. For example, the C-17 program office did not negotiate its contract on a per flight hour basis and does not monitor cost per flight hour as part of its PBL arrangement. None of the nonaviation PBL arrangements we reviewed included cost metrics as part of the PBL arrangement. In addition, only four of the PBL arrangements we reviewed contained incentives for reducing or controlling costs. For example, the F-117 and Shadow Tactical Unmanned Aircraft System PBL arrangements each included a cost-sharing provision where the government and the contractor would share annual savings if actual costs were below negotiated costs. Further, officials said that the award plan for the F-22 PBL arrangement also will consider how actual costs compare to negotiated costs when calculating the amount of award fee the contractor earns at the end of the year.

Although PBL arrangements were included in a DOD pilot program intended to demonstrate the ability of various initiatives to reduce support costs, DOD did not emphasize this goal in its guidance or requirements as it established the concept as the department's preferred weapon system support strategy. In general, improved performance was given greater emphasis, and we found only a few references to cost reduction in DOD's guidance on implementing PBL arrangements. With respect to requirements for cost reporting, DOD and the services do not require that programs using PBL arrangements, or other contractor logistics support arrangements, collect and report detailed cost data in a consistent, standardized format.

Since 2001 DOD's guidance regarding PBL has emphasized higher levels of readiness and stressed rapid implementation. For example, in 2001, when DOD cited PBL as the preferred weapon system support strategy, PBL was described as a strategy for achieving a higher level of system readiness through efficient management and direct accountability. In a 2002 Under Secretary of Defense (Acquisition, Technology and Logistics) memorandum, the services were instructed to prepare PBL implementation plans that aggressively pursue the earliest feasible program implementation end dates. A January 2004 Under Secretary of Defense (Acquisition, Technology and Logistics) memorandum stated that PBL was the department's near-term strategy to increase weapon system readiness through integrated logistics chains and public/private partnerships. The memorandum contained guidance to implement PBL where economically feasible and provided guiding principles for a best-value assessment. The following month a Deputy Secretary of Defense memorandum again directed the services to provide plans for aggressively implementing PBL arrangements. In contrast to DOD's clearly stated goal to reduce support costs in the late 1990s, we found few references to the potential for PBL to reduce support costs since 2001.
DOD guidance generally only indirectly refers to potential PBL cost reductions to "compress the supply chain" and "reduce non-value added steps." In May 2003, DOD Directive 5000.1, The Defense Acquisition System, was updated to emphasize that program managers shall implement PBL strategies "that optimize total system availability while minimizing cost and logistics footprint." In March 2004, an Under Secretary of Defense (Acquisition, Technology and Logistics) memorandum reiterated that PBL was the preferred strategy and provided criteria on which to assess potential for PBL application. One of the criteria stated that the cost per operational unit of performance (such as a flying hour) should be capable of being reduced through PBL implementation. Finally, in 2005, the DOD/Defense Acquisition University PBL guide contained several references to the potential for PBL to improve reliability and reduce costs.

Program offices often lacked detailed and standardized weapon system support cost data because DOD has not required them to obtain and report cost data from the contractors that provide such support, including those involved in PBL arrangements. According to the OSD Office of Program Analysis and Evaluation, historical operating and support costs, organized in a standard format, are necessary for preparation of life cycle cost estimates for new systems, budget formulation, analysis of working capital funds, development of business case analyses, and future contract negotiations. Until 2007, DOD's guidance for structure of support cost estimates, which is also suggested as a defined presentation format for historical operating and support costs, included all contractor support—labor, materials, overhead, and other assets—in one category, while government-provided support was reported in greater detail among multiple categories and lower-level subcategories. Therefore, amounts paid for contractor support were generally reported in the aggregate. In October 2007, DOD changed its guidance to include a more detailed presentation of contractor support costs in the various categories, similar to the reporting of government support costs. However, neither DOD nor the services have required program offices to obtain or report contractor support costs, including PBL arrangements with contractors, in this format.

OSD and service officials are beginning to recognize the need for further visibility of the costs of support provided by contractors. In late 2006, OSD's Office of Program Analysis and Evaluation began a study regarding the collection of contractor support costs because the department acknowledged that visibility into these costs in DOD's systems was generally limited. Many of the programs studied were PBL arrangements also included in our sample. OSD's study also found that program offices often did not have detailed cost data and, if cost data were provided, the data often did not conform to, or could not be converted to, the standard support cost structure. Based on the study results, OSD is considering requiring contractors to report their actual costs for providing logistics support, including profit and general and administrative expenses, in DOD's standard cost structure. However, the details of the requirement and which programs will be subject to such reporting have not been finalized. Similarly, Air Force officials have also recognized the limitations on visibility into contractor support costs for weapon systems.
The Air Force is currently considering expanding visibility by requiring that all contractor-supported programs report actual obligations for contractor labor and materials (including PBL arrangements) in each of DOD's cost structure categories for each aircraft mission design series. According to Air Force Cost Analysis Agency officials, this requirement is different from the one being considered by OSD in that the Air Force will have visibility over the Air Force's costs for contractor support but not the contractor's actual costs.

The United Kingdom's Ministry of Defence also uses performance-based arrangements to support its weapon systems. Ministry of Defence officials refer to this initiative as contracting for availability. Similar to DOD, when using availability contracts the Ministry of Defence pays industry for aircraft, engines, or components to be available for military operations, rather than paying for specific repairs, spares, and technical support. According to officials, the use of contracting for availability also started as an approach for reducing costs for weapon system support. Ministry of Defence officials said that their current contracts for availability generally provide support for aviation systems, such as helicopters and combat aircraft. Although there are maritime availability contracts, they said that most of the ministry's maritime availability contracts support specific types of equipment rather than entire ships. In general, the availability contracts used by the ministry are significantly longer than those used by DOD, and the ministry uses an "open book accounting" arrangement to gain visibility into the contractors' costs to provide support. According to officials, the annual budget for the Defence Equipment and Support organization is approximately £13 billion, including funds for conflict operations.

In 1999, the United Kingdom's Defence Logistics Organisation, one of two entities that merged into the current Defence Equipment and Support organization, established a goal to reduce costs 20 percent by 2005/2006. According to Ministry of Defence officials, contracting for availability began during this period as a way to maintain or improve performance while assisting in achieving cost reductions. They believe that if industry is paid for a given level of availability, there are incentives to reduce support chain costs and make the weapon system more reliable and processes more efficient. The cost reduction goal was a key driver in the transformation of the maintenance, repair, and overhaul activity for Harrier and Tornado fast jet aircraft. A member of the Tornado Integrated Project Team stated that a number of factors drove the support strategy change for the Tornado aircraft, but the primary factor was the need to reduce costs to match budget reductions; the team identified availability contracting as an effective way to reduce costs and maintain performance. Officials also stated that the support strategies for all of the ministry's helicopters were changed because of increased budget pressures. In 2007, the United Kingdom's National Audit Office reported that the Ministry of Defence has experienced significant reductions in the costs to support its fast jets; the Tornado and Harrier costs have been reduced from a total of £711 million in 2001 through 2002 to £328 million in 2006 through 2007, providing a cumulative saving of some £1.4 billion over the 6-year period.
The National Audit Office reported that the savings were achieved by working with industry to reform traditional contracts into availability contracts. However, the report also stated that the ministry did not have sufficient data to assess the impact of changes in the pattern of frontline operations and productivity increases from the use of lean techniques on total costs. National Audit Office officials with whom we met confirmed that while they could validate overall cost reductions, they could not attribute the entire savings solely to the use of availability contracts. Other related initiatives, such as the reorganization and reduction of locations for aircraft repair and upgrade, the use of lean techniques, and the use of reliability-centered maintenance, also contributed to the support cost reductions.

Ministry of Defence officials said that they do not require the use of availability contracts or promote their use as the preferred strategy. According to officials, the support strategy can and should vary from system to system depending on the circumstances; in some cases, it may be appropriate for government activities to support some systems in the traditional manner and for others to use contracting for availability. To assist with the decision-making process, the Defence Equipment and Support organization developed a "support options matrix" for use in reviewing current and future support arrangements. Officials said that the matrix was developed to help analyze the cost and performance drivers of each component of support, to illustrate a range of support options differentiated by the gradual transfer of those drivers into industry management, and to present a clear rationale for each support chain design in terms of the benefits expected from that transfer. In addition to the matrix, a contractor capability assessment is also completed to determine the ability of industry to assume greater management responsibility. Finally, according to officials, before they enter into a contract for availability, two additional analyses are conducted. The first is an investment appraisal, or an "internal value benchmark," which calculates the lowest cost at which the service could be provided by the government. The second is a business case analysis, which discusses the different proposals and justifies the selection of the proposed approach. Officials noted that the proposed approach does not have to be the lowest-cost option, but is usually the option that offers the best value solution overall.

In its 2007 report, the National Audit Office indicated that internal value benchmarks were not developed consistently and recommended development of improved guidance and consistent application of a common methodology for benchmarks against which to assess the value of proposed availability contracts. National Audit Office officials said that they found variance in the quality of these cost estimates and a shortage of qualified people for cost modeling. They also pointed out that as less and less support is provided by the government, accurate cost modeling for use when renegotiating contracts will become more important, and the Ministry of Defence needs to maintain or improve visibility of support costs for its weapon systems.
Defence Equipment and Support officials said that they have found the long-term nature of availability contracts a key factor in reducing costs and that annual contracts cannot achieve the same benefits as the longer-term contracts do. According to officials, the long-term contracts for Tornado aircraft and helicopter fleets reduced costs because the contractors were able to stabilize their supply chain and obtain better prices from the supplier base. The Ministry of Defence also found that industry preferred long-term contracts. In a discussion of contracting for availability, the "Defence Industrial Strategy," a white paper dated December 2005, stated that companies are generally interested in using availability contracts because they provide the commercial firms with greater returns over a longer period. Ministry of Defence officials provided us with the following examples of their long-term availability contracts:

• The Ministry of Defence has a 10-year contract with AgustaWestland to support the Sea King helicopter until it is projected to be removed from service. The Ministry of Defence has priced the contract for the first 5 years, and thereafter it will establish the price in 5-year increments.

• The Ministry of Defence has a 23-year contract with VT Group to support two survey ships owned by the ministry. The contract has price renegotiation points at 7, 15, and 20 years.

• The Ministry of Defence has a 19-year contract with BAE to support the fleet of Tornado aircraft. The ministry awarded the contract in December 2006 and priced it for the first 10 years.

• The Ministry of Defence has a 25-year contract with AgustaWestland to support the Merlin helicopter until it is projected to be removed from service. The price for the initial 5-year period of the contract is fixed, and the ministry is currently negotiating prices for the next 5-year period of performance that begins in 2011.

Ministry of Defence officials said that other factors, such as inventory ownership, contract incentives, and cost visibility, were also important when contracting for availability. Officials told us that they preferred to transfer not only management of inventory but also inventory ownership under such arrangements. They noted that under some of their current availability contracts this had not been possible for a variety of reasons. Nonetheless, in the future they intend to pursue transfer of inventory ownership as much as possible. Examples of Ministry of Defence availability contracts where officials said that inventory is owned by industry, also known as spares inclusive, include a contract for support of two survey ships. In addition, according to ministry officials, several of the availability contracts—including those supporting the Sea King and Merlin helicopters and Tornado fast jets—had incentives referred to as gain share or pain share. In these types of arrangements, the contractor and government share cost savings or cost overruns in prenegotiated proportions. According to officials, they found that these types of metrics are useful to influence contractor cost control measures and provide an incentive for industry to develop changes and modifications that reduce support costs. Officials familiar with the Tornado fast jet availability contract explained that their arrangement included gain sharing and pain sharing on both the variable and fixed-price portions of the contract.
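As a generic illustration of how a gain share/pain share provision can operate (the share ratio and target cost below are hypothetical and are not the terms of the Tornado or any other Ministry of Defence contract):

\[
\text{Price paid} = C_{\text{target}} - s_{g}\,\bigl(C_{\text{target}} - C_{\text{actual}}\bigr),
\]

where \(C_{\text{target}}\) is the negotiated target cost, \(C_{\text{actual}}\) is the contractor's actual cost, and \(s_{g}\) is the government's prenegotiated share of any difference. If actual costs come in below the target, the government recovers its share of the underrun and the contractor keeps the remainder as additional margin (gain share); if actual costs exceed the target, the same formula requires the contractor to absorb its share of the overrun (pain share). For example, under a hypothetical 50/50 split, a £10 million underrun would reduce the price paid by £5 million and leave £5 million with the contractor.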
Finally, officials explained that in many of the Ministry of Defence's availability contracts, the concept of open book accounting is employed. Open book accounting is not a defined term but is more of a general expression describing a level of access to accounting data that would not normally be available under a conventional contract. In availability contracts, open book accounting allows government program officials to review the accounting records of the contractor. This access is not without limits. Officials said that the level of access must be agreed to in advance on a case-by-case basis and reflects the circumstances of the arrangement and the need for access to certain data to monitor performance or benefits arising from the arrangement. For example, one contract may only provide for man-hour data because that is all that needs to be shared given the circumstances. However, another contract may allow access to direct cost, direct labor hours, and other rates and factors that are relevant for the work involved. According to officials, the Ministry of Defence has an open book accounting agreement with AgustaWestland for the Merlin contract and the government has full visibility of the accounts pertaining to Merlin, including overhead costs. The contract must explicitly address the data access arrangements and not rely on vague and undefined phrases that could be open to misinterpretation.

According to the 2007 National Audit Office report, long-term availability contracts may limit flexibility to respond to changes in resources. In the past, integrated project team leaders in the Ministry of Defence had some ability to move funding between resource lines to overcome short-term funding issues. However, this flexibility is diminishing because of the transition to availability contracts, as larger portions of the budget are pre-allocated to fund these contracts. The Mine Warfare Patrol and Hydrographic Team also raised concerns about loss of budget flexibility. This team is responsible for providing support for 2 hydrographic ships, 1 patrol ship (HMS Clyde), 3 River class ships, 16 mine hunters, and 38 smaller ships. The budget for providing support to these ships is approximately £40 million, with £18 million devoted to the long-term availability contracts for the 2 survey ships, 1 patrol ship, and 3 River class patrol ships. According to Ministry of Defence officials, these arrangements have for the most part been beneficial. However, as they are structured, these programs do not allow for any flexibility. When the Mine Warfare Patrol and Hydrographic Team recently had to absorb a 20 percent budget cut, officials said that the mine hunter ships bore the brunt of the cut because they had the majority of the remaining support budget not earmarked for an availability contract. The team views the 20 percent cut to its budget to be, effectively, a 40 percent cut to the mine hunter ship budget.

Defence Equipment and Support organization officials said that they are looking to add more flexibility to future availability contracts. The Ministry of Defence has already incorporated some flexibility in a few availability contracts. Officials said that the Tornado contract contains both fixed-price elements for management team, training, logistics, and information systems and a variable price element for flying hours.
Given this, the contract is fairly flexible, and payment is based on certain flying hour availability bands—with the bands ranging from 70 to 110 percent availability in 10 percent increments that are agreed to annually. As another example, officials explained that the Merlin contract provides flexibility in that the prenegotiated price is linked to banded flying hours with fixed and variable elements. Under traditional contracting, they estimate that only 20 percent of the cost would vary with flying hours. Also, within the basic contract parameters there is a provision for surge delivery for the Merlin helicopter. Finally, according to officials, the Sea King helicopter support contract has a similar flexibility because there are a number of flying hour bands and each band has its own price. In this manner, the Ministry of Defence can increase or decrease flying hours without renegotiating the contract. Officials pointed out that one drawback is that the price charged per flying hour at the lower bands is higher because the contractor must cover its fixed costs across fewer flying hours. However, they said that the cost per flying hour is still far less than it would have been under a more flexible traditional arrangement. While PBL support arrangements were conceived as a strategy to reduce costs as well as improve performance, DOD’s emphasis has been more on performance and less on cost. DOD no longer emphasizes reducing costs as a goal for PBL programs, and DOD’s implementation of PBL, in its current form, does not ensure that its PBL arrangements are cost effective. DOD’s emphasis on implementing PBL as the preferred weapon system support strategy has deemphasized the development of consistent, comprehensive, and sound business case analyses to inform decisions regarding the use of a PBL arrangement. Although DOD’s guidance recommends using business case analyses to guide decisions about using PBL arrangements for weapon system support, the DOD guidance does not require these analyses, and almost half of the programs we reviewed either did not perform a business case analysis or did not retain documentation of their analysis. Further, the quality of the analyses of those programs that had performed a business case analysis varied considerably since many were missing elements of what DOD guidance recommends for sound economic analyses. Additionally, most of those analyses that should have been updated had not been. Thus, DOD lacks a sound approach for analyzing whether proposed PBL arrangements are the most cost-effective strategy for supporting weapon systems. Without instituting a more consistent, comprehensive, and sound process on which to base decisions regarding the type of arrangement to be used in supporting DOD systems, it is unlikely that the department will be successful in achieving the sizable savings that were envisioned when the PBL concept was adopted. Assessing the cost-effectiveness of PBL programs also requires the availability of better cost data at a level of detail that would support the improved management of ongoing PBL programs, including awarding contract fees, assessing performance versus the cost to achieve it, evaluating historical costs to determine whether the status quo should be maintained over time, and making support decisions about future follow-on programs.
Such data are usually not available for PBL programs, limiting the ability of program offices to make program adjustments or take restructuring actions when appropriate. Nonetheless, a few program offices have acquired data at this level and indicate that they obtained them in a cost-effective manner. Improved access to detailed cost data is another essential element in improving the quality of data available to DOD decision makers regarding the cost-effectiveness of PBL arrangements. To ensure that PBL arrangements are the most cost-effective option for weapon system support, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology and Logistics) to take the following five actions: revise DOD’s Acquisition Directive to require development of a business case analysis to support the decision-making process regarding weapon system support alternatives, including PBL; revise PBL business case analysis guidance to more clearly define what should be included in a business case analysis and to establish specific criteria and methods for evaluating PBL support arrangements, including evaluation at the subsystem and component levels; revise PBL business case analysis guidance to more clearly define when business case analyses should be updated during the weapon system life cycle; require that each service revise guidance to implement internal controls to ensure that program offices prepare and update business case analyses that are comprehensive and sound; and require program offices to collect and report cost data for PBL arrangements in a consistent, standardized format with sufficient detail to support traditional cost analysis and effective program management. In written comments to a draft of this report (see app. II), DOD generally concurred with our five recommendations, noting that the department is committed to evaluating PBL strategies using business case analyses as part of the overall supportability assessment made during the development stages of weapon system acquisition programs. Specifically, the department fully concurred with three recommendations and partially concurred with two. DOD fully concurred with our first recommendation to revise DOD’s acquisition directive to require the development of a business case analysis to support the decision-making process regarding weapon system support alternatives, including PBL. DOD stated that the department will take steps to address this issue in the next iteration of the DOD Directive 5000.1 and DOD Instruction 5000.2 acquisition regulations. According to DOD’s response, this new policy will require that the use of a business case analysis be mandatory and that this analysis serve as a sound basis for the selected supportability strategy. In response to our second recommendation to revise PBL business case analysis guidance to clearly define what should be included in a business case analysis and to establish specific criteria and methods for evaluating PBL support arrangements, DOD partially concurred, stating that it established a Life Cycle Product Support Assessment Team in September 2008 to study product support policy, guidance, past performance, and results. As part of the study, existing business case analysis policy is being reviewed, and the department will evaluate the team’s recommendations on providing specific criteria and methods for evaluating support arrangements and determine how best to incorporate these recommendations into mandatory policy. 
The team’s initial recommendations are expected in April 2009. DOD fully concurred with our third recommendation to revise PBL business case analysis guidance to more clearly define when, during the weapon system life cycle, business case analyses should be updated. According to DOD’s response, the department’s Life Cycle Product Support Assessment Team will evaluate the appropriate timing of initial business case analyses and follow-on updates to validate the life cycle support approach for weapon systems, and the team’s recommendations will be evaluated for inclusion into mandatory policy. DOD fully concurred with our fourth recommendation to require that each service revise guidance to implement internal controls to ensure that program offices prepare and update business case analyses that are comprehensive and sound. As we noted in our report, the Army has already implemented a PBL business case analysis review and approval process. DOD stated that the Army’s internal controls will be reviewed by the Life Cycle Product Support Assessment Team, which will make recommendations for expansion for DOD-wide governance policy as part of the team’s overall recommendations expected in April 2009. DOD partially concurred with our fifth recommendation to require program offices to collect and report support cost data for PBL arrangements in a consistent, standardized format with sufficient detail to support traditional cost analysis and effective program management. DOD stated that a provision for tailored cost reporting for major acquisition programs designed to facilitate future cost estimating and price analysis has been included in the draft DOD Instruction 5000.2, which is expected to be approved in the next 30 days. Additionally, the Life Cycle Product Support Assessment Team is reviewing support cost reporting and cost analysis as a part of its ongoing study. According to DOD’s response, the ultimate goal is standardized support cost reporting for all life cycle product support efforts, to include support provided by government activities. While concurring with our recommendations, DOD’s response noted that the department disagrees with the assertion that the goal of PBL arrangements is to reduce costs. Rather, the primary goal of PBL arrangements is to increase readiness and availability while reducing overall sustainment costs in the long run. Our report recognized that the current DOD Directive 5000.1 provides that PBL arrangements shall optimize total system availability. However, our report notes that this directive also provides that PBL arrangements shall minimize costs and the logistics footprint. Moreover, our report stated that PBL emerged from a 1999 DOD study to test logistics reengineering concepts that placed greater reliance on the private sector for providing weapon system support to both reduce support costs and improve weapon system performance. Thus, reducing costs was a central focus of the adoption of PBL as DOD’s preferred support strategy. Based on our analysis in this report, we continue to believe that the PBL support arrangement concept was intended to be a cost reduction strategy as well as a strategy that would result in improved performance. DOD’s response also noted that 22 of the 29 programs we reviewed produced business case analyses that enabled sound support strategy determinations. 
DOD further stated that for 28 of the 29 programs, the PBL strategies produced performance benefits, readiness benefits, or both, and 15 of the programs reflect cost-neutral or savings benefits resulting from the application of the PBL strategies. However, based on our analysis in this report, we continue to believe that only 20, rather than 22, of the programs had business case analyses that evaluated PBL strategies. Further, as we stated in our report, 6 of these did not retain some or all of the documentation and 13 were missing elements of DOD’s criteria for economic analyses. For example, we found that for one analysis, the less costly option would have changed if the department had calculated the net present value of the two options considered. Additionally, because the department did not document all the potential support options in the business case analyses, it is not possible to determine if the most cost-effective options were chosen. Thus, we continue to question the extent to which these analyses enabled sound support strategy determinations. Finally, while we recognize that the PBL arrangements may have produced performance benefits, readiness benefits, or both, the absence of updated business case analyses and detailed cost data precluded an assessment of support costs. Therefore, it is unclear how many of the programs may have actually had cost-neutral or savings benefits resulting from PBL strategies. We continue to believe that improvements in collection and reporting of support cost data and the updating of business case analyses are essential if DOD is to determine the cost-effectiveness of its PBL arrangements. We are sending copies of this report to interested congressional committees and the Secretary of Defense. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to the report are listed in appendix III. To evaluate (1) the extent to which the Department of Defense (DOD) used business case analyses to guide decisions regarding performance based logistics (PBL) arrangements and (2) the impact PBL arrangements have had on weapon system support costs, we selected a nonprobability sample of 29 PBL arrangements for weapon system support initiated from 1996 through 2007. The 29 PBL arrangements were selected from lists of weapon systems supported by PBL arrangements provided by service officials. With the exception of the Navy’s, we found that the lists provided by the services either were not current or contained inaccuracies, and the content of the lists changed significantly during the course of our review, which affected our sample selection. We chose system-, subsystem-, and component-level PBL arrangements from each of the services based on length of time since implementation, location of program office, dollar value, and prior audit findings. The 29 PBL arrangements we selected constitute a nonprobability sample, and the results are not generalizable to the population of PBL arrangements. To evaluate the extent to which DOD used business case analyses to guide decisions regarding PBL arrangements, we interviewed officials regarding DOD and service requirements, policies, and guidance for business case analyses since 2001 and reviewed applicable documents.
We also reviewed DOD’s 1995 economic analysis instruction, which states that analytical studies that evaluate the cost and effectiveness of weapon system support are considered to be “economic analyses,” and determined that the guidance is consistent with Office of Management and Budget guidance for benefit-cost analyses of federal programs. We interviewed program officials to discuss any business case analyses prepared to evaluate the 29 PBL arrangements before or after PBL implementation and examined the analyses using the criteria contained in DOD’s economic analysis guidance. To evaluate the impact that PBL arrangements have had on weapon system support costs, we interviewed program officials to discuss the characteristics of the PBL arrangements, including contract length, contract type, scope of work, performance measures, performance incentives or disincentives, and cost data availability. In addition, we asked program officials to identify support cost reductions that occurred as a result of PBL implementation. If a program had renewed a fixed-price PBL arrangement or had finalized contract options that were not priced, we analyzed the contracts for trends in PBL support costs. We also compared PBL contract costs to estimated PBL support costs in business case analyses, where available, to determine how closely the estimates matched the actual PBL arrangement costs. We also relied on previously issued GAO reports on DOD’s implementation of PBL. To analyze the use of availability contracts for weapon system support by the United Kingdom’s Ministry of Defence, we interviewed officials from the Defence Equipment and Support organization regarding policies or requirements for availability contracts and trends regarding the use of these arrangements. We also interviewed officials from programs identified by the Ministry of Defence as using availability contracts for weapon system support to identify the characteristics of the specific arrangements and the impact that the use of these contracts had on support costs. In addition, we interviewed National Audit Office officials who reviewed the cost and performance of two availability contracts for support of fast jets. Finally, we reviewed audit reports and other documents from the Ministry of Defence and National Audit Office. We obtained these data for informational purposes only and did not independently verify the statements or data provided by Ministry of Defence and National Audit Office officials. Specifically, in performing our work we interviewed officials and obtained documents related to PBL at the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics), the Office of the Assistant Secretary of the Navy (Research, Development and Acquisition), the Office of the Assistant Secretary of the Air Force (Acquisition), the Office of the Assistant Secretary of the Air Force (Installations, Environment and Logistics), the Office of the Assistant Secretary of the Army (Acquisition, Logistics and Technology), the Marine Corps Headquarters, the U.S. Army Materiel Command, the U.S. Army Aviation and Missile Command, the U.S. 
Army Communications and Electronics Command, the Naval Sea Systems Command, the Naval Air Systems Command, the Naval Center for Cost Analysis, the Air Force Cost Analysis Agency, the Air Force Directorate of Economics and Business Management, the Air Force Materiel Command, the Air Force Aeronautical Systems Center, the Oklahoma City Air Logistics Center, the Warner Robins Air Logistics Center, the Ogden Air Logistics Center, the United Kingdom Ministry of Defence, and the United Kingdom National Audit Office. We conducted this performance audit from February 2007 through December 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Julia Denman, Assistant Director; Harold Brumm; Matt Dove; Jennifer Echard; Chaneé Gaskin; Tom Gosling; Jennifer Jebo; Mae Jones; Kevin Keith; Charles Perdue; Janine Prybyla; and Karen Thornton made major contributions to this report.
In 2001, the Department of Defense (DOD) identified performance based logistics (PBL) as the preferred weapon system support strategy. Within DOD, PBL is the purchase of performance outcomes, such as system availability, rather than the purchase of individual elements of logistics support--such as parts, repairs, and engineering support. Although PBL initially arose from efforts to reduce support costs, questions have arisen about whether PBL has reduced support costs as originally intended. GAO was asked to evaluate the extent to which DOD has used business case analyses to guide decisions related to PBL arrangements and the impact PBL arrangements have had on weapon system support costs. In conducting the review, GAO analyzed the implementation of PBL arrangements for 29 weapon system programs. GAO also looked at the use and characteristics of performance-based contracting in the United Kingdom's Ministry of Defence. Although DOD's guidance recommends that business case analyses be used to guide decision making regarding the implementation of PBL to provide weapon system support, the services are not consistent in their use of such analyses. About half of the DOD program offices responsible for the 29 PBL arrangements GAO reviewed either did not use a business case analysis or could not provide documentation for significant parts of their analyses. Almost all of the remaining analyses were missing one or more of the recommended elements in DOD's instruction for economic analysis. Finally, business case analyses were often not updated in accordance with service policies and guidance. Program office use of these analyses is inconsistent because DOD only recommends, but does not require, that they be prepared and because DOD's guidance on preparing a business case analysis is not comprehensive and does not adequately specify the criteria to be included. Also, most of the services have not established effective internal controls to ensure that the analyses are prepared or that they provide a consistent and comprehensive assessment. As a result, DOD has implemented PBL arrangements without the benefit of sound analyses that ensure that the chosen approach will provide the most cost-effective support option. While one of DOD's goals in moving toward the use of PBL arrangements was to reduce weapon system support costs, the ability of these arrangements to reduce costs remains unclear 7 years after DOD first identified PBL as the preferred weapon system support strategy. Many DOD program offices that implemented PBL arrangements have limited cost data, and various other factors--such as the lack of business case analyses--further limit an evaluation of the costs of this support strategy. Available data from the programs GAO reviewed indicated mixed results. Although a few programs in GAO's sample provided evidence of some cost reductions, GAO's analysis of the only two systems in its sample that are managed using both a PBL arrangement and a more traditional, non-PBL arrangement indicated that in both cases the PBL arrangement had higher costs. Also, GAO found that certain characteristics of DOD's PBL arrangements--contract length, funding stability, ownership of inventory, and the lack of cost metrics and effective incentives--could limit the ability of and incentive for contractors to reduce support costs. 
Neither DOD nor the services require detailed cost reporting for PBL arrangements and the lack of detailed cost data hinders DOD's ability to determine whether PBL has reduced support costs as intended. GAO describes the use of performance-based arrangements for weapon system support in the United Kingdom's Ministry of Defence, which the ministry refers to as contracting for availability. The Ministry of Defence began awarding availability contracts as an approach to reduce weapon system support costs, and officials believe that support cost reductions have been achieved as a result of using availability contracts. In general, the availability contracts used are significantly longer than those used by DOD, and the ministry uses an "open book accounting" arrangement to gain visibility into the contractors' costs to provide support.
Each year, millions of visitors, foreign students, and immigrants come to the United States. Visitors may enter on a legal temporary basis—that is, with an authorized period of admission that expires on a specific date—either with temporary visas (generally for tourism, business, or work) issued by the Department of State or, in some cases, as tourists or business visitors who are allowed to enter without visas. The latter group includes Canadians and qualified visitors from 27 countries who enter under the Visa Waiver Permanent program. The large majority of these visitors depart on time, but others overstay. Our definition of an overstay in this testimony is specifically this: An overstay is a foreign visitor who is legally admitted to the United States for a specific authorized period and remains in the United States after that period expires, unless an extension or a change of status has been approved. Although overstays are sometimes referred to as visa overstays, this is technically a misnomer for two reasons. First, a visitor can overstay the authorized period of admission set by the DHS inspector at the border while still possessing a valid visa. (For example, a visitor with a 6-month multiple-entry visa from the Department of State might be issued a 6-week period of admission by the DHS inspector and remain here for 7 weeks, thus overstaying.) Second, some visitors are allowed to enter the United States without visas and to remain for specific periods of time, which they may overstay. Form I-94 is the basis of the current overstay tracking system. For visitors from most countries, the period of admission is authorized (or set) by a DHS inspector when they enter the United States legally and fill out this form. Each visitor is to give the top half to the inspector and to retain the bottom half, which should be collected on his or her departure. When visiting the United States for business or pleasure, two major groups are exempt from filling out an I-94 form: Mexicans entering the United States with a Border Crossing Card (BCC) at the Southwestern border who intend to limit their stay to less than 72 hours and not to travel beyond a set perimeter (generally, 25 miles from the border) and Canadians admitted for up to 6 months without a perimeter restriction. Thus, the majority of Canadian and Mexican visits cannot be tracked by the current system, because the visitors have not filled out Form I-94. Tracking should be possible for almost all other legal temporary visitors, including visitors from visa waiver countries, because they are required to fill out the form. Terrorists might be better prevented from legally entering the United States if consular officials and DHS inspectors used improved watch lists to screen visa applicants and make border inspections. However, some terrorists may continue to slip through these border defenses. Keeping all dangerous persons and potential terrorist suspects from legally entering the United States is difficult because some do not match the expected characteristics of terrorists or suspicious persons; in addition, some may not be required to apply for visas (that is, citizens of Canada or one of the 27 visa waiver countries). Watch lists have been improved somewhat since 9/11, but further improvements are needed. For example, earlier this year we reported that the State Department “with the help of other agencies, almost doubled the number of names and the amount of information” in its Consular Lookout and Support System.
We also reported that “the federal watch list environment has been characterized by a proliferation of [terrorist and watch list] systems, among which information sharing is occurring in some cases but not in others.” In this testimony today, we focus primarily on an overstay’s illegal presence within the United States and the potential consequences for domestic security. Viewed in terms of individuals, the overstay process can be summarized as aliens’ (1) legally visiting the United States, which for citizens of most nations is preceded by obtaining a passport and a visa and requires filling out Form I-94 at the U.S. border; (2) overstaying for a period that may range from a single day to weeks, months, or years; and, in some cases, (3) terminating their overstay status by exiting the United States or adjusting to legal permanent resident status (that is, obtaining a green card). Beyond that, the overstay process can be viewed more broadly in the context of our nation’s layered defense. For example, figure 1 illustrates many issues in this defense that we have analyzed in numerous reports—ranging from overseas tracking of terrorists to stateside security for critical infrastructure locations and aviation. Significant numbers of visitors overstay their authorized periods of admission. A recent DHS estimate put the January 2000 resident overstay population at 1/3 of 7 million illegal immigrants, or 2.3 million. The method DHS used to obtain the 1/3 figure is complex and indirect, and we plan to evaluate that estimate further. However, the 2.3 million overstay estimate excludes specific groups, and we believe, therefore, that it potentially understates the extent of overstaying. By definition, DHS’s estimate of 2.3 million overstays as of January 2000 represents only a part of the total overstay problem. DHS’s estimate of 7 million illegal immigrants is limited to illegals who settled and were residing here at the time of the 2000 census. It includes only overstays who were in the actual census count or included in corrections for possible undercounts of illegal immigrants. DHS’s estimate of overstays as of January 2000 is not defined to include the following groups:

a. Visitors filling out Form I-94 who overstay for short periods of time. Many such persons are not likely to be included in the 2000 census, which is the starting point of DHS’s 2.3 million estimate of the resident overstay population. In our ongoing work, we will examine indicators of the magnitude, and significance, of short-term overstaying among visitors who fill out I-94 forms.

b. Mexican and Canadian visitors not filling out Form I-94 who overstayed and settled here. Overstays in this group are included in DHS’s estimate of 7 million illegal immigrants, but they are categorized as illegal immigrants other than overstays. This is because DHS used I-94 data from the early 1990s and projected these data forward to obtain the 1/3 overstay proportion.

c. Mexican and Canadian visitors not filling out Form I-94 who overstay for short periods. As indicated above, many short-term overstays are not included in the 2000 census, which is the starting point of DHS’s 2.3 million estimate of the resident overstay population.

These groups are illustrated in figure 2. In part because of coverage issues, the extent of overstaying has not been definitively measured. In addition, the accuracy of DHS’s estimate of the resident overstay population is not known with precision. Other limited data points may help illustrate the possible magnitude.
For this testimony, we obtained two small-sample sources of data. First, we identified a government-sponsored survey, reported in 2002, that had (1) sampled more than 1,000 adult green-card holders, (2) asked them about their prior immigration status, and (3) found that more than 300 respondents self-reported prior illegal status. From the computer run we requested, we found that of the roughly 300 former illegals, about 1/3 said they were former overstays, with most of the remaining 2/3 reporting prior illegal border crossing. Second, we obtained data from Operation Tarmac, the 2001–03 sweep of airport employees who had access to sensitive areas. Although Operation Tarmac investigators had collected information on overstaying, they did not systematically record data for overstays versus illegal border crossers. We requested that DHS manually review a sample of case files and identify overstays. DHS reported to us that of 286 sampled cases in which illegal immigrant airport workers (that is, overstays and illegal border crossers) were arrested or scheduled for deportation, 124 workers, or about 40 percent, were overstays. While both the survey data and the airport data represent rough small-sample checks, they provide some additional support for concluding that overstays are not rare. One weakness in DHS’s system for tracking the paper Form I-94—its limited coverage of Mexican and Canadian visitors—was discussed in the section above. In our previous work, we have pointed to at least three other weaknesses in this tracking system:

Failure to update the visitor’s authorized period of admission or immigration status. We reported earlier this year that DHS does not “consistently enter change of status data . . . [or] integrate these data with those for entry and departure.” DHS told us that linkage to obtain updated information may occur for an individual, as when a consular official updates information on an earlier period of admission for someone seeking a new visa, but DHS acknowledged that linkage cannot be achieved broadly to yield an accurate list of visitors who overstayed.

Lack of reliable address information and inability to locate visitors. Some visitors do not fill in destination address information on Form I-94 or they do so inadequately. A related issue that we reported in 2002 is DHS’s inability to obtain updated address information during each visitor’s stay; such information could be a valuable addition to the arrival, departure, and destination address information that is collected.

Missing departure forms. We reported in 1995 that “airlines are responsible for collecting . . . departure forms when visitors leave . . . . But for some visitors who may have actually left the United States [there was no] record of the departures.” DHS acknowledges that this is still a concern, that the situation is analogous for cruise lines, and that noncollection is a larger problem for land exits.

Our recent work has also drawn attention to identity fraud, demonstrating how persons presenting fraudulent documents (bearing a name other than their own) to DHS inspectors could enter the United States. Visitors whose fraudulent documents pass inspection could record a name other than their own on their I-94 form. In our current work, we have identified two further weaknesses in the tracking system. One weakness is the inability to match some departure forms back to corresponding arrival forms.
DHS has suggested that when a visitor loses the original departure form, matching is less certain because it can no longer be based on identical numbers printed on the top and bottom halves of the original form. The other weakness is that at land ports (and possibly airports and seaports), the collection of departure forms is vulnerable to manipulation—in other words, visitors could make it appear that they had left when they had not. To illustrate, on bridges where toll collectors accept I-94 departure forms at the Southwestern border, a person departing the United States by land could hand in someone else’s I-94 form. Because of these weaknesses, DHS has no accurate list of overstays to send to consular officials or DHS inspectors. This limits DHS’s ability to consider past overstaying when issuing new visas or allowing visitors to reenter. More generally, the lack of an accurate list limits prevention and enforcement options. For example, accurate data on overstays and other visitors might help define patterns to better differentiate visa applicants with higher overstay risk. And without an accurate list and updated addresses, it is not possible to identify and locate new overstays to remind them of penalties for not departing. Such efforts fall under the category of interior enforcement: As we previously testified, “historically . . . over five times more resources in terms of staff and budget [have been devoted to] border enforcement than . . . interior enforcement.” Despite large numbers of overstays, current efforts to deport them are generally limited to (1) criminals and smugglers, (2) employees identified as illegal at critical infrastructure locations, and (3) persons included in special control efforts such as the domestic registration (or “call in” component) of the NSEERS program (the National Security Entry and Exit Registration System). DHS statisticians told us that for fiscal year 2002, the risk of arrest for all overstays was less than 2 percent. For most other overstays (that is, for persons not in the targeted groups), the risk of deportation is considerably lower. The effect of tracking system weaknesses on overstay data is illustrated by the inaccurate—and, according to DHS, inflated—lists of what it terms “apparent overstays” and “confirmed overstays.” For fiscal year 2001 arrivals, the system yielded a list of 6.5 million “apparent overstays” for which DHS had no departure record that matched the arrivals and an additional list of a half million “confirmed overstays,” or visits that ended after the visitors’ initial periods of admission expired (see appendixes I and II). However, DHS has no way of knowing how many of the 6.5 million are real cases of overstaying and how many are false (because some of these visitors had, for example, departed or legally changed their status). Even the half million “confirmed overstays” are not all true cases of overstaying, because some visitors may have legally extended their periods of admission. In the past, we made a number of recommendations that directly or indirectly address some of these system weaknesses, but these recommendations have not been implemented or have been only partially implemented. (Of these, four key recommendations are in appendix III.) DHS has begun two initiatives intended to remedy some of the weaknesses we have discussed.
DHS recently began, as part of NSEERS, an effort to register visitors at points of entry (POE) to the United States, conduct intermittent interviews with registered visitors while they are here, and have government inspectors register departures. But the POE effort does not cover most visitors because it focuses on persons born in only eight countries. Moreover, NSEERS procedures do not involve inspectors’ observing departures—for example, registration occurs not at airport departure gates but at another location at the airport. Also, inspectors do not generally accompany registrants to observe their boarding. US-VISIT, the U.S. Visitor and Immigrant Status Indicator Technology, is DHS’s new tracking system intended to improve entry-exit data. The first phase of US-VISIT, now being rolled out, uses passenger and crew manifest data, as well as biometrics, to verify foreign visitors’ identities at airports and seaports. DHS plans three additional phases and will link its data to other systems that contain data about foreign nationals. If successfully designed and implemented, US-VISIT could avoid many of the weaknesses associated with the Form I-94 system. We believe special efforts are needed to ensure US-VISIT’s success. DHS concurred with our recent report, pointing to risks and the need for improved management of US-VISIT. For example, we reported that, among other issues, “important aspects defining the program’s operating environment are not yet decided [and] facility needs are unclear and challenging.” Our recommendations included, among others, that DHS develop acquisition management controls and a risk management plan for US-VISIT, as well as define performance standards. We also believe that checking US-VISIT’s program design against the weaknesses of the Form I-94 system, outlined here, might help in evaluating the program and ensuring its success. Tracking system weaknesses may encourage overstaying on the part of visitors and potential terrorists who legally enter the United States. Once here, terrorists may overstay or use other stratagems—such as exiting and reentering (to obtain a new authorized period of admission) or applying for a change of status—to extend their stay. As shown in table 1, three of the six pilots and apparent leaders were out of status on or before 9/11, two because of short-term overstaying. Additionally, a current overstay recently pled guilty to identity document fraud in connection with the 9/11 hijackers. Two others with a history of overstaying were recently convicted of crimes connected to terrorism (money-laundering and providing material support to terrorists); both had overstayed for long periods. Terrorists who enter as legal visitors are hidden within the much larger populations of all legal visitors, overstays, and other illegals such as border crossers. Improved tracking could help counterterrorism investigators and prosecutors track them and prosecute them, particularly in cases in which suspicious individuals are placed on watch lists after they enter the country. The director of the Foreign Terrorist Tracking Task Force told us that he considered overstay tracking data helpful. For example, these data—together with additional analysis—can be important in quickly and efficiently determining whether suspected terrorists were in the United States at specific times. As we reported earlier this year, between “September 11 and November 9, 2001, . . .
INS compiled a list of aliens whose characteristics were similar to those of the hijackers” in types of visa, countries issuing their passports, and dates of entry into the United States. While the list of aliens was part of an effort to identify and locate specific persons for investigative interviews, it contained duplicate names and data entry errors. In other words, poor data hampered the government’s efforts to obtain information in a national emergency, and investigators turned to private sector information. Reporting earlier that INS data “could not be fully relied on to locate many aliens who were of interest to the United States,” we had indicated that the Form I-94 system is relevant, stressing the need for improved change-of-address notification requirements. INS generally concurred with our findings. DHS has declared that combating fraudulent employment at critical infrastructures, such as airports, is a priority for domestic security. DHS has planned and ongoing efforts to identify illegal workers in key jobs at various infrastructures (for example, airport workers with security badges). These sweeps are thought to reduce the nation’s vulnerability to terrorism, because, as experts have told us, (1) security badges issued on the basis of fraudulent IDs constitute security breaches, and (2) overstays and other illegals working in such facilities might be hesitant to report suspicious activities for fear of drawing authorities’ attention to themselves or they might be vulnerable to compromise. Operation Tarmac swept 106 airports and identified 4,271 illegal immigrants who had misused Social Security numbers and identity documents in obtaining airport jobs and security badges. A much smaller number of airport employees had misrepresented their criminal histories in order to obtain their jobs and badges. The illegal immigrant workers with access to secure airport areas were employed by airlines (for example, at Washington Dulles International Airport and Ronald Reagan Washington National Airport, this included American, Atlantic Coast, Delta, Northwest, and United Airlines as well as SwissAir and British Airways) and by a variety of other companies (for example, Federal Express and Ogden Services). Job descriptions included, among others, aircraft maintenance technician, airline agent, airline cabin service attendant, airplane fueler, baggage handler, cargo operations manager, electrician, janitorial supervisor, member of a cleaning crew, predeparture screener, ramp agent, and skycap. In the large majority of these cases, identity fraud or counterfeit IDs were involved; without fraud or counterfeit documents, illegal workers would not have been able to obtain the jobs and badges allowing them access to secure areas. As we discussed earlier in this testimony, when we obtained data on the specific immigration status of workers who were arrested or scheduled for deportation at 14 Operation Tarmac airports, we found that a substantial number were overstays. A DHS official told us that Operation Tarmac is likely not to have identified all illegal aliens working in secure areas of airports. Weaknesses in DHS’s current overstay tracking system and the magnitude of the overstay problem make it more difficult to ensure domestic security. DHS has recently initiated two efforts to develop improved systems, but challenges remain. Designing and implementing a viable and effective tracking system is a critical component of the nation’s domestic security and continues to be a DHS priority. 
Viewing our results in the context of our nation’s layered defense, we believe that improvements in the tracking system must work together with other factors—such as intelligence, investigation, and information-sharing—to help ensure domestic security. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other members of the Committee may have. For information regarding this testimony, please contact Nancy R. Kingsbury, Managing Director, Applied Research and Methods, on 202-512-2700. Individuals who made key contributions to this testimony are Donna Heivilin, Judy Droitcour, Daniel Rodriguez, and Eric M. Larson. Appendixes I and II tabulate annual and total “overstay cases” (a mixture of real and false cases) by visitors’ mode of arrival; the tables are not reproduced here. The tallies exclude many Mexicans or Canadians who, visiting for business and pleasure, are exempt from Form I-94 procedures. Most, but not all, visitors from Permanent Visa Waiver countries enter under this program. Visa waiver countries in this tally are Andorra, Australia, Austria, Belgium, Brunei, Denmark, Finland, France, Germany, Iceland, Ireland, Italy, Japan, Liechtenstein, Luxembourg, Monaco, Netherlands, New Zealand, Norway, Portugal, San Marino, Singapore, Slovenia, Spain, Sweden, Switzerland, and United Kingdom. (Excludes Argentina and Uruguay, which were visa waiver countries in fiscal year 2001.) The 25 countries in the NSEERS domestic registration program include (1) 8 countries also subject to point-of-entry (POE) registration (Iran, Iraq, Libya, Pakistan, Saudi Arabia, Sudan, Syria, and Yemen) and (2) 17 other countries (Afghanistan, Algeria, Bahrain, Bangladesh, Egypt, Eritrea, Indonesia, Jordan, Kuwait, Lebanon, Morocco, North Korea, Oman, Qatar, Somalia, Tunisia, and United Arab Emirates). The 123,000 total “overstay cases” (all modes of arrival) from these countries in fiscal year 2001 include approximately 49,000 cases from the countries subject to POE registration and approximately 73,000 cases from the other countries, excluding North Korea. The data exclude North Korea from the NSEERS countries tally because DHS did not provide information separately for North and South Korea. 1. We recommended that to improve the collection of departure forms, the Commissioner of the Immigration and Naturalization Service should ensure that INS examine the quality control of the Nonimmigrant Information System database and determine why departure forms are not being recorded. For example, this could involve examining a sample of the passenger manifest lists of flights with foreign destinations to determine the extent of airline compliance and possibly developing penalties on airlines for noncompliance. Discovery of the incidence of various causes of departure loss could allow more precise estimation of their occurrence and development of possible remedies. (U.S. General Accounting Office, Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates, GAO/PEMD-93-25 (Washington, D.C.: Aug. 5, 1993).) INS agreed in principle with our recommendation to study why departure forms are not being collected and subsequently initiated a pilot project that was criticized by the Department of Justice Inspector General and then discontinued. DHS has not told us of any further efforts to study or determine why departure forms are not being collected. 2.
We recommended that the Commissioner of INS should have new overstay estimates prepared for air arrivals from all countries, using improved estimation procedures such as those discussed in this report, including, as appropriate, the potential improvements suggested by INS or by reviewers of this report. (U.S. General Accounting Office, Illegal Immigration: INS Overstay Estimation Methods Need Improvement, GAO/PEMD-95-20 (Washington, D.C.: Sept. 26, 1995).) INS initially concurred and produced revised estimates as part of its comments on our report. However, in our response to INS’s comments, we described the new estimates as a “first step” and identified concerns about INS’s methodological procedures that we said needed further study. DHS told us that it has not further studied making overstay estimates by air arrivals. Valid estimation of overstays is extremely difficult, given current tracking system weaknesses. 3. We recommended that to promote compliance with the change of address notification requirements through publicity and enforcement and to improve the reliability of its alien address data, the Attorney General should direct the INS Commissioner to identify and implement an effective means to publicize the change of address notification requirement nationwide. INS should make sure that, as part of its publicity effort, aliens are provided with information on how to comply with this requirement, including where information may be available and the location of change of address forms. (U.S. General Accounting Office, Homeland Security: INS Cannot Locate Many Aliens because It Lacks Reliable Address Information, GAO-03-188 (Washington, D.C.: Nov. 21, 2002).) INS/DHS concurred with this recommendation and has identified it as a long-term strategy that will require 2 years to fully implement. It has been less than a year since we made this recommendation, and thus there has not been sufficient time for DHS to implement it fully or for us to review that implementation. 4. We recommended that to provide better information on H-1B workers and their status changes, the Secretary of DHS take actions to ensure that information on prior visa status and occupations for permanent residents and other employment-related visa holders is consistently entered into current tracking systems and that such information become integrated with entry and departure information when planned tracking systems are complete. (U.S. General Accounting Office, H-1B Foreign Workers: Better Tracking Needed to Help Determine H-1B Program’s Effects on U.S. Workforce, GAO-03-883 (Washington, D.C.: Sept. 10, 2003).) DHS concurred with this recommendation, made just a month ago. Sufficient time has not elapsed for DHS to implement this recommendation. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Each year, millions of visitors, foreign students, and immigrants come to the United States. Visitors may enter on a legal temporary basis--that is, with an authorized period of admission that expires on a specific date--either (1) with temporary visas (generally for tourism, business, or work) or, in some cases, (2) as tourists or business visitors who are allowed to enter without visas. (The latter group includes Canadians and qualified visitors from 27 countries who enter under the visa waiver program.) The majority of visitors who are tracked depart on time, but others overstay. Four of the 9/11 hijackers who entered the United States with legal visas overstayed their authorized periods of admission. This has heightened attention to issues such as (1) the extent of overstaying, (2) weaknesses in our current overstay tracking system, and (3) how the tracking system weaknesses and the level of overstaying might affect domestic security. Significant numbers of foreign visitors overstay their authorized periods of admission. The Department of Homeland Security estimates the resident overstay population at 2.3 million as of January 2000. Because the starting point for this estimate is the 2000 census, it does not cover short-term overstays who have not established residence here. It also omits an unknown number of potential long-term overstays from Mexico and Canada. Because of unresolved weaknesses in DHS's current system for tracking arrivals and departures (e.g., noncollection of some departure forms and inability to match other departure forms to arrivals), there is no accurate list of overstays. Two new tracking initiatives are intended to address these weaknesses. NSEERS, the National Security Entry and Exit Registration System, does not cover most visitors. US-VISIT, the U.S. Visitor and Immigrant Status Indicator Technology, a more comprehensive, automated program, is being phased in. While its design and implementation face a number of challenges, evaluating US-VISIT against the weaknesses GAO identifies here would increase its potential for success. The current tracking system's weaknesses limit control options and make it difficult to monitor potential terrorists who enter the country legally. Like other illegal immigrants, overstays obtain jobs with fraudulent identity documents, including jobs at critical infrastructure locations, such as airports. Thus, tracking issues can affect domestic security and are one component of a layered national defense. Improving the tracking system could work with intelligence, investigation, information-sharing, and other factors to help counter threats from foreign terrorists.
We, MedPAC, and the Congressional Budget Office (CBO) have all suggested that CMS profile physician resource use and provide feedback to physicians as a step toward improving the efficiency of care financed by Medicare. In July 2008, Congress passed MIPPA, which directed the Secretary of HHS to establish a program by January 1, 2009, to provide physicians confidential feedback on the Medicare resources used to provide care to beneficiaries. MIPPA gave HHS the flexibility to measure resource use on a per capita basis, an episode basis, or both. In response to the MIPPA mandate, CMS is pursuing its Physician Resource Use Measurement and Reporting Program. (See table 1.) When profiling physicians on their resource use, five key decisions must be made: Which resource use measurement methodology to use. There are two main profiling methodologies: per capita and episode-based. Using both types of measures of resource use may provide more meaningful results by more fully capturing the relevant characteristics of a physician’s practice patterns. How to account for differences in patient health status. Accounting for differences in patient health status, a process sometimes referred to as risk-adjustment, is an important and challenging aspect of physician profiling. Because sicker patients are expected to use more health care resources than healthier patients, we believe the health status of patients must be taken into account to make meaningful comparisons among physicians. There are various risk-adjustment methods and the suitability of a given method will depend on characteristics of the physicians to be profiled and their patients. How to attribute resource use to physicians. Important attribution decisions include whether to assign a patient’s resource use to the single physician who bears the greatest responsibility for the resource use, to all physicians who bore any responsibility, or to all physicians who met a given threshold of responsibility, such as providing a certain percentage of the expenditures or volume of services. A single attribution approach may not be applicable for all types of measures or for all types of physician specialties. What benchmark(s) to use. Physician profiling involves comparing physicians’ resource use to a benchmark. There are differing opinions on what are the most appropriate and meaningful comparative benchmarks. How to determine what is a sufficient sample size to ensure meaningful comparisons. The feasibility of using resource use measures to compare physicians’ performance depends, in part, on two factors: the availability of enough data on each physician to compute a resource use measure and a sufficient number of physicians to provide meaningful comparisons. It is important to calculate resource use measures only for physicians with sufficient sample sizes in order to address concerns that a physician’s profile may be distorted by a few aberrant cases. There is no consensus on what sample size is adequate to ensure meaningful measures. Responding to the MIPPA mandate to establish a physician feedback program by January 1, 2009, CMS began in April 2008 to develop its program for reporting to physicians on their resource use. In the first phase of the program, CMS identified eight priority conditions and disseminated approximately 310 Resource Use Reports to physicians in selected specialties who practiced in one of 13 geographic areas. 
The reports generally included both per capita and episode-based resource use measures that were calculated according to five different attribution rules. The reports also contained multiple cost benchmarks relative to physicians in the same specialty and geographic area. In Phase II, CMS is proposing to expand the program by adding quality measures and reporting on groups of physicians as a mechanism for addressing small sample size issues. Using a per capita profiling method, we found that from 2005 to 2006, specialist physicians showed considerable stability in their practice patterns, as measured by resource use—greater stability than their patients, despite high patient turnover. We also found that our per capita method can differentiate specialists’ patterns of resource use with respect to different types of services, such as institutional services, which were a major factor in beneficiaries’ resource use. In particular, patients of high resource use physicians used more institutional services than patients of low resource use physicians. Using a per capita method to profile specialist physicians, we found that their practice patterns, as measured by the level of their resource use, was relatively stable over 2005 and 2006 by comparison with individual beneficiaries’ resource use (see figure 1). This is true despite the fact that our measure of physicians’ resource use is derived from their patients’ resource use and that the specific patients whom physicians see are not always the same from year to year. Among the physicians we studied, less than one-third of patients seen by study physicians in 2005 were also seen by the same physician in 2006. This stability suggests that per capita resource use is a reasonable approach for profiling physicians, because it reflects distinct patterns of a physician’s resource use, not the particular population of beneficiaries seen by a physician in a given year. We divided both physician and beneficiary resource use into five groups of approximately equal size (quintiles) and found that, on average across the four metropolitan areas and four specialties, 58 percent of physicians and 30 percent of beneficiaries were in the same quintile of resource use in 2005 and 2006. The pattern was even more pronounced for the top resource use quintile: 72 percent of physicians and 35 percent of beneficiaries remained in that quintile. If the level of physicians’ and beneficiaries’ resource use was purely random, only 20 percent would be expected to have remained in the same quintile. We also examined the stability of physicians’ resource use by specialty and found a similar pattern, although not to the same extent in all specialties. The average percentage of physicians who were in the same resource use quintile in 2005 and 2006 ranged from 48 percent for orthopedic surgeons to 60 percent for internists. Resource use in the top quintile was more stable and ranged from 69 percent for diagnostic radiologists to 74 percent for internists. (See table 2.) In each of the four metropolitan areas, physicians showed greater stability in their resource use than individual beneficiaries, although the percentages varied. For example, the percentage of physicians remaining in the top quintile ranged from 68 percent in Phoenix to 76 percent in Miami. For beneficiaries, the percentage in the top quintile ranged from 31 percent in Phoenix to 39 percent in Miami. (See table 3.) 
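The year-over-year quintile comparison described above can be expressed as a simple calculation. The sketch below is a minimal, hypothetical illustration of that kind of calculation, assuming an input table with one row per physician per year containing an annualized per capita resource use value; the column names and the use of pandas are our own assumptions and do not represent CMS's or GAO's actual method.

# Minimal sketch: share of physicians remaining in the same resource use quintile
# across two years. Assumed (hypothetical) input: a pandas DataFrame with columns
# "physician_id", "year", and "resource_use" (annualized per capita dollars).
import pandas as pd

def quintile_stability(df, year1, year2):
    """Return the share of physicians whose quintile is unchanged between the two years."""
    df = df.copy()
    # Assign quintiles (0-4) within each year separately, so each year's
    # distribution is divided into five approximately equal groups.
    df["quintile"] = df.groupby("year")["resource_use"].transform(
        lambda x: pd.qcut(x, 5, labels=False)
    )
    q1 = df.loc[df["year"] == year1].set_index("physician_id")["quintile"]
    q2 = df.loc[df["year"] == year2].set_index("physician_id")["quintile"]
    # Keep only physicians observed in both years, then compute the matching share.
    matched = q1.to_frame("q1").join(q2.to_frame("q2"), how="inner")
    return (matched["q1"] == matched["q2"]).mean()

# If quintile membership were purely random, roughly 20 percent of physicians would
# remain in the same quintile; the 58 percent reported above therefore indicates
# substantially more stability than chance.

The same calculation, applied to beneficiaries with beneficiary identifiers in place of physician identifiers, yields the corresponding beneficiary stability comparison.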
The greater stability of physicians’ resource use compared to beneficiaries’ resource use could be due to their individual practice styles, as well as to a range of other factors, such as participation in formal or informal referral networks. These networks have a range of providers, including other physicians, who treat their patients and refer them for treatment, testing, and admissions to hospitals. Beneficiaries seen by high resource use physicians generally were heavier users of institutional services than those seen by lower resource use physicians, and institutional services accounted for more than one-half of total patient expenditures. This pattern was consistent across three of the four specialties we studied, with orthopedic surgery being the exception. Institutional services were the major driver of Medicare expenditures for beneficiaries in physicians’ practices, accounting on average for 54 percent of expenditures. Services provided by a particular physician in our study directly to that physician’s patients accounted for only 2 percent of total expenditures or about $350 for each beneficiary in a physician’s practice. All other services—those provided by other physicians, home health care, hospice care, outpatient services, and durable medical equipment—accounted for the remaining 44 percent of expenditures. (See fig. 2.) Expenditures for institutional services for a physician’s patients grew as the level of physician resource use increased. Dividing the level of physician resource use into quintiles, we examined the relationship of physicians’ resource use and expenditures for services provided to their patients. Average expenditures for institutional services increased more steeply by physician resource quintile than expenditures for all other services. The four specialties all exhibited this pattern of increasing beneficiary expenditures for institutional services accompanying increasing physician resource use, although for orthopedic surgery the increase was small. The increase in average beneficiary expenditures for all other services that accompanied increasing physician resource use was similar for three of the four specialties and was steeper for internal medicine. We also examined the average number of physicians seen by the Medicare beneficiaries we studied and found that it was positively associated with increasing physician resource use. Overall, the number of physicians seen increased from an average of about 13 physicians per beneficiary in the lowest quintile of resource use to more than 23 in the highest. The increase in the number of physicians seen was accompanied by an increase in average beneficiary expenditures for institutional services that was steeper than the rise in other services. Through our review of selected literature and interviews with officials of health insurance companies, specialty societies, and profiling experts, we identified several key considerations in developing reports to provide feedback to physicians on their performance, including their per capita resource use. We also drew on information from these sources to develop an example of how per capita measures could be presented in a physician feedback report. We identified four key considerations in developing reports to provide feedback to physicians (see table 4). 
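A rough sketch of the comparison underlying this discussion appears below: it averages beneficiaries' institutional and other Medicare spending within each physician resource use quintile. The physicians, patient panels, category labels, and dollar amounts are hypothetical, and the categories are simplified relative to the claim types analyzed in the report.

```python
# Rough sketch of comparing beneficiary spending across physician resource
# use quintiles; physicians, panels, and dollar amounts are hypothetical.
from collections import defaultdict
from statistics import mean

def spending_by_quintile(quintile_of, panels, spending):
    """quintile_of: physician -> quintile (1-5); panels: physician -> patient IDs;
    spending: patient -> {'institutional': dollars, 'other': dollars}."""
    buckets = defaultdict(lambda: {"institutional": [], "other": []})
    for doc, patients in panels.items():
        for p in patients:
            for category, amount in spending[p].items():
                buckets[quintile_of[doc]][category].append(amount)
    return {q: {cat: mean(vals) for cat, vals in cats.items()}
            for q, cats in sorted(buckets.items())}

panels = {"dr_low": ["b1", "b2"], "dr_high": ["b3", "b4"]}
quintile_of = {"dr_low": 1, "dr_high": 5}
spending = {"b1": {"institutional": 2000, "other": 3000},
            "b2": {"institutional": 4000, "other": 5000},
            "b3": {"institutional": 12000, "other": 7000},
            "b4": {"institutional": 16000, "other": 9000}}
print(spending_by_quintile(quintile_of, panels, spending))
# Quintile 1 averages roughly 3,000 institutional / 4,000 other;
# quintile 5 averages roughly 14,000 institutional / 8,000 other.
```

In this toy example, as in the report's findings, the gap between quintiles is wider for institutional services than for other services.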
Our review of selected literature suggested that a physician feedback report should contain three basic elements: an explanation of the information contained in the report (which we will discuss in the context of transparency), measures describing the performance of the physician or physicians to whom the report is directed, and comparative benchmarks. Measures. Both the selected literature we reviewed and the officials we interviewed supported including measures of quality along with measures of cost, and ensuring that measures are actionable by providing information that can help physicians improve their performance. The officials we interviewed were divided as to whether these measures should reflect physicians’ performance at the individual level or the group level. Quality measures. All five of the insurers we contacted were profiling physicians in terms of quality and cost, and four of the five had adopted a model code for physician ranking programs that called for rankings to be based on quality as well as cost. Most of the specialty society officials we interviewed also called for the inclusion of quality measures in physician feedback reports, and some cautioned that focusing solely on costs could create perverse incentives—for example, encouraging physicians to reduce inappropriately the level of care provided to patients. The lack of widely accepted, claims-based quality measures for some specialties has limited the number of specialties some insurers profile. For example, at the time of our interview, one insurer was profiling physicians in only one specialty (cardiology) while planning to begin profiling other specialties within a year. Actionable measures. According to one research report we reviewed, little research has been done to determine how the reporting of global scores—such as an overall per capita cost rank—influences physician behavior, but experts on physician profiling and a broad array of stakeholders, including physicians and insurance company officials, agreed that performance data should be disaggregated into enough categories to enable physicians to identify practice patterns to change. According to some profiling experts, resource use reports must pinpoint physicians’ overuse and misuse of resources, and identify practices that add costs but do not improve desired outcomes. Similarly, specialty society officials we interviewed emphasized the importance of including measures that focus on areas in which the physician has control. Individual versus group measures. Another measurement consideration is whether physicians in group practices should be profiled as individuals or as a group. The insurers we contacted took varying approaches. In some cases, the approach was driven by contracting arrangements, with insurers constructing group profiles for physicians with whom they had group contracts. One insurance company official pointed out that profiling at the group level allows more physicians to be profiled, as it increases the data available to construct a profile. Another official advocated profiling at the individual level because he believes physicians are more interested in assessments of individual performance. Officials of the four specialty societies generally saw some merit to both approaches, but some underscored the difficulty of identifying group affiliations or noted that groups are not necessarily homogeneous enough for a group assessment to be appropriate. Comparative benchmarks. 
One consideration addressed by multiple publications we reviewed was the kind of benchmark to which physicians’ performance should be compared. For example, a physician’s performance may be compared to (1) an evidence-based standard, (2) a standard based on professional judgment, such as the consensus standards endorsed by the National Quality Forum, or (3) a statistical norm, such as the average for a physician’s peers locally or nationally. Although studies we reviewed offered conflicting evidence as to whether including peer comparisons in physician feedback reports increases their effectiveness, some profiling experts and specialty society officials believe comparative information is useful and of interest to physicians. In the literature we reviewed, for example, one profiling expert suggested that such comparisons can motivate behavior change by taking advantage of physicians’ desire to perform at least as well as their peers; another stated that performance statistics are not meaningful to physicians without peer comparisons. A physician’s peer group can be defined in various ways. According to one study, some organizations that provide performance feedback to physicians have found comparisons within specialty and locality most useful to and most frequently requested by physicians. Representatives of some of these organizations said physicians find local information more relevant because it reflects the practice patterns of their geographic area. All five insurers we contacted compare physicians to others in the same market and specialty; one of the five also compares physicians to peers nationwide on some measures. In contrast, officials of all four specialty societies recommended comparisons at the national level, with officials of one society stating that there is no scientific basis for regional variations in practice patterns. There was less agreement about whether physicians should be compared to others in their specialty or to a more narrowly defined group. Officials of one specialty society advocated comparisons at the subspecialty level in recognition of the variation in resource use patterns among subspecialists. Another official pointed out that such comparison groups could be difficult to define because physicians in some specialties tend to have multiple subspecialties. Because views differ on appropriate comparison groups, one hospital-owned healthcare alliance plans to incorporate in its physician reports a customizable feature that will allow users to select the peer comparison they wish to see. Comparisons to physicians’ own past performance (trend data) are commonly presented in feedback reports, and the majority of physicians surveyed in one study found these comparisons useful. The selected literature we reviewed offered little hard evidence on how feedback reports should be designed to engage physicians’ interest or to improve their comprehension of the material. However, researchers and profiling experts offered some comments and suggestions based either on their experience with clinical performance measurement or on an analysis of the literature on consumer behavior and its possible implications for physician reporting (see table 5). The amount and combination of material that should be included in a single report is an important consideration.
According to one publication that summarized a review of multiple feedback reports, some organizations issue separate reports on efficiency/cost and effectiveness/clinical quality, in part to avoid diluting the impact of either set of measures. Others believe a single report gives physicians a more complete picture of their performance. Officials of the three insurers we contacted that routinely issued feedback reports to physicians said that their companies produced summary reports, typically one to two pages in length, containing high-level information, but also made more detailed information, such as patient-level data, available to physicians. One insurer’s summary report consisted of one page of cost efficiency measures and one page of effectiveness measures. The cost efficiency page presented average cost per episode of care by service category for the physician and the physician’s peer group, as well as the ratio of the two, in both tabular and graphic form. The effectiveness page presented process-of-care measures for selected conditions, including cardiovascular disease and asthma. Company officials said summary reports were limited to two pages to accommodate physicians’ attention spans and that the two sets of measures were presented separately to discourage attempts to link the two. Specialty society officials agreed reports should be short—most proposed one to two pages—and strongly recommended that information be presented graphically to the extent possible. One official, noting that physicians are very visually oriented, recommended feedback reports consisting mainly of easily understood graphics. The selected literature we reviewed, our interviews with specialty society officials, and existing physician feedback reports suggested reports can be kept short by segmenting some information into separate documents—for example, a cover letter that explains the report’s purpose, a description of the profiling methodology, a set of frequently asked questions, and a list of definitions. Some key considerations with respect to report dissemination are which physicians should receive reports, how frequently to issue reports, and whether to issue reports in hardcopy or electronically. Which physicians should receive feedback reports. One major decision is whether to issue reports to all physicians for whom performance measures can be calculated or only to a subset who fail to meet certain performance standards—a decision that may involve weighing reporting costs against potential impacts. None of the studies we reviewed directly addressed this issue, but all of the specialty society officials we interviewed advised sending reports to all or nearly all physicians, rather than just to poor performers. They gave several reasons: to provide positive recognition to physicians who are performing well; to avoid singling out certain physicians as poor performers, especially on the basis of excess costs over which they have little control; and to create opportunities for voluntary peer-to-peer learning among physicians who are at different points along the performance spectrum. Similarly, all three of the insurers that routinely issued feedback reports sent them to all physicians for whom they had performance measures. Frequency of reporting. According to one book we reviewed, organizations that provide feedback to physicians should do so more than once a year to give physicians an opportunity to improve their performance in a timely manner.
However, because of the time needed to gather sufficient data to identify trends and patterns of performance, many organizations provide feedback no more than twice a year. Of the two insurers that told us how frequently they issued feedback reports, one did so annually and the other at least every 6 months. Officials of the latter company said the frequency of their reporting was limited by the number of claims in their dataset and suggested that CMS would not face the same limitations. Hardcopy versus electronic dissemination. Reports can be disseminated in hardcopy through various channels, such as the mail, or electronically, through e-mail or a Web site. One literature scan we reviewed cited certain advantages of electronic formats such as Web-based applications. Specifically, they allow users to organize information as they choose and are well suited to presenting data from the general to the specific, which facilitates information processing. Although this report noted some concerns about physicians’ access to the Internet, according to a report based on a national survey of physicians in December 2002 and January 2003, almost all respondents said they had Internet access, and most said they considered it important for patient care. Of the three insurers that routinely issued feedback reports, two issued them electronically and one issued them in hardcopy. Officials of the latter company said that staff typically hand-delivered the reports to physicians during on-site visits in order to discuss the results. Officials of most of the specialty societies we contacted did not advocate one dissemination mode over the other, but some noted that organizations that issue reports electronically must confront certain challenges, such as ensuring that security features do not make access difficult, addressing the lack of high-speed Internet service in some areas, and determining whether to send reports by e-mail or to instruct physicians to access them on the Internet. One specialty society official recommended using both modes of dissemination to accommodate different preferences. Both the selected literature we reviewed and our interviews with officials from insurance companies and specialty societies underscored the importance of ensuring transparency regarding the purpose of the report and the methodology and data used to construct performance measures. Purpose. According to one literature scan, feedback reports should explicitly state their purpose—for example, to reduce costs, improve quality, or simply to provide information—and should highlight any items for which the physician will be held accountable. Methodology. Two important considerations are where to provide information about methodology—whether in the report itself or through some other mechanism, such as a Web page—and how much technical detail to provide. Some of the insurers we contacted provide information on-line about their profiling methodologies, including details about measures, attribution of care to physicians, risk adjustment, and statistical issues. In addition, some of the officials we interviewed said that company staff will meet with physicians to explain the profiling methodology, if requested. For example, officials of one company said that it has on staff four profiling experts, mostly nurses, in addition to about 20 medical directors who can answer physicians’ questions.
Specialty society officials we interviewed highlighted a potential trade-off between providing enough information in the report to persuade physicians of the validity of the measures and keeping the report concise enough to maintain physicians’ interest. All of the officials we interviewed agreed that physicians should have access to details about the methodology; some suggested this information might best be disseminated through a Web site. Explaining how the data are risk-adjusted to account for differences in physicians’ patient populations was cited by specialty society officials as particularly important. Data. Another consideration is ensuring transparency with regard to the data used in profiling—making patient-level detail available so physicians can reconcile performance measures with their own information about their practices. All five of the health insurers we contacted provided opportunities for physicians to examine patient-level data and file appeals before results are made public, although their processes or policies for doing so varied (see table 6). Officials of one of the two insurers that made detailed data available on-line said their company previously sent hardcopy reports to physicians, but learned from medical office managers that they would prefer an on-line format that could be manipulated to facilitate physician comparisons. Officials of the other insurer said that their company planned to make the data available in a manipulatable format soon. Most of the specialty society officials we interviewed agreed that patient-level data should be made available to physicians, but some predicted that few physicians would access them. Two interviewees suggested practice size would probably be a factor; one added that physicians in smaller groups would likely lack the resources and skills to analyze the data. Drawing upon lessons culled from the literature and our interviews, we developed a mock report that illustrates how per capita measures could be included in a physician feedback report. Such a report could also include other measures such as quality measures and episode-based resource use measures. We included two types of per capita measures—risk-adjusted cost ranks and risk-adjusted utilization rates—each presented with local and national comparative benchmarks. To provide further context, we also included per capita measures showing how the average Medicare costs of patients the physician treated at least once were distributed among service categories, and the percentage of those costs that were for services directly provided by the physician to whom the report is directed. We kept the mock report under two pages and included minimal text, while ensuring transparency by indicating the availability of methodology details and supporting data. To accommodate physicians’ differing dissemination preferences, we designed the mock report to be available in both electronic and hardcopy formats. (See fig. 3.) Specialty society officials who vetted a draft of the mock report made several recommendations. Some recommendations centered on taking advantage of electronic capabilities, such as adding hovers to define key terms (see fig. 4), creating interactive features to let physicians explore “what if” scenarios, and including links to educational materials and specialty guidelines. Officials also recommended adding information on pharmaceutical costs, a category we did not include because not all beneficiaries are enrolled in a Medicare Part D prescription drug plan.
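The two context measures described above, the distribution of the panel's average costs across service categories and the share of those costs directly provided by the profiled physician, can be sketched as follows. The claim records, category names, and identifiers are hypothetical; the dollar amounts were chosen only so the arithmetic echoes the "11% of category" label shown in the mock report excerpt that follows.

```python
# Illustrative sketch of the mock report's context measures; the claims and
# identifiers below are hypothetical, not drawn from the report's data.

def panel_cost_profile(claims, profiled_physician, panel_size):
    """claims: list of {'category': str, 'billing_physician': str|None, 'amount': float}
    covering every patient the profiled physician treated at least once."""
    by_category, directly_provided = {}, 0.0
    for claim in claims:
        by_category[claim["category"]] = by_category.get(claim["category"], 0.0) + claim["amount"]
        if claim.get("billing_physician") == profiled_physician:
            directly_provided += claim["amount"]
    total = sum(by_category.values())
    return {
        "average_cost_per_patient": round(total / panel_size, 2),
        "share_by_category": {c: round(v / total, 2) for c, v in by_category.items()},
        "directly_provided_share": round(directly_provided / total, 2),
    }

claims = [
    {"category": "inpatient", "billing_physician": None, "amount": 7200.0},
    {"category": "physician services", "billing_physician": "dr_a", "amount": 1449.0},
    {"category": "physician services", "billing_physician": "dr_b", "amount": 2773.0},
    {"category": "outpatient", "billing_physician": None, "amount": 2000.0},
]
print(panel_cost_profile(claims, "dr_a", panel_size=1))
# dr_a directly provided about 11 percent of this (one-patient) panel's costs.
```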
[Mock report excerpt: “A patient’s risk-adjusted cost rank is calculated by comparing the patient’s Medicare costs to all other Cityville patients with similar risk scores and represents how unexpectedly expensive or inexpensive the patient’s Medicare-covered care was. Your rank is the average rank of all patients you treated at least once. See Glossary for more details.” All providers: 100% of total ($13,422); You: 11% of category ($1,449).] More generally, specialty society officials said that they particularly liked the graphs and charts in our mock report. One official added that our report was easier to understand than other reports he had seen and that he thought it would get physicians’ attention. Another official commented that the presented per capita measures could give physicians insight into the care their patients are receiving that they were not previously aware of—a perspective other cost measures could not provide. However, multiple officials said the measures as presented were too broad to be actionable and might not seem relevant to physicians, as most physicians feel responsible only for the costs of services they directly order or provide, not for the total cost of patients’ care. Two officials suggested that these per capita measures would have more value in health care systems that emphasized coordination of care.
However, officials of one company disagreed, stating that feedback alone can affect physicians’ behavior if the reports show how they rank against their peers and make clear what behavior they need to change to improve their efficiency. These officials also said that the impact of feedback could depend on the size of physicians’ practices and whether they have the resources to review the reports and the management structure to effect changes. Whether the experiences of private insurers or the lessons from the literature on the influence of feedback will hold in the case of the Medicare program is uncertain. A survey conducted in 2004-2005 found that, for most physicians, Medicare represented more than one-quarter of practice revenue, and for 17 percent of physicians, the proportion was more than one-half. Because physicians typically contract with a dozen or more health insurance plans, few, if any, of these plans are likely to represent as large a share of physicians’ practice revenue as Medicare. Hence, the impact of feedback from CMS might be greater than that from other sources. In addition, one profiling expert suggested that physicians might expect feedback from CMS to be only the first step in efforts to influence physicians’ behavior—to be followed, for example, by public reporting of profiling results. This perspective comports with recommendations in our earlier report (see GAO-07-307). Two of the officials we interviewed said that providing feedback on a confidential basis would be an appropriate first step: one said it would allow time to test the profiling methodology and gauge physicians’ reactions; the other said it would provide an opportunity for physicians to vet the measures and identify any errors. These officials also offered suggestions for enhancing the effectiveness of a feedback program, and other suggestions can be drawn from the literature we reviewed. These suggestions included: providing advance notice of feedback reports (through presentations, letters, or other communications) to help ensure that physicians open and read the reports; working through credible intermediaries, such as medical societies or locally prominent physicians, to assure physicians that the feedback process is reasonable and legitimate; providing opportunities for physicians to discuss the reports through videoconferences, teleconferences, or on-line discussion groups; and offering in-person follow-up, possibly drawing on the resources of the Medicare Quality Improvement Organizations. Involving physicians in the development of a feedback system may also enhance its effectiveness. One literature scan concluded that physician involvement in system design was vital for obtaining physician buy-in. Information from insurers suggested that, although physicians may not always be involved in initial development of feedback systems, their feedback can prompt modifications. Some insurance officials we interviewed described an iterative process involving ongoing communication with physicians and continuous modification of reports and systems. For example, officials of one insurance company said that the company did not seek initial input from physicians—in the belief that they would not have been able to provide much input without a complete understanding of the data and methodology—but took into account physicians’ responses to earlier, less formal systems.
Officials of other companies described various mechanisms for obtaining physicians’ perspectives, including formal physician advisory councils, regular meetings with officials of national medical societies, and town hall meetings with physicians at the local level. Profiling physicians to improve efficiency is used by some private insurance companies and, at the direction of Congress, is being adopted by the Medicare program. We believe that a per capita methodology is a useful approach to profiling physicians on their practice efficiency and could be part of a feedback program that could also include quality measures and episode-based resource use measures. Our findings are consistent with those of our previous report on physician profiling in which, through analysis of physician practice patterns, we determined that CMS could use profiling to improve the efficiency of Medicare. Despite a more diverse mix of physician specialties in our present analysis, and with certain exceptions noted in our findings, we found substantial consistency in certain patterns we observed across metropolitan areas and specialties. We also found consistency across time in that physicians who showed high resource use in one year tended to stay high in the subsequent year. We provided a draft of this report to HHS for comment and received written comments from CMS, which are reprinted in appendix II. We also solicited comments on the draft report from representatives of the American Academy of Orthopaedic Surgeons (AAOS), the American College of Cardiology (ACC), the American College of Physicians, and the American College of Radiology. We received oral comments from the first two. Our draft report did not include any recommendations for CMS to respond to. CMS broadly agreed with each of our three findings: CMS agreed that the per capita methodology is a useful approach to measuring physicians’ resource use and noted that per capita measurement is one of the cost of care measures included in CMS’s Physician Resource Use Measurement and Reporting Program. CMS also agreed that the consistency of our per capita measure across years is an important finding and stated that the agency intends to examine measure consistency in the ongoing administration of its program. CMS found the attention in our report to considerations for developing a physician feedback system to be particularly helpful. CMS listed several examples of how its program already addresses many of these considerations and is in the process of addressing others. We agree with CMS that some of the approaches described in our report would require significant resources and recognize that CMS will need to investigate how to balance the trade-offs between different approaches in order to best leverage its resources. CMS agreed that physician feedback may have a moderate influence on physician behavior. CMS further stated its commitment to developing meaningful, actionable, and fair measurement tools for physician resource use that, along with quality measures, will provide a comprehensive assessment of performance. We continue to believe that providing physicians feedback on their performance could be a promising step toward encouraging greater efficiency in Medicare; however, we are still concerned that efforts to achieve greater efficiency that rely solely on physician feedback without financial or other incentives will be suboptimal. CMS also provided technical comments, which we incorporated as appropriate.
The representatives of AAOS and ACC raised no major issues with regard to the substance of the report. The AAOS representative said that the report captured well the key aspects of physician profiling and the key considerations in developing physician feedback reports. The ACC representatives endorsed the overall approach of a feedback report consisting of a high-level summary accompanied by additional sections with greater detail and a separate document that explains the methodology in detail. The representatives of both groups said that physicians should be provided feedback on both quality and resource use, but differed on whether they should be presented in the same report. Both groups also stressed that physicians should only be compared to physicians within their specialty or subspecialty. Both the AAOS and the ACC representatives commented on the design of our mock report. Both said that the measures of physician resource use by type of service and the benchmark comparisons were easy to understand. They had difficulty, however, in understanding a related measure that shows the physician’s share of payments by service category. We did not alter our mock report in response to these comments, but believe that the concerns they expressed should be taken into account by organizations designing physician feedback reports. The representatives of both groups stressed the importance of risk adjustment in the measurement of physician resource use and suggested that we include a fuller explanation of risk adjustment techniques in our report. We did not expand our explanation of such techniques because they are not the focus of this report; however, we acknowledge the important role played by risk adjustment techniques in constructing physician feedback reports on resource use. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Acting Administrator of CMS, committees, and others. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7114 or steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. GAO staff who made major contributions to this report are listed in appendix III. This appendix describes the per capita methodology that we used to measure beneficiaries’ and physicians’ Medicare fee-for-service (FFS) resource use. We focused our analysis on four diverse specialties: a medical specialty (cardiology), a diagnostic specialty (diagnostic radiology), a primary care specialty (internal medicine), and a surgical specialty (orthopedic surgery). We included diagnostic radiologists in our study because they are less amenable to episode grouping, the major alternative to per capita profiling of physicians. We limited our analysis to physicians in these specialties who practiced in one of four areas: Miami, Fla.; Phoenix, Ariz.; Pittsburgh, Pa.; and Sacramento, Calif. We chose these areas for their geographic diversity, range in average Medicare spending per beneficiary, and number of physicians in each of the four specialties. Our results apply only to the four specialties in the four metropolitan areas we studied. 
To conduct our analysis, we obtained 2005 and 2006 Centers for Medicare & Medicaid Services (CMS) data from the following sources: (1) Medicare claims files that include data on physician, durable medical equipment, skilled nursing, home health, hospice, and hospital inpatient and outpatient services; (2) Denominator File, a database that contains enrollment and entitlement status information for all Medicare beneficiaries in a given year; (3) Hierarchical Condition Category (HCC) files that summarize Medicare beneficiaries’ diagnoses; (4) files summarizing the institutional status of beneficiaries; and (5) Unique Physician Identification Number Directory, which contains information on physicians’ specialties. In order to develop a resource use measure that accounts for differences in health status between beneficiaries, we developed a risk adjustment model that uses an individual’s diagnoses during the year to estimate the total Medicare FFS expenditures expected for the individual in that year. As our inputs to the model, we used the same 70 HCCs as those in the model CMS uses to set managed care capitation rates. HCCs are a way of summarizing an individual’s diagnoses into major medical conditions, such as vascular disease or severe head injury. To estimate our model, we used HCC and expenditure data for 2005 and 2006 five percent national samples of Medicare FFS beneficiaries. For all Medicare FFS beneficiaries who received at least one service in 2005 or 2006 from a physician located in any of our four metropolitan areas and who also did not meet our exclusion criteria (see footnote 5), we used our risk adjustment model to estimate their total expected Medicare FFS expenditures. Based on their expected expenditures, we placed beneficiaries into 1 of 25 discrete risk categories. The categories were ordered in terms of health status from healthiest (category 1) to sickest (category 25). Next, within each risk category and metropolitan area, we ranked beneficiaries from 1 to 100 by their total actual annual Medicare expenditures, such that the average beneficiary in a given risk category and metropolitan area had a rank of 50. We used this rank as our risk-adjusted measure of beneficiary resource use. To examine the stability of beneficiaries’ resource use, we divided the 2005 and 2006 beneficiary populations into five ascending groups of nearly equal size (quintiles) based on the level of their resource use. We then identified beneficiaries in each of the four metropolitan areas who saw a physician in their area in 2005 and again in 2006. We measured the stability of beneficiaries’ resource use as the percentage of beneficiaries who remained in the same quintile in 2006 that they were in during 2005. In addition, we determined the percentage of beneficiaries who remained in the highest resource quintile. For the purposes of this study, we defined a physician’s practice as all Medicare FFS beneficiaries who did not meet our exclusion criteria and who had at least one evaluation and management visit with the physician during the calendar year for cardiologists, internists, and orthopedic surgeons, or who received any service from the physician for diagnostic radiologists. To ensure that a physician’s resource use measure would not be overly influenced by a few patients with unusually high or low Medicare expenditures, we excluded physicians with small practices— those who treated fewer than 100 of the Medicare patients in our study during the year. 
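The ranking step described above can be illustrated with a short sketch. The code assumes that expected expenditures have already been estimated from the 70 HCC indicators (for example, with a regression model) and shows only the subsequent steps: assigning beneficiaries to ordered risk categories and converting actual expenditures into a 1-to-100 rank within each category and metropolitan area. The cutoffs, costs, and identifiers are hypothetical stand-ins, not the values used in our analysis.

```python
# Simplified illustration of the risk-category assignment and within-category
# ranking steps; cutoffs, costs, and IDs are hypothetical stand-ins.
from bisect import bisect_right

def risk_category(expected_cost, cutoffs):
    """Assign a beneficiary to 1 of len(cutoffs) + 1 ordered risk categories."""
    return bisect_right(cutoffs, expected_cost) + 1

def rank_within_group(actual_costs):
    """Convert actual costs for beneficiaries in one risk category and metro
    area into 1-100 ranks; the average rank in a large group is about 50."""
    ordered = sorted(actual_costs.values())
    n = len(ordered)
    return {bene: round((ordered.index(cost) + 1) / (n + 1) * 100)
            for bene, cost in actual_costs.items()}

print(risk_category(5000.0, cutoffs=[1000.0, 3000.0, 8000.0]))  # category 3
group = {"b1": 2500.0, "b2": 8000.0, "b3": 15000.0, "b4": 40000.0}
print(rank_within_group(group))  # {'b1': 20, 'b2': 40, 'b3': 60, 'b4': 80}
```

Because the rank is computed only among beneficiaries with similar expected expenditures, it reflects how unexpectedly expensive a beneficiary's care was rather than how sick the beneficiary was.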
For all physicians, we calculated the average beneficiary resource use rank of the patients in their practices, which ranged from a low of 26.0 to a high of 91.8 in 2006. Next, within each metropolitan area and specialty, we ranked physicians on the basis of this average from 1 to 100 such that the average measure of physician resource use was 50. We used this rank as our measure of physician resource use. This measure reflects how expensive a physician’s patients are compared to the patients of other physicians in the same specialty and area after adjusting for differences in patient health status. For example, a cardiologist in Miami is only compared to other cardiologists in Miami. To examine physicians’ resource use, we divided the physicians into five ascending groups (quintiles) of nearly equal size based on the measure of their resource use described above. In the same manner as we measured the stability of beneficiaries’ resource use, we measured the stability of physicians’ resource use by determining the percentage of them who remained in the same physician resource use quintile from 2005 to 2006. We also measured the degree of turnover in the patients seen by physicians by computing the percentage of patients seen in 2005 by each physician that were also seen by the same physician in 2006. We examined utilization patterns by physician resource use quintile by decomposing the 2006 Medicare expenditures of physicians’ patients into those for institutional services (inpatient hospital and skilled nursing care), those for services provided directly by the physician to his or her patients, and those for all other services—outpatient hospital, home health care, hospice care, durable medical equipment, and all other Part B services of Part B providers and suppliers. We also measured the number of physicians seen by a physician’s patients by physician resource use quintile. Although our measure of a beneficiary’s resource use is independent of the beneficiary’s health status, there was an association between physician resource use and the mix of healthy and sick patients in physicians’ practices—physicians who ranked high in terms of resource use also treated a larger proportion of beneficiaries who were in poor health than did physicians who ranked low in resource use. However, the resource use of all their patients was also consistently higher than that of low resource use physicians’ patients regardless of patient health status. Figure 5 shows the average resource use of beneficiaries in five health status categories across the five physician resource use quintiles. For example, patients in the healthiest category who were treated by physicians in the highest resource use quintile had an average resource use rank of 74, whereas similarly healthy patients treated by physicians in the lowest quintile had an average resource use rank of 53. This ordering of the differences in patient resource use by the level of physician resource use is repeated across all health categories. It indicates that physicians have consistent patterns of resource use with respect to all of their patients, regardless of their patients’ health status. The mix of healthy and sick patients in physicians’ practices did not affect the positive relationship we found between average institutional expenditures per beneficiary and physician resource use level.
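The physician-level measure can be sketched in the same spirit. The code below averages the risk-adjusted ranks of the patients in each physician's practice, drops physicians with fewer than 100 study patients (the exclusion described above), and re-ranks the remaining physicians from 1 to 100 within a specialty and metropolitan area. The synthetic data are for illustration only and do not reflect the claims-based data used in the study.

```python
# Minimal sketch of the physician resource use rank; panels and beneficiary
# ranks are randomly generated stand-ins for the study's claims-based data.
import random

MIN_PANEL_SIZE = 100  # physicians treating fewer study patients are excluded

def physician_ranks(panels, beneficiary_rank):
    """panels: physician -> list of beneficiary IDs treated at least once;
    beneficiary_rank: beneficiary -> risk-adjusted 1-100 rank."""
    averages = {doc: sum(beneficiary_rank[b] for b in benes) / len(benes)
                for doc, benes in panels.items() if len(benes) >= MIN_PANEL_SIZE}
    ordered = sorted(averages.values())
    n = len(ordered)
    return {doc: round((ordered.index(avg) + 1) / (n + 1) * 100)
            for doc, avg in averages.items()}

random.seed(0)
beneficiary_rank = {f"b{i}": random.randint(1, 100) for i in range(1000)}
panels = {f"dr_{j}": random.sample(list(beneficiary_rank), 120) for j in range(5)}
panels["dr_small"] = ["b0", "b1"]  # excluded: fewer than 100 study patients
print(physician_ranks(panels, beneficiary_rank))  # five ranks between 1 and 100
```

Re-ranking within each specialty and metropolitan area is what confines comparisons to true peers, for example, comparing a Miami cardiologist only to other Miami cardiologists.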
Within each beneficiary health category, the patients of high resource use physicians had average institutional expenditures that exceeded those of the patients of physicians with lower resource use. Similar analyses showed that patient mix did not affect (1) the positive relationship between physicians’ resource use and the average number of physicians seen by their patients, (2) the positive relationship between physicians’ resource use and expenditures for all other services provided to their patients, and (3) the steeper rise in the use of institutional services by physicians’ patients with increasing physician resource use as compared to the rise in the use of all other services. A. Bruce Steinwald, (202) 512-7114, or steinwalda@gao.gov. In addition to the contact named above, Phyllis Thorburn, Assistant Director; Alison Binkowski; Nancy Fasciano; Richard Lipinski; Drew Long; Jessica Smith; Maya Tholandi; and Eric Wedum made key contributions to this report. Balas, E. Andrew, Suzanne Austin Boren, Gordon D. Brown, Bernard G. Ewigman, Joyce A. Mitchell, and Gerald T. Perkoff. “Effect of Physician Profiling on Utilization: Meta-analysis of Randomized Clinical Trials.” Journal of General Internal Medicine, vol. 11, no. 10 (1996): 584-590. Beckman, Howard B., Anthony L. Suchman, Kathleen Curtin, and Robert A. Greene. “Physician Reactions to Quantitative Individual Performance Reports.” American Journal of Medical Quality, vol. 21, no. 3 (2006): 192-199. Beckman, Howard B., Thomas Mahoney, and Robert A. Greene. Current Approaches to Improving the Value of Care: A Physician’s Perspective. The Commonwealth Fund. November 2007. Hartig, J.R., and Jeroan J. Allison. “Physician Performance Improvement: An Overview of Methodologies.” Clinical and Experimental Rheumatology, vol. 25, supplement 47 (2007): S50-S54. Jamtvedt, Gro, Jane M. Young, Doris Tove Kristoffersen, Mary Ann O’Brien, and Andrew D. Oxman. “Audit and Feedback: Effects on Professional Practice and Health Care Outcomes.” Cochrane Database of Systematic Reviews, no. 2 (2006). Jamtvedt, Gro, Jane M. Young, Doris Tove Kristoffersen, Mary Ann O’Brien, and Andrew D. Oxman. “Does Telling People What They Have Been Doing Change What They Do? A Systematic Review of the Effects of Audit and Feedback.” Quality & Safety in Health Care, vol. 15, no. 6 (2006): 433-436. Kiefe, Catarina I., Jeroan J. Allison, O. Dale Williams, Sharina D. Person, Michael T. Weaver, and Norman W. Weissman. “Improving Quality Improvement Using Achievable Benchmarks for Physician Feedback: A Randomized Controlled Trial.” JAMA, vol. 285, no. 22 (2001): 2871-2879. Marder, Robert J., Mark A. Smith, and Richard A. Sheff. “Changing Physician Practice: Providing Physicians Useful Feedback.” In Effective Peer Review: A Practical Guide to Contemporary Design, 2nd ed., 153-164. Marblehead, Mass.: HCPro, Inc., 2007. Micklitsch, Christine N., and Theresa A. Ryan-Mitlyng. Physician Performance Management: Tool for Survival and Success. Englewood, Colo.: Medical Group Management Association, 1996. Mold, James W., Cheryl A. Aspy, and Zsolt Nagykaldi. “Implementation of Evidence-Based Preventive Services Delivery Processes in Primary Care: An Oklahoma Physicians Resource/Research Network (OKPRN) Study.” Journal of the American Board of Family Medicine, vol. 21, no. 4 (2008): 334-344. Nathanson, Philip. “Influencing Physician Practice Patterns.” Topics in Health Care Financing, vol. 20, no. 4 (1994): 16-25. Pacific Business Group on Health.
Advancing Physician Performance Measurement: Using Administrative Data to Assess Physician Quality and Efficiency. September 2005. Paxton, E. Scott, Barton H. Hamilton, Vivian R. Boyd, and Bruce L. Hall. “Impact of Isolated Clinical Performance Feedback on Clinical Productivity of an Academic Surgical Faculty.” Journal of American College of Surgeons, vol. 202, no. 5 (2006): 737-745. Teleki, Stephanie S., Rebecca Shaw, Cheryl L. Damberg, and Elizabeth A. McGlynn. Providing Performance Feedback to Individual Physicians: Current Practice and Emerging Lessons. RAND Health Working Paper Series. July 2006. Van Hoof, Thomas J., David A. Pearson, Tierney E. Giannotti, Janet P. Tate, Anne Elwell, Judith K. Barr, and Thomas P. Meehan. “Lessons Learned from Performance Feedback by a Quality Improvement Organization.” Journal for Healthcare Quality, vol. 28, no. 3 (2006): 20-31. Veloski, Jon, James R. Boex, Margaret J. Grasberger, Adam Evans, and Daniel B. Wolfson. “Systematic Review of the Literature on Assessment, Feedback and Physicians’ Clinical Performance: BEME Guide No. 7.” Medical Teacher, vol. 28, no. 2 (2006): 117-128.
The Medicare Improvements for Patients and Providers Act of 2008 directed the Secretary of Health and Human Services to develop a program to give physicians confidential feedback on the Medicare resources used to provide care to Medicare beneficiaries. GAO was asked to evaluate the per capita methodology for profiling physicians--a method which measures a patient's resource use over a fixed period of time and attributes that resource use to physicians--in order to assist the Centers for Medicare & Medicaid Services (CMS) with the development of a physician feedback approach. In response, this report examines (1) the extent to which physicians in selected specialties show stable practice patterns and how beneficiary utilization of services varies by physician resource use level; (2) factors to consider in developing feedback reports on physicians' performance, including per capita resource use; and (3) the extent to which feedback reports may influence physician behavior. GAO focused on four medical specialties and four metropolitan areas chosen for their geographic diversity and range in average Medicare spending per beneficiary. To identify considerations for developing a physician feedback system, GAO reviewed the literature and interviewed officials from health plans and specialty societies. Further, GAO drew upon literature and interviews to develop an illustration of how per capita measures could be included in a physician feedback report. Using 2005 and 2006 Medicare claims data and a per capita methodology, GAO found that specialist physicians showed considerable stability in resource use despite high patient turnover. This stability suggests that per capita resource use is a reasonable approach for profiling specialist physicians because it reflects distinct patterns of a physician's resource use, not the particular population of beneficiaries seen by a physician in a given year. GAO also found that its per capita method can differentiate specialists' patterns of resource use with respect to different types of services, such as institutional services, which were a major factor in beneficiaries' resource use. In particular, patients of high resource use physicians used more institutional services than patients of low resource use physicians. GAO identified four key considerations in developing feedback reports on physician performance. To illustrate how per capita measures could be included in a physician feedback report, GAO developed a mock report containing three types of per capita measures. Although the literature suggested that feedback alone has no more than a moderate influence on physicians' behavior, the potential influence of feedback from CMS on Medicare costs may be greater, in part because of the relatively large share of physicians' practice revenues that Medicare typically represents. CMS reviewed a draft of this report and broadly agreed with GAO's findings.
The management of used electronics presents a number of environmental and health concerns. EPA estimates that only 15 to 20 percent of used electronics (by weight) are collected for reuse and recycling, and that the remainder is primarily sent to U.S. landfills. While a survey conducted by the consumer electronics industry suggests that EPA’s data may underestimate the recycling rate, the industry survey confirms that the number of used electronics thrown away each year is in the tens of millions. As a result, valuable resources contained in electronics, including copper, gold, and aluminum, are lost for future use. Additionally, while modern landfills are designed to prevent leaking of toxic substances and contamination of groundwater, research shows that some types of electronics have the potential to leach toxic substances with known adverse health effects. Used electronics may also be exported for recycling or disposal. In August 2008, we reported that, while such exports can be handled responsibly in countries with effective regulatory regimes and by companies with advanced technologies, a substantial amount ends up in countries that lack the capacity to safely recycle and dispose of used electronics. We also have previously reported on the economic and other factors that inhibit recycling and reuse. For example, many recyclers charge fees because their costs exceed the revenue they receive from selling recycled commodities or refurbishing units. Household electronics, in particular, are typically older and more difficult to refurbish and resell, and, thus, may have less value than those from large institutions. In most states, it is easier and cheaper for consumers to dispose of household electronics at a local landfill. Moreover, as EPA and others have noted, the domestic infrastructure to recycle used electronics is limited, and the major markets for both recycled commodities and reusable equipment are overseas. The United States does not have a comprehensive national approach for the reuse and recycling of used electronics, and previous efforts to establish a national approach have been unsuccessful. Under the National Electronics Product Stewardship Initiative, a key previous effort that was initially funded by EPA, stakeholders met between 2001 and 2004, in part to develop a financing system to facilitate reuse and recycling. Stakeholders included representatives of federal, state, and local governments; electronics manufacturers, retailers, and recyclers; and environmental organizations. Yet despite broad agreement in principle, stakeholders in the process did not reach agreement on a uniform, nationwide financing system. For example, they did not reach agreement on a uniform system that would address the unique issues related to televisions, which have longer life spans and cost more to recycle than computers. In the absence of a national approach, some states have since addressed the management of used electronics through legislation or other means, and other stakeholders are engaged in a variety of voluntary efforts. In the 9 years that have passed since stakeholders initiated the National Electronics Product Stewardship Initiative in an ultimately unsuccessful attempt to develop a national financing system to facilitate the reuse and recycling of used electronics, 23 states have enacted some form of electronics recycling legislation.
For example, some of these state laws established an electronics collection and recycling program and a mechanism for funding the cost of recycling (see fig. 1). The state laws represent a range of options for financing the cost of recycling and also differ in other respects, such as the scope of electronic devices covered under the recycling programs, with televisions, laptop computers, and computer monitors frequently among the covered electronic devices. Similarly, while the state laws generally cover used electronics generated by households, some laws also cover used electronics generated by small businesses, charities, and other entities. Five of the states—California, Maine, Minnesota, Texas, and Washington— represent some of the key differences in financing mechanisms. California was early to enact legislation and is the only state to require that electronics retailers collect a recycling fee from consumers at the time of purchase of a new electronic product covered under the law. These fees are deposited into a fund managed by the state and used to pay for the collection and recycling of used electronics. In contrast, the other four states have enacted legislation making manufacturers selling products in their jurisdictions responsible for recycling or for some or all of the recycling costs. Such laws are based on the concept of “producer responsibility” but implement the concept in different ways. In Maine, state-approved consolidators of covered used electronics bill individual manufacturers, with the amount billed for particular electronics being based in part either on the manufacturer’s market share of products sold or on the share of used electronics collected under the state’s program. Under the Minnesota law, manufacturers either must meet recycling targets by arranging and paying for the collection and recycling of an amount in weight based on a percentage of their sales or must pay recycling fees. Texas requires that manufacturers establish convenient “take-back” programs for their own brands of equipment. Finally, the Washington law requires that manufacturers establish and fund collection services that meet certain criteria for convenience, as well as transportation and recycling services. Table 1 summarizes the key characteristics of the electronics recycling legislation in these five states. As of June 2010, the remaining 27 states had not enacted legislation to establish electronics recycling programs. In some of these states, legislation concerning electronics recycling has been proposed, and some state legislatures have established commissions to study options for the management of used electronics. In addition, some of these states, as well as some of the states with recycling legislation, have banned certain used electronics, such as CRTs, from landfills. In states with no mechanism to finance the cost of recycling, some local governments that offer recycling bear the recycling costs and others charge fees to consumers. Also, some states have funded voluntary recycling efforts, such as collection events or related efforts organized by local governments. For example, Florida has provided grants to counties in the state to foster the development of an electronics recycling infrastructure. A variety of entities offer used electronics collection services, either for a fee or at no charge. Localities may organize collection events or collect used electronics at waste transfer stations. 
A number of electronics manufacturers and retailers support collection events and offer other services. For example, Best Buy offers free recycling of its own branded products and drop-off opportunities for other products at a charge that is offset by a store coupon of the same value; Dell and Goodwill Industries have established a partnership to provide free collection services at many Goodwill donation centers; and a number of electronics manufacturers collect used electronics through mail-back services offered to consumers. Some manufacturers and retailers also have made voluntary commitments to manage used electronics in an environmentally sound manner and to restrict exports of used electronics that they collect for recycling. EPA has taken some notable steps to augment its enforcement of regulations on exports of CRTs for recycling, but the export of other used electronics remains largely unregulated. In addition, the effect of EPA’s partnership programs on the management of used electronics, although positive, is limited or uncertain. To encourage the recycling and reuse of used CRTs, EPA amended its hazardous waste regulations under the Resource Conservation and Recovery Act by establishing streamlined management requirements. If certain conditions are met, the regulations exclude CRTs from the definition of solid waste and thereby from the regulations that apply to the management of hazardous waste. The conditions include a requirement that exporters of used CRTs for recycling notify EPA of an intended export before the shipments are scheduled to leave the United States and obtain consent from the importing country. In contrast, exporters of used, intact CRTs for reuse (as opposed to recycling) may submit a one-time notification to EPA and are not required to obtain consent from the importing country. The export provisions of the CRT rule became effective in January 2007. We reported in August 2008 that some companies appeared to have easily circumvented the CRT rule, and that EPA had done little to enforce it. In particular, we posed as foreign buyers of broken CRTs, and 43 U.S. companies expressed a willingness to export these items. Some of the companies, including ones that publicly touted their exemplary environmental practices, were willing to export CRTs in apparent violation of the CRT rule. Despite the apparently widespread potential for violations, EPA did not issue its first administrative penalty complaint against a company for potentially illegal shipments until the rule had been in effect for 1½ years, and that penalty came as a result of a problem we had identified. In response to our prior report, EPA officials acknowledged some instances of noncompliance with the CRT rule but stated that, given the rule’s relative newness, their focus was on educating the regulated community. Since our prior report’s issuance, however, EPA has initiated investigations and taken several enforcement actions against companies that have violated the notice-and-consent requirement for export of CRTs for recycling. For example, in December 2009, the agency issued an order seeking penalties of up to $37,500 per day to a company that failed to properly manage a shipment of waste CRTs. According to EPA, the company did not provide appropriate notice to the agency or to China, the receiving country, where customs authorities rejected the shipment.
Similarly, in December 2009, EPA announced that two companies that failed to notify the agency or obtain written consent from China for a shipment of waste CRTs for recycling entered into agreements with EPA, with one company agreeing to pay a fine of over $21,000. Despite steps to strengthen enforcement of the CRT rule, issues related to CRT exports and to exports of other used electronics remain. First, as we reported in August 2008, exports of CRTs for reuse in developing countries have sometimes included broken units that are instead dumped. EPA’s CRT rule does not allow such exports and requires that exporters keep copies of normal business records, such as contracts, demonstrating that each shipment of exported CRTs will be reused. However, the rule does not require exporters to test used equipment to verify that it is functional. Moreover, according to EPA, the agency has focused its investigations under the CRT rule on companies that have failed to provide export notifications altogether. In contrast, the agency has not yet conducted any follow-up on notifications of exports for reuse to protect against the dumping of nonworking CRTs in developing countries by ensuring that the CRTs companies are exporting are, in fact, suitable for reuse. Second, CRTs are the only electronic devices specifically regulated as hazardous waste under EPA’s Resource Conservation and Recovery Act regulations. Many other electronic devices, however, contain small amounts of toxic substances, and according to EPA, recent studies have shown that certain used electronics other than CRTs, such as some cell phones, sometimes exceed the act’s regulatory criteria for toxicity when evaluated using hazardous waste test protocols. Finally, because one of the purposes of the Resource Conservation and Recovery Act is to promote reuse and recovery, EPA’s rules under the act exclude used electronics and disassembled component parts that are exported for reuse from the definition of “solid waste” and, therefore, from hazardous waste export requirements, regardless of whether the used electronics exceed the toxicity characteristic regulatory criteria. EPA has worked with electronics manufacturers, retailers, recyclers, state governments, environmental groups, and other stakeholders to promote partnership programs that address the environmentally sound management of used electronics. In addition, EPA comanages a program to encourage federal agencies and facilities to purchase environmentally preferable electronics and manage used electronics in an environmentally sound manner. Key programs include the following: Responsible Recycling practices. EPA convened electronics manufacturers, recyclers, and other stakeholders and provided funding to develop the Responsible Recycling (R2) practices, with the intent that electronics recyclers could obtain certification that they are voluntarily adhering to environmental, worker health and safety, and security practices. Certification to the R2 practices became available in late 2009. According to EPA officials, the R2 practices represent a significant accomplishment in that they provide a means for electronics recyclers to be recognized for voluntary commitments that, according to EPA, go beyond what the agency is able to legally require.
The R2 practices identify “focus materials” in used electronics, such as CRTs or items containing mercury, that warrant greater care due to their toxicity or other potential adverse health or environmental effects when managed without the appropriate safeguards. The practices specify that recyclers (and each vendor in the recycling chain) export equipment and components containing focus materials only to countries that legally accept them. The practices also specify that recyclers document the legality of such exports. Upon request by exporters, EPA has agreed to help obtain documentation from foreign governments regarding whether focus materials can be legally imported into their country. Plug-In To eCycling. To promote opportunities for individuals to donate or recycle their used consumer electronics, EPA began to partner with electronics manufacturers, retailers, and mobile service providers in 2003. Under the Plug-In To eCycling program, partners commit to ensuring that the electronics refurbishers and recyclers they use follow guidelines developed by EPA for the protection of human health and the environment. Among other things, the current guidelines call for minimizing incineration and landfill disposal and for ensuring that exports comply with requirements in importing countries. According to EPA, Plug-In To eCycling partners have collected and recycled steadily increasing quantities of used electronics, and some partners have expanded the collection opportunities they offer to consumers (e.g., from occasional events to permanent locations). Electronic Product Environmental Assessment Tool. Developed under a grant from EPA and launched in 2006, the Electronic Product Environmental Assessment Tool (EPEAT) helps purchasers select and compare computers and monitors on the basis of their environmental attributes. EPEAT evaluates electronic products against a set of required and optional criteria in a number of categories, including end-of-life management. To qualify for registration under EPEAT, the sale of all covered products to institutions must include the option to purchase a take-back or recycling service that meets EPA’s Plug-In To eCycling recycling guidelines. Auditing of recycling services against the guidelines is an optional criterion. Currently, EPA is participating with other stakeholders in the development of additional standards covering televisions and imaging equipment, such as copiers and printers. Federal Electronics Challenge. To promote the responsible management of electronic products in the federal government, EPA comanages the Federal Electronics Challenge, a program to encourage federal agencies and facilities to purchase environmentally preferable electronic equipment, operate the equipment in an energy-efficient way, and manage used electronics in an environmentally sound manner. According to EPA, partners reported in 2009 that 96 percent of the computer desktops, laptops, and monitors they purchased or leased were EPEAT-registered, and that 83 percent of the electronics they took out of service were reused or recycled. One of the national goals of the Federal Electronics Challenge for 2010 is that 95 percent of the eligible electronic equipment purchased or leased by partnering agencies and facilities be registered under EPEAT. Another goal is that 100 percent of the non-reusable electronic equipment disposed of by partners be recycled using environmentally sound management practices. 
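To make the export provisions of the R2 practices described above more concrete, the sketch below shows one way a recycler might screen a proposed shipment: equipment or components containing focus materials may be exported only to countries that legally accept them, and the legality must be documented. This is a minimal illustration under stated assumptions, not an excerpt from the R2 practices; the country names, focus-material list, and function names are hypothetical.

```python
# Illustrative only: a simplified pre-shipment screen inspired by the R2
# practices' export provisions described in this report (focus materials may be
# exported only to countries that legally accept them, and that legality must
# be documented). All data below are hypothetical placeholders.

FOCUS_MATERIALS = {"CRT glass", "mercury-containing lamps", "circuit boards"}  # hypothetical list

# Hypothetical record of which destinations legally accept which focus materials,
# as a recycler might compile from importing-country confirmations.
LEGAL_IMPORTS = {
    "Country A": {"CRT glass", "circuit boards"},
    "Country B": set(),  # accepts no focus materials
}

def shipment_allowed(contents, destination, documented_materials):
    """Return (allowed, reasons) for a proposed export shipment.

    contents             -- set of material types in the shipment
    destination          -- name of the importing country
    documented_materials -- focus materials for which written proof of legal
                            acceptance by the destination is on file
    """
    reasons = []
    focus_in_shipment = contents & FOCUS_MATERIALS
    if not focus_in_shipment:
        return True, ["no focus materials; export provisions not triggered"]

    accepted = LEGAL_IMPORTS.get(destination, set())
    for material in sorted(focus_in_shipment):
        if material not in accepted:
            reasons.append(f"{destination} does not legally accept {material}")
        elif material not in documented_materials:
            reasons.append(f"no documentation on file for {material}")
    return (not reasons), (reasons or ["all focus materials accepted and documented"])

if __name__ == "__main__":
    print(shipment_allowed({"CRT glass", "plastics"}, "Country A", {"CRT glass"}))
    print(shipment_allowed({"mercury-containing lamps"}, "Country A", set()))
```

A screen of this kind captures only one element of the practices; certification to the R2 practices also addresses worker health and safety, security, and the conduct of vendors in the recycling chain, which a simple check does not represent.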
While EPA and other stakeholders have contributed to progress in the partnership programs, the impact of the programs on the management of used electronics is limited or uncertain. For example, the Plug-In To eCycling program does not (1) include a mechanism to verify that partners adhere to their commitment to manage used electronics in accordance with EPA’s guidelines for the protection of human health and the environment or (2) confirm the quantity of used electronics collected under the program. In addition, because the development of electronics purchasing and recycling standards is ongoing or only recently completed, it is too soon to determine how the standards will affect the management of used electronics collected from consumers. EPA officials told us that the agency lacks the authority to require electronics recyclers to adhere to the R2 practices, since most electronics are not hazardous waste under Resource Conservation and Recovery Act regulations. EPA participated in the development of the practices through a process open to a range of stakeholders concerned with the management of used electronics. Two environmental groups that participated in the process withdrew their support because the R2 practices failed to address their concerns (e.g., about the export of used electronics). As a result, one of the groups, the Basel Action Network, spearheaded the development of another standard (i.e., e-Stewards®) under which electronics recyclers may be certified on a voluntary basis. EPA is currently considering whether and how to reference such recycler certification standards in other programs, such as Plug-In To eCycling. Furthermore, EPEAT currently focuses on electronic products sold to institutions but not to individual consumers. In particular, the requirement that manufacturers of EPEAT-registered computers and monitors offer a take-back or recycling service to institutional purchasers does not currently apply to sales to individual consumers. According to an EPA official participating in development of the standards, EPA and other stakeholders plan to begin work in 2010 on expanding the standard for computer equipment into the consumer marketplace, and stakeholders are still discussing whether the new EPEAT standards for imaging equipment and televisions, which will cover electronics sold to individual consumers, will include a required or optional criterion for take-back of such electronics. In October 2009, we reported that an increasing number of federal agencies and facilities have joined the Federal Electronics Challenge, but we also identified opportunities for higher levels of participation and noted that agencies and facilities that participate do not maximize the environmental benefits that can be achieved. We reported, for example, that agencies and facilities representing almost two-thirds of the federal workforce were not program partners, and that only two partners had reported to EPA that they managed electronic products in accordance with the goals for all three life-cycle phases—procurement, operation, and disposal. We concluded that the federal government, which purchases billions of dollars’ worth of information technology equipment and services annually, has the opportunity to leverage its substantial market power to enhance recycling infrastructures and stimulate markets for environmentally preferable electronic products by broadening and deepening agency and facility participation in the Federal Electronics Challenge.
However, EPA has not systematically analyzed the agency’s partnership programs, such as the Federal Electronics Challenge, to determine whether the impact of each program could be augmented. To varying degrees, the entities regulated under the state electronics recycling laws—electronics manufacturers, retailers, and recyclers—consider the increasing number of laws to be a compliance burden. In contrast, in the five states we visited, state and local solid waste management officials expressed varying levels of satisfaction with individual state recycling programs, which they attributed more to the design and implementation of the programs than to any burden caused by the state-by-state approach. (See app. II for a description of key elements of the electronics recycling programs in the five states.) Electronics manufacturers, retailers, and recyclers described various ways in which they are affected by the current state-by-state approach toward the management of used electronics, with manufacturers expressing the greatest concern about the lack of uniformity. The scope of manufacturers regulated under state electronics recycling laws, as well as how states define “manufacturer,” varies by state. The laws apply to both multinational corporations and small companies whose products may not be sold in every state and, depending on the law, to manufacturers of both information technology equipment and televisions. In some states, such as Maine and Washington, the number of regulated manufacturers is over 100. Because most state electronics recycling laws are based on the producer responsibility model, these laws, by design, assign manufacturers significant responsibility for financing and, in some states, for arranging the collection and recycling of used electronics. As a result, the two electronics manufacturer associations we interviewed, as well as eight of the nine individual manufacturers, told us that the state-by-state approach represents a significant compliance burden. The individual manufacturer that did not consider the state-by-state approach to be a significant burden explained that the company is not currently manufacturing covered electronic devices (specifically televisions) and, therefore, does not have responsibilities under most of the state laws. Depending on the specific provisions of state laws, examples of the duplicative requirements that individual manufacturers described as burdensome included paying annual registration fees to multiple state governments, submitting multiple reports to state environmental agencies, reviewing and paying invoices submitted by multiple recyclers, and conducting legal analyses of state laws to determine the responsibilities placed on manufacturers. A representative of a manufacturer of information technology equipment said that, due to the time needed to ensure compliance with differing state laws, the company cannot spend time on related activities, such as finding ways to reduce the cost of complying with the state laws or ensuring that electronics are recycled in an environmentally sound manner. Representatives of one manufacturer noted that even states with similar versions of producer responsibility legislation differ in terms of specific requirements, such as the scope of covered electronic devices, registration and reporting deadlines, and the types of information to be submitted.
As a result, they said that they needed to conduct separate compliance efforts for each state, rather than implement a single compliance program. A few manufacturers also told us that their current compliance costs are in the millions of dollars and are increasing as more states enact electronics recycling legislation. For example, a Sony representative said that he expects the amount the company spends in 2010 to comply with the requirements in states with producer responsibility laws to increase almost sevenfold over the amount spent in 2008. While the producer responsibility model is based on the assumption that manufacturers pass along the cost of recycling to consumers in the form of higher prices, the electronics manufacturer associations, as well as individual manufacturers, described inefficiencies and higher costs created by the state-by-state approach that they said could be reduced through a uniform national approach. For example, the Consumer Electronics Association cited a 2006 report, which the association helped fund, on the costs that could be avoided under a hypothetical, single national approach. The report estimated that, with 20 different state programs, manufacturers would spend an additional $41 million each year, and that the total additional annual costs among all stakeholders—including manufacturers, retailers, recyclers, and state governments—would be about $125 million. Both the Consumer Electronics Association, most of whose members the association considers to be small electronics manufacturers, and the Information Technology Industry Council, which represents large manufacturers, told us that some provisions of state laws—such as registration fees that do not take into account the number of covered electronic devices sold in a state—can create a disproportionate burden on small manufacturers. For example, Maine’s law imposes a $3,000 annual registration fee on all manufacturers, regardless of size or sales volume. One small manufacturer told us that Maryland’s initial registration fee of $10,000 exceeded the company’s $200 profits from sales in the state. The manufacturer said that, if all 50 states imposed such fees, the company would not remain in business. Similarly, the need to analyze differing requirements in each state law requires staff resources that small manufacturers, unlike their larger counterparts, may lack. Despite the costs of complying with state electronics recycling legislation, representatives of the two electronics manufacturer associations we interviewed, as well as most of the individual manufacturers, told us that state laws based on the producer responsibility model have not led to the design of electronic products that are less toxic and more recyclable, which some states cite as one of the purposes for making manufacturers responsible for the management of used electronics.
Manufacturers cited the following reasons for the lack of an impact on product design: the inability of manufacturers to anticipate how recycling practices and technologies may develop over time and incorporate those developments into the design of products that may be discarded only after years of use; some producer responsibility laws, such as those in Minnesota and Washington, making individual manufacturers responsible for recycling not their own products but a general category of devices, including those designed by other manufacturers; and the greater impact of other factors on product design, such as consumer demand and the use by institutional purchasers of EPEAT to select and compare electronic devices on the basis of their environmental attributes. Retailers generally affected by state electronics recycling laws include national chains as well as small electronics shops. Some retailers, such as Best Buy, sell their own brand of covered electronic devices and are also classified as manufacturers under certain states’ laws. As an example of the number of retailers covered under the laws, information from the state of California indicates that over 15,000 retailers have registered to collect the state’s recycling fee, and state officials estimated that large retailers collect 80 percent of the revenues. While the requirements imposed by state electronics recycling legislation on retailers typically are less extensive than the requirements pertaining to manufacturers, representatives of national and state retail associations we interviewed, as well as individual electronics retailers, described ways that the state-by-state approach creates a compliance burden. For example, according to the Consumer Electronics Retailers Coalition, certain state requirements, such as prohibitions on selling the products of electronics manufacturers that have not complied with a state’s law, are difficult for large retailers to implement since they do not use state-specific networks for distributing products to their stores. Rather, electronic products are developed, marketed, and sold on a national and even global basis. Similarly, representatives of the Consumer Electronics Retailers Coalition, as well as the majority of individual retailers and state retail associations in the five states we visited, told us that state “point-of-sale” requirements to collect a fee (in California) or distribute information on recycling when consumers purchase an electronic product represent a burden (e.g., many retailers operate their point-of-sale systems out of a centralized location yet are required to meet differing requirements in each state). Some retailers also expressed concern that states have difficulty in enforcing requirements on Internet retailers and, as a result, that businesses with a physical presence in the state are disadvantaged. This point is supported by the Maine Department of Environmental Protection, which has indicated that the department lacks sufficient staff to ensure that retailers that sell exclusively on the Internet comply with the sales ban on products from noncompliant manufacturers. Retailers also expressed concerns over specific provisions of individual state laws.
For example, representatives of the California Retailers Association said their members consider the state’s requirement to collect a recycling fee at the point of sale and remit the fee to the state to be particularly burdensome, even though the law allows retailers to retain 3 percent of the fee as reimbursement for their costs. One retailer explained that collecting the fee also generates resentment against the retailer among customers who are unaware of the state’s recycling law. Similarly, according to the Minnesota Retailers Association, retailers found it challenging to gather and report accurate sales data required to calculate manufacturer recycling targets under the state’s law. In response to concerns over collecting and reporting sales data, Minnesota amended its law to eliminate this requirement and to use national sales data instead. Retailers that sell their own brand of covered electronic devices and are classified as manufacturers under a particular state’s law must meet all requirements imposed on either type of entity. Similarly, Best Buy and other retailers that offer customers a take-back service for used electronics are considered authorized collectors under some state programs and, as a result, are subject to additional registration and reporting requirements. Best Buy officials told us they face unique challenges under the state-by-state approach because they participate in programs as a retailer; a manufacturer; and, in some cases, a collector. For example, the officials cited 47 annual reporting and registration deadlines to comply with requirements imposed on manufacturers, 19 annual reporting or review dates associated with retailer requirements, and 6 annual reporting or registration dates associated with collector requirements. Electronics recyclers range from large multinational corporations to small entities with a location in one state and encompass a range of business models. For example, some recyclers focus on “asset disposition”—that is, providing data destruction and computer refurbishment services to businesses and large institutions—and other recyclers focus on recovering valuable commodities, such as precious metals. The use of “downstream” vendors to process various components separated from electronics is common, and many of the downstream entities, such as those that recycle glass from CRTs, are located overseas. Numerous nonprofit organizations refurbish used computers for use by schools, low-income families, and other nonprofit organizations both in the United States and overseas. The degree to which the recyclers we interviewed expressed concerns about the state-by-state approach varied. While state laws have established differing registration, reporting, and record-keeping requirements for recyclers and, where specified, different methods of payment for the cost of recycling or collection, some recyclers said they are not generally impacted by such differences (e.g., they operate in only one state with electronics recycling legislation or they can cope with differing state requirements for environmentally sound management by adhering to the most stringent requirements). One recycler even pointed out that the existence of various state laws can create business opportunities. In particular, rather than attempt to develop their own programs to comply with differing state requirements, manufacturers may decide to contract with recyclers that may have greater familiarity with the provisions of different laws. 
In contrast, other recyclers expressed concern over the burden of meeting the requirements of differing state laws. Due to the differences among state laws and the programs as implemented, these recyclers may have to carry out different tasks in each state to be reimbursed, such as counting and sorting covered electronic devices by brand and invoicing individual manufacturers; marketing and selling the amount of used electronics they have processed to manufacturers that must meet recycling targets; and, in California, submitting recycling payment claims to the state government. One recycler told us that the differences among state laws create a disincentive for establishing operations in other states, while another mentioned how small variations among state laws can significantly affect a recycler’s capacity to do business in a state. Another recycler added that the state-by-state approach hinders the processing of large volumes of used electronics from households and the ability to generate economies of scale that would reduce recycling costs. Almost all of the electronics recyclers we interviewed, including those in each of the five states we studied in detail, told us that they are concerned about the ability of irresponsible recyclers to easily enter and undercut the market by charging low prices without processing the material in an environmentally sound manner. While such undercutting might persist even under a national approach to managing used electronics, the recyclers identified a number of factors in the state-by-state approach that magnify the problem, including their perception of a lack of enforcement by state environmental agencies. In addition, according to recyclers in California and Washington, some recyclers export—rather than domestically recycle—electronic devices not covered under the state laws, which is less costly and thereby gives them a competitive advantage over recyclers that do not engage in exports, even where such exports are legal. Some recyclers and refurbishers of used electronics told us that state laws foster recycling at the expense of reuse, even though refurbishment and reuse are viewed by EPA as more environmentally friendly than recycling. Specifically, according to these stakeholders, some state programs focus on collecting and recycling used electronics but not refurbishing them, thereby creating a financial incentive to recycle used electronics that could otherwise be refurbished and reused. For example, in Minnesota, only the amount in weight of collected used electronics that is recycled counts toward manufacturers’ performance targets. According to one refurbisher in the state, this provision leads to the recycling of equipment that is in working condition and reusable. Similarly, California pays for the cost of collecting and recycling used electronics but not for refurbishment. In contrast, according to a Texas affiliate of Goodwill Industries that recycles and refurbishes used electronics, the state’s law promotes reuse of used electronics. For example, by requiring that manufacturers establish take-back programs but not setting recycling targets, the Texas law avoids creating an incentive to recycle used electronics that can be refurbished. In the five states that we selected for detailed review, state and local government officials expressed varying levels of satisfaction with their electronics recycling laws.
In addition, while some state and local governments had participated in the National Electronics Product Stewardship Initiative in an attempt to develop a national financing system for electronics reuse and recycling, the state and local officials we interviewed generally said that the state-by-state approach had not hindered the successful implementation of electronics recycling programs in their jurisdictions. Rather, they attributed their level of satisfaction to the design of the programs, such as the degree to which the programs provide a financing source for collecting and recycling used electronics and the effectiveness of efforts to educate consumers. None of the five states had statewide data on collection rates prior to implementation of the electronics recycling programs to quantify the impact of the laws, but state and local officials provided a variety of anecdotal information to illustrate the laws’ impact, such as collection rates in local communities and trends in the dumping of used electronics on roadsides and other areas. Moreover, the experiences described by state and local officials in the five states illustrate that both general financing models—producer responsibility and a recycling fee paid by consumers—are viable and have the potential to ensure convenient collection opportunities. Local solid waste management officials in the five states we visited expressed varying levels of satisfaction with state electronics recycling legislation in terms of reducing their burden of managing used electronics. On one hand, local officials in Washington told us that the state’s law requiring that manufacturers establish a convenient collection network for the recycling of used electronics has been successful in increasing collection opportunities and relieving local governments of recycling costs. Similarly, local officials in California said the state’s use of a recycling fee for reimbursing collection and recycling costs had relieved their governments of the burden of managing used electronics by making it profitable for the private sector to provide collection and recycling services. On the other hand, according to local solid waste management officials in Texas, the lack of specific criteria in the provision of the state’s law requiring that manufacturers collect their own brands of used computer equipment limited the law’s impact on increasing the convenience of collection opportunities. In addition, the officials said the state government had not done enough to educate residents about the law. As a result, they said that local governments were still bearing the burden of managing used computer equipment. State and local solid waste management officials we interviewed from three states without electronics recycling legislation also expressed varying levels of satisfaction with their voluntary efforts to promote recycling under the state-by-state approach to managing used electronics. For example, a county hazardous waste coordinator in Florida said the county used funding from the state to establish an electronics recycling program that is self-sustaining and free to households, but he also said that the state-by-state approach is cumbersome. Similarly, Florida state officials said that every state county has recycling opportunities, although collection could be more convenient. 
A representative of the Association of State and Territorial Solid Waste Management Officials said that, without a mechanism to finance the cost of recycling used electronics, local governments that provide recycling opportunities may be bearing the cost of providing such services, which can impose a financial burden on communities. In addition, while most of the state and local officials we interviewed from states without legislation said that the state-by-state approach does not represent a burden, Arizona state officials pointed out an increased burden of ensuring the environmentally sound management of used electronics collected in a neighboring state (California) and shipped to their state, since California has an electronic waste law, but Arizona does not. While state environmental officials we interviewed agreed that the burden of the state-by-state approach falls primarily on the regulated industries, they also acknowledged a number of aspects of the state-by-state approach that limit or complicate their own efforts, including the following: The need to ensure that state programs do not pay for the recycling of used electronics from out of state. In California, where the state reimburses recyclers $0.39 per pound for the cost of collecting and recycling covered electronic devices, state environmental officials said that they have regularly denied 2 to 5 percent of the claims submitted by recyclers due to problems with documentation, and that some portion of the denied claims likely represents fraudulent claims for the recycling of used electronics collected from other states. To prevent the recycling fee paid by consumers in the state from being used to finance the cost of recycling used electronics from other states, California requires that collectors of used electronics (other than local governments or their agents) maintain a log that includes the name and address of persons who discard covered electronic devices, and the state checks the logs to ensure that it pays only for the recycling of devices generated within the state. California state officials responsible for implementing the electronics recycling legislation said that the time spent on ensuring this requirement is met is a significant contributor to their workload. State and local government officials in other states we visited also acknowledged the potential for their programs to finance the recycling of used electronics collected from out of state, but these officials did not consider the problem to be widespread or difficult to address. For example, a Maine official said that, as a standard practice, waste collection facilities in the state check the residency of individuals, including when the facilities collect used electronics for recycling. Ability to ensure compliance with state requirements for environmentally sound management. State environmental officials in the five states we visited described varying levels of oversight to ensure the environmentally sound management of used electronics collected under their programs. For example, California conducts annual inspections of recyclers approved under the state program. Among other things, the state’s inspection checklist covers the packaging and labeling of electronic devices, the training of personnel on how to handle waste, the tracking of waste shipments, and the procedures and protective equipment needed to manage the hazards associated with the treatment of electronic devices. 
In contrast, citing limited resources, officials in Minnesota said they rely on spot checks of large recyclers, and officials in Texas said they have prioritized regular, scheduled enforcement of other environmental regulations over the requirements adopted by the state for the recycling of electronics. Even in California, state officials said that their ability to ensure the environmentally sound management of waste shipped out of state is limited because, while covered devices must be dismantled in California to be eligible for a claim within the state’s payment system, residuals from the in-state dismantling and treatment of covered devices may be shipped out of state. Intact but noncovered electronic devices are not subject to the California program and hence may also be shipped out of state. The problem is exacerbated because many of the “downstream” vendors used to process materials separated from electronics are located overseas, which further limits the ability of state officials to ensure that recyclers are conducting due diligence on downstream vendors and that the materials are being managed in an environmentally sound manner. (See app. II for additional information on the requirements for environmentally sound management in the five states we studied in detail.) In each of the five states we visited, state environmental nonprofit organizations either advocated for the enactment of state electronics recycling legislation or have been active in tracking the implementation of the laws. In addition, a number of groups advocate on issues related to the management of used electronics on a national or international basis. For example, the Electronics TakeBack Coalition, which includes a number of nonprofit organizations, advocates for producer responsibility as a policy for promoting responsible recycling in the electronics industry, and the Basel Action Network works in opposition to exports of toxic wastes to developing countries.
In contrast, echoing the results of a 2009 survey conducted by the organization, a Texas Campaign for the Environment representative said that the state’s law had not had a significant impact on the collection and recycling of used electronics, because both consumers and local solid waste management officials are unaware of the opportunities that manufacturers are required to provide under the law for the free collection and recycling of electronics discarded by households. In addition, the organization is critical of the fact that the Texas law does not cover televisions, and that the governor vetoed a bill that would have made television manufacturers responsible for recycling, including costs. Some environmental groups pointed out that, in and of itself, the ability of a state program to improve collection rates does not necessarily ensure that used electronics will be recycled in an environmentally sound manner. Key issues raised by environmental groups as complicating the effectiveness of state programs included a lack of adequate requirements for the environmentally sound management of used electronics or requirements that differ among states, limited state resources or oversight to ensure compliance with the requirements, and a lack of authority to address concerns about exports. For example, a representative of the Basel Action Network said that provisions in state laws regarding exports, such as those in California, could be challenged on constitutional grounds since the Constitution generally gives the federal government the authority to regulate commerce with foreign nations, thereby limiting states’ authority to do so. Options to further promote the environmentally sound management of used electronics involve a number of basic policy considerations and encompass many variations. For the purposes of this report, we examined two endpoints on the spectrum of variations: (1) a continued reliance on state recycling programs supplemented by EPA’s partnership programs and (2) the establishment of federal standards for state electronics recycling programs. Further federal regulation of electronic waste exports is a potential component of either of these two approaches. Under a national approach for managing used electronics on the basis of a continuation of the current state-by-state approach, EPA’s partnership programs, such as Plug-In To eCycling, would supplement state efforts. Most used electronics would continue to be managed as solid waste under the Resource Conservation and Recovery Act, with a limited federal role. For example, beyond its establishment of minimum standards for solid waste landfills, EPA is authorized to provide technical assistance to state and local governments for the development of solid waste management plans and to develop suggested guidelines for solid waste management. EPA’s partnership programs can supplement state recycling efforts in a variety of ways. For example, Minnesota state environmental officials told us that they hope to incorporate the R2 practices into the state’s standards for the environmentally sound management of used electronics. However, as we have previously noted, the impact of EPA’s promotion of partnership programs on the management of used electronics is limited or uncertain.
Moreover, EPA does not have a plan for coordinating its efforts with state electronics recycling programs or for articulating how EPA’s partnership programs, taken together, can best assist stakeholders to achieve the environmentally sound management of used electronics. For example, while partnership programs such as Plug-In To eCycling can complement state programs, EPA does not have a plan for leveraging such programs to promote recycling opportunities in states without electronics recycling legislation. Among the key implications of a continuation of the state-by-state approach are a greater flexibility for states and a continuation of a patchwork of state recycling efforts, including some states with no electronics recycling requirements. Greater flexibility for states. This approach provides states with the greatest degree of flexibility to engage in recycling efforts that suit their particular needs and circumstances, whether through legislation or other mechanisms, such as grants for local communities. For example, according to local solid waste management officials in Texas, which has enacted electronics recycling legislation, the state has not banned the disposal of electronics in landfills, and the officials pointed to factors, such as the state’s landfill capacity, that would work against a landfill ban. In contrast, New Hampshire, which has limited landfill capacity, has banned the disposal of certain electronics in landfills but has not enacted a law that finances the recycling of used electronics. The state’s solid waste management official told us that the state’s approach had been successful in diverting a large amount of used electronics from disposal in landfills, and an official with the state’s municipal association told us that residents of the state accept that they must pay fees to cover the cost of waste disposal services, including electronics recycling. A state-by-state approach also allows for innovations among states in enacting electronics recycling legislation. For example, a representative of the Electronics TakeBack Coalition told us that state electronics recycling legislation can be effective in providing convenient collection opportunities and in increasing collection and recycling rates, but that more time is needed to be able to assess the impact of the state programs. The representative described the state programs as laboratories for testing variations in the models on which the programs are based, such as the use of recycling targets in the producer responsibility model, and for allowing the most effective variations to be identified. A continuation of the patchwork of state recycling efforts. While the state-by-state approach may provide states with greater regulatory flexibility, it does not address the concerns of manufacturers and other stakeholders who consider the state-by-state approach to be a significant compliance burden. The compliance burden may actually worsen as more states enact laws (e.g., the number of registration and reporting requirements imposed on manufacturers may increase). One manufacturer pointed out that, while some states have modeled their laws on those in other states, even in such cases, states may make changes to the model in ways that limit any efficiency from the similarities among multiple laws. 
In addition to creating a compliance burden, the state-by-state approach does not ensure a baseline in terms of promoting the environmentally sound reuse and recycling of used electronics, not only in states without electronics recycling legislation but also in states with legislation. For example, unlike some other state electronics recycling legislation, the Texas law does not require manufacturers to finance the recycling of televisions, which may need such a financing incentive to be recycled, since the cost of managing the leaded glass from televisions with CRTs may exceed the value of the materials recycled from used equipment. Furthermore, the requirements for the environmentally sound management of used electronics vary among states, and state environmental agencies engage in varying levels of oversight to ensure environmentally sound management. For example, according to the state solid waste management official in New Hampshire, budget constraints prevent the state from being able to track what happens to used electronics after they are collected. Various stakeholder efforts are under way to help coordinate state programs and relieve the compliance burden, although some stakeholders have pointed to limitations of such efforts. In particular, in January 2010, a number of state environmental agencies and electronics manufacturers, retailers, and recyclers helped form the Electronics Recycling Coordination Clearinghouse, a forum for coordination and information exchange among the state and local agencies that are implementing electronics recycling laws and all impacted stakeholders. Examples of activities planned under the clearinghouse include collecting and maintaining data on collection volumes and creating a centralized location for receiving and processing manufacturer registrations and reports required under state laws. Other examples of stakeholder efforts to ease the compliance burden include the formation of the Electronic Manufacturers Recycling Management Company, a consortium of manufacturers that collaborate to develop recycling programs in states with electronics recycling legislation. In addition, individual states have made changes to their recycling programs’ legislation after identifying provisions in their laws that created unnecessary burdens for particular stakeholders. For example, Minnesota amended its law to remove the requirement that retailers annually report to each manufacturer the number of the manufacturer’s covered electronic devices sold to households in the state—a requirement that retailers found particularly burdensome. A number of stakeholders, however, including members of the Electronics Recycling Coordination Clearinghouse, have pointed to limitations of stakeholder efforts to coordinate state electronics recycling programs. According to representatives of the Consumer Electronics Association, concerns over federal antitrust provisions limit cooperation among manufacturers for the purpose of facilitating compliance with the state laws. For example, cooperation among manufacturers trying to minimize the cost of compliance would raise concerns among electronics recyclers about price-fixing.
Similarly, the executive director of the National Center for Electronics Recycling, which manages the Electronics Recycling Coordination Clearinghouse, told us states are unlikely to make changes to harmonize basic elements of state laws that currently differ, such as the scope of covered electronic devices and the definitions of terms such as “manufacturer.” Under a national strategy based on the establishment of federal standards for state electronics recycling programs, federal legislation would be required. For the purpose of analysis, we assumed that the legislation would establish federal standards and provide for their implementation—for example, through a cooperative federalism approach whereby states could opt to assume responsibility for the standards or leave implementation to EPA, through incentives for states to develop complying programs, or through a combination of these options. Within this alternative, there are many issues that would need to be addressed. A primary issue of concern to many stakeholders is the degree to which the federal government would (1) establish minimum standards, allowing states to adopt stricter standards (thereby providing states with flexibility but also potentially increasing the compliance burden from the standpoint of regulated entities), or (2) establish fixed standards. Further issues include whether federal standards would focus on the elements of state electronics recycling laws that are potentially less controversial and have a likelihood of achieving efficiencies—such as data collection and manufacturer reporting and registration—or would focus on all of the elements, building on lessons learned from the various states. An overriding issue of concern to many stakeholders is the degree to which federal standards would be established as minimum standards, fixed standards, or some combination of the two. In this context, we have assumed that either minimum or fixed standards would, by definition, preempt less stringent state laws and lead to the establishment of programs in states that have not enacted electronics recycling legislation. Minimum standards would be intended to ensure that programs in every state met baseline requirements established by the federal government, while allowing flexibility to states that have enacted legislation meeting the minimum standards to continue with existing programs, some of which are well-established. In contrast, under fixed federal standards, states would not be able to establish standards either stricter or more lenient than the federal standards. Thus, fixed standards would offer relatively little flexibility, although states would still have regulatory authority in areas not covered by the federal standards. As we have previously reported, minimum standards are often designed to provide a baseline in areas such as environmental protection, vehicle safety, and working conditions. For example, a national approach based on minimum standards would be consistent with the authority given to EPA to regulate hazardous waste management under the Resource Conservation and Recovery Act, which allows for state requirements that are more stringent than those imposed by EPA. Such a strategy can be an option when the national objective requires that common minimum standards be in place in every state, but stricter state standards are workable. Conversely, fixed standards are an option when stricter state standards are not workable.
For example, to provide national uniformity and thereby facilitate the increased collection and recycling of certain batteries, the Mercury-Containing and Rechargeable Battery Management Act does not allow states the option of establishing more stringent regulations regarding collection, storage, and transportation, although states can adopt and enforce standards for the recycling and disposal of such batteries that are more stringent than existing federal standards under the Resource Conservation and Recovery Act. Most manufacturers we interviewed told us they prefer fixed federal standards over minimum standards. For example, these manufacturers are concerned that many states would opt to exceed the minimum federal standards, leaving manufacturers responsible for complying with differing requirements, not only in the states that have electronics recycling legislation but also in the states currently without legislation. In contrast, most state government officials and environmental groups we interviewed told us that they would prefer minimum federal standards over fixed federal standards as a national approach for the management of used electronics. In addition, a representative of the National Conference of State Legislatures told us that the organization generally opposes federal preemption but accepts that in the area of environmental policy, the federal government often sets minimum standards. According to the representative, even if federal requirements were of a high standard, states may want the option to impose tougher standards if the need arises. Similarly, some legislative and executive branch officials in states with electronics recycling legislation expressed concern that federal standards for electronics recycling would be of a low standard. As a result, the officials said they want to preserve the ability of states to impose more stringent requirements. To help address manufacturer concerns about a continuation of the state-by-state approach under minimum standards, the federal government could encourage states not to exceed those standards. For example, establishing minimum standards that are relatively stringent might reduce the incentive for states to enact or maintain stricter requirements. Consistent with this view, some of the state electronics recycling laws, including those in four of the five states we studied in detail, contain provisions for discontinuing the state program if a federal law takes effect that meets specified conditions (e.g., establishing an equivalent national program). Based on our review of state electronics recycling legislation and discussions with stakeholders regarding a national strategy for the management of used electronics, we identified a range of issues that would need to be considered and could be addressed as part of the establishment of federal standards for state electronics recycling programs, including the following issues: The financing of recycling costs. A potential element in federal standards for state electronics recycling programs would be a mechanism for financing the cost of recycling. For example, representatives of the Consumer Electronics Association told us they support a national approach with a single financing mechanism.
Similarly, the California and Washington laws stipulate that their programs be discontinued if a federal law takes effect that establishes a national program, but only if the federal law provides a financing mechanism for the collection and recycling of all electronic devices covered under their respective laws. While there are differences among their views, most stakeholders we interviewed, including some manufacturers, said they would prefer that any federal standards be based on some form of the producer responsibility model rather than on a recycling fee paid by consumers because, for example, they consider the producer responsibility model more efficient to implement in comparison with the resources devoted to collecting a recycling fee and reimbursing recyclers. Even California state government officials, who were generally pleased with what has been accomplished under the state’s recycling fee and payment model, expressed openness to the producer responsibility model. The level of support for producer responsibility represents a shift in the views of some manufacturers. In particular, representatives of the Information Technology Industry Council told us that television manufacturers previously supported a recycling fee paid by consumers because of the frequent turnover of television manufacturers and the problem of assigning recycling costs for legacy equipment whose original manufacturer is no longer in business, no longer makes televisions, or otherwise cannot be determined. According to the council, with only one state having enacted legislation based on a recycling fee, television and other manufacturers now support the producer responsibility model. The allocation of costs and responsibilities among stakeholders. Even under a producer responsibility model, stakeholders other than manufacturers would participate in the implementation of state electronics recycling legislation, and the costs of collecting and recycling used electronics can be assigned in different ways. For example, while they support the producer responsibility model, Information Technology Industry Council representatives have proposed that the model be based on “shared responsibility,” whereby various entities that profit from the sale of electronic devices—including electronics distributors, retailers, and other stakeholders—all contribute to the cost of collection and recycling. In a variation of the concept of shared responsibility, under Maine’s electronics recycling legislation participating local governments generally bear collection costs and manufacturers finance recycling costs. The way in which costs and responsibilities are allocated can also create inequities from the standpoint of certain stakeholders. For example, certain manufacturers may pay more or less than others depending on whether recycling costs are based on the weight of a manufacturer’s own brand of electronics collected for recycling (return share) or on the amount of a manufacturer’s new products sold (market share). Under a return share system, long-standing manufacturers bear a greater proportion of the costs in comparison with newer manufacturers with fewer used electronics in the waste stream. In contrast, a market share system can result in newer manufacturers with a large market share financing the recycling of products produced by their competitors. The division of federal and state responsibilities for implementation and enforcement. 
Federal standards can be implemented directly by a federal agency, by the states with some degree of federal oversight, or through state implementation in some states and direct federal implementation in others. For example, EPA develops hazardous waste regulations under the Resource Conservation and Recovery Act and has encouraged states to assume primary responsibility for implementation and enforcement through state adoption of the regulations, while EPA retains independent enforcement authority. Regarding used electronics, the division of responsibilities among the federal and state governments would have a direct bearing on EPA’s resource requirements. EPA has previously cautioned that assigning responsibilities to the agency—such as for registration of electronics manufacturers, retailers, and recyclers; collection of registration fees; approval of manufacturer recycling programs; and authorization of parallel state programs for electronics recycling—would be costly and time-consuming to implement. Similarly, a representative of the National Conference of State Legislatures said the organization would oppose any federal requirements that do not provide a source of funding to states for implementing the requirements, and a representative of the National Governors Association pointed out that states that do not currently have electronics recycling legislation would express concern about the administrative costs of implementing an electronics recycling program. Determination of the scope of covered electronic devices. Stakeholders have cited a variety of criteria for determining the scope of electronic devices covered by state recycling laws. For example, some stakeholders have cited the growing volume of used electronics in comparison with limited landfill capacity or the presence of toxic substances in many electronics. In contrast, other stakeholders have argued that cell phones and other mobile devices, which may contain toxic substances, should not be included with other used electronics (e.g., mobile devices can be easily collected through mail-back programs). As yet another alternative, stakeholders have cited the loss of valuable resources, such as precious metals, when used electronics are disposed of in landfills, as well as the environmental benefits of extending the life of used electronics through refurbishment, as a key consideration in electronics recycling legislation. An issue closely related to the scope of covered electronic devices is the scope of entities whose used electronics are covered under programs for financing the cost of recycling. The state electronics recycling laws typically include used electronics from households, but some states also include other entities, such as small businesses and nonprofit organizations that may otherwise need to pay a fee to recycle used electronics in an environmentally sound manner, while California’s law is not targeted at particular entities and covers any user of a covered electronic device located within the state. In doing our work, we found that a potential component of either approach that we discuss for managing used electronics is a greater federal regulatory role over exports to (1) facilitate coordination with other countries to reduce the possibility of unsafe recycling or dumping and (2) address the limitations on the authority of states to regulate exports.
Assuming a continuation of the factors that contribute to exports, such as a limited domestic infrastructure to recycle used electronics, an increase in collection rates resulting from electronics recycling laws, either at the state or federal level, is likely to lead to a corresponding increase in exports, absent any federal restrictions. While, as we have previously noted, exports can be handled responsibly in countries with effective regulatory regimes and by companies with advanced technologies, some of the increase in exports may end up in countries that lack safe recycling and disposal capacity. Exports of used electronics are subject to a range of state requirements and guidelines in the five states we visited. Nevertheless, many of the state officials we interviewed expressed support for federal action to limit harmful exports because, for example, states lack adequate authority and resources to address concerns about exports. Washington state officials noted that their governor vetoed a provision of the state’s electronic waste legislation that addressed exports of electronics collected under the program because of concerns about the state’s lack of authority to prohibit such exports. The governor instead called for federal legislation prohibiting the export of hazardous waste to countries that are not prepared to manage the waste. In addition, under “preferred standards” established by the state, recyclers can be contractually obligated to ensure that countries legally accept any imports of materials of concern. Washington state officials told us that establishing preferred standards helped the state partially address concerns about used electronics exports, notwithstanding potential limitations on the state’s authority, but that further federal regulation of exports would still be helpful. In our August 2008 report, we made two recommendations to EPA to strengthen the federal role in reducing harmful exports. First, we recommended that EPA consider ways to broaden its regulations under existing Resource Conservation and Recovery Act authority to address the export of used electronic devices that might not be classified as hazardous waste by current U.S. regulations but might threaten human health and the environment when unsafely disassembled overseas. For example, we suggested that EPA consider expanding the scope of the CRT rule to cover other exported used electronics and revising the regulatory definition of hazardous waste. Citing the time and legal complexities involved in broadening its regulations under the Resource Conservation and Recovery Act, EPA disagreed with our recommendation and instead expressed the agency’s support for addressing concerns about exports of used electronics through nonregulatory, voluntary approaches. However, EPA officials told us that the agency is taking another look at its existing authorities to regulate exports of other used electronics. Second, we recommended that the agency submit to Congress a legislative package for ratification of the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, a multilateral environmental agreement that aims to protect against the adverse effects resulting from transboundary movements of hazardous waste. Under the convention’s definition, a broader range of materials could be considered potentially hazardous, including some electronic devices. 
While the Senate provided its advice and consent to ratification in 1992, successive administrations have not submitted draft legislation to Congress giving EPA the necessary statutory authorities to implement the convention's requirements in order to complete the ratification process. EPA officials explained that these needed additional authorities include, among others, the authority to control the scope of wastes covered by the Basel Convention, the authority to halt exports of hazardous waste if the agency believes they will not be handled in an environmentally sound manner, and the authority to take back shipments that cannot be handled in an environmentally sound manner in the importing country. EPA officials told us that the agency had developed a legislative proposal on more than one occasion under previous administrations but did not finalize any proposal with other federal agencies. According to these officials, finalizing the proposal requires coordination with a number of agencies, including the Department of State and the White House Council on Environmental Quality, which coordinates federal environmental efforts in the development of environmental policies and initiatives.

In May 2010, the current EPA Administrator called for legislative changes to address exports and for taking steps toward ratification of the Basel Convention. EPA officials have also cited a number of benefits of ratifying the Basel Convention, such as the ability to fully participate in convention decisions on issues related to the environmentally sound management of used electronics. For example, according to EPA officials, upcoming convention decisions on guidelines for environmentally sound refurbishment and repair will impact parties' export of used electronics for reuse, which is regarded by refurbishers as environmentally preferable to recycling but also raises concerns about the dumping of used electronics in developing countries. Basel Convention working groups on environmentally sound management are open to a range of participants that do not represent parties to the convention, including EPA, electronics manufacturers, electronics recyclers and refurbishers, and environmental groups. However, given that the United States is a signatory but not a party to the convention, the United States does not participate in the final decisions on issues such as environmentally sound management. EPA officials said they anticipate a number of such decisions in the next few years, especially regarding the transboundary movement of used and end-of-life electronics.

According to EPA officials, a greater federal regulatory role over exports resulting from ratification of the Basel Convention would require an increase in EPA's programmatic and enforcement resources, such as additional staff. The additional resources would be needed to enable the Administrator to determine whether proposed exports will be conducted in an environmentally sound manner and to implement the Basel Convention's notice-and-consent requirement. Moreover, the European Union's experience under the waste electrical and electronic equipment directive, which contains an obligation for waste equipment to be treated in ways that avoid environmental harm, demonstrates the need to couple the regulation of exports with enforcement efforts.
A European Commission report estimated that 50 percent of waste equipment that is collected is probably not being treated in line with the directive's objectives and requirements, and that a large volume of waste may be illegally shipped to developing countries, where it is dumped or recycled in ways that are dangerous to human health and the environment.

Broad agreement exists among key stakeholders that reusing and recycling electronics in an environmentally sound manner has substantial advantages over disposing of them in landfills or exporting them to developing countries in a manner that threatens human health and the environment. There has been much debate over the best way to promote environmentally sound reuse and recycling, however, and any national approach may entail particular advantages and disadvantages for stakeholders. While empirical information about the experiences of states and other stakeholders in their efforts to manage used electronics can inform this debate, the question of a national approach revolves around policy issues, such as how to balance the need to ensure that recycling occurs nationwide as well as industry's interests in a uniform, national approach with states' prerogatives to tailor used electronics management toward their individual needs and preferences. In the end, these larger policy issues are matters for negotiation among the concerned parties and for decision making by Congress and the administration.

At the same time, there are a number of beneficial actions that the federal government is already taking that, as currently devised, do not require the effort and implications of new legislation, but rather would complement any of the broader strategies that policymakers might ultimately endorse. In particular, EPA's collaborative efforts—including Plug-In To eCycling, the R2 practices, EPEAT, and the Federal Electronics Challenge—have demonstrated considerable potential and, in some cases, quantifiable benefits. However, these programs' achievements have been limited or uncertain, and EPA has not systematically analyzed the programs to determine whether their impact could be augmented. Moreover, EPA has not developed an integrated strategy that articulates how the programs, taken together, can best assist stakeholders to achieve the environmentally responsible management of used electronics.

A key issue of national significance to the management of used electronics is how to address exports—an issue that, according to many stakeholders, would most appropriately be addressed at the federal level. EPA has taken useful steps by developing a legislative package for ratification of the Basel Convention, as we recommended in 2008. However, EPA has not yet worked with other agencies, including the State Department and the Council on Environmental Quality, to finalize a proposal for the administration to provide to Congress for review and consideration. While there are unresolved issues regarding the environmentally sound management of used electronics under the Basel Convention, providing Congress with a legislative package for ratification could provide a basis for further deliberation and, perhaps, resolution of such issues.

We recommend that the Administrator of EPA undertake an examination of the agency's partnership programs for the management of used electronics.
The analysis should examine how the impacts of such programs can be augmented, and should culminate in an integrated strategy that articulates how the programs, taken together, can best assist stakeholders in achieving the environmentally responsible management of used electronics nationwide.

In addition, we recommend that the Administrator of EPA work with other federal agencies, including the State Department and the Council on Environmental Quality, to finalize a legislative proposal that would be needed for ratification of the Basel Convention, with the aim of submitting a package for congressional consideration.

We provided a draft of this report to EPA for review and comment. A letter containing EPA's comments is reproduced in appendix III. EPA agreed with both of our recommendations and also provided additional clarifications and editorial suggestions, which we have incorporated into the report as appropriate.

Regarding our recommendation for an examination of the agency's partnership programs culminating in an integrated strategy for the management of used electronics, EPA stated that the agency plans to gather and analyze input from a variety of stakeholders and to incorporate the input into such a strategy. In addition, while pointing out that the agency's partnership programs already reflect an integrated approach, in that they address the full life cycle of electronic products, from design through end-of-life management, EPA acknowledged that the programs can and should be augmented and stated that the agency is committed to doing so within the limits of declining resources. In particular, EPA outlined a number of potential efforts to improve the environmental attributes of electronics, increase collection and the appropriate management of used electronics, and better control exports. EPA also stated that the agency is considering the need for new legislative and regulatory authority.

We acknowledge EPA's progress in developing partnership programs to address the full life cycle of electronic products but continue to emphasize the need for a comprehensive, written strategy that addresses how the programs can best promote the environmentally sound management of used electronics. Such a document has the potential to help coordinate the efforts of the many stakeholders associated with the management of used electronics to further promote their environmentally sound reuse and recycling, and to more effectively communicate the strategy to Congress and other decision makers.

Regarding our recommendation that EPA work with other federal agencies to finalize a legislative proposal needed to ratify the Basel Convention, EPA commented that the agency has already begun working with the State Department and other federal agencies to do so. EPA added that its previous work in developing such a legislative proposal should enable it to successfully complete this effort. We acknowledge this work but point out that Congress will only have the opportunity to deliberate on a tangible proposal if the effort to achieve consensus on an administration-approved position on the matter is accorded the priority needed.

As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Administrator of EPA, and other interested parties.
In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

To examine the Environmental Protection Agency's (EPA) efforts to facilitate the environmentally sound management of used electronics, we reviewed solid and hazardous waste laws and regulations—including the Resource Conservation and Recovery Act and EPA's rule on the management of cathode-ray tubes (CRT)—and their applicability to used electronics. We specifically reviewed EPA documents describing the agency's efforts to enforce the CRT rule and to address concerns raised in our August 2008 report on electronic waste exports, including information on the number of EPA investigations of possible violations of the CRT rule. We also examined publicly available information on specific enforcement actions against companies, companies approved to export CRTs for recycling, and companies that have submitted notifications of exports for reuse, and we obtained aggregate information from EPA on its enforcement efforts. To obtain EPA's views on its efforts, we interviewed officials from the agency's Office of Enforcement and Compliance Assurance and the Office of Solid Waste and Emergency Response.

To examine EPA's promotion of partnership programs, we interviewed EPA officials responsible for implementing or representing the agency's position on Plug-In To eCycling, the Responsible Recycling (R2) practices, and the Electronic Product Environmental Assessment Tool (EPEAT). In addition, we interviewed stakeholders concerned about the management of used electronics—including environmental groups; state and local government officials; and electronics manufacturers, retailers, and recyclers—to obtain their views on EPA's efforts.

To examine the views of manufacturers, retailers, recyclers, state and local governments, and other stakeholders on the state-by-state approach to the management of used electronics, we conducted a broad range of interviews. For each category of stakeholders, we conducted interviews with key national-level organizations or associations with a broad perspective on the management of used electronics across the United States and reviewed any related policy positions or reports. To gain further insights, we interviewed individual stakeholders in each category, including state and local government officials, in five states with electronics recycling legislation that we selected for detailed review—California, Maine, Minnesota, Texas, and Washington. To supplement these detailed reviews, we interviewed state and local government officials in three states without legislation—Arizona, Florida, and New Hampshire. For each interview, we generally discussed the collection and recycling rates for used electronics, the convenience of collection opportunities to consumers, efforts to ensure environmentally sound management, and the impact of the state-by-state approach on implementation of state electronics recycling legislation and on stakeholders' compliance or enforcement efforts.
While recognizing that stakeholders may benefit from state legislation, such as through an increase in business opportunities for electronics recyclers, we specifically asked about the burden (if any) created by the state-by-state approach. For the five states with electronics recycling legislation, we reviewed the laws and related regulations, as well as other documents on the implementation and outcomes of the laws, and we visited the states to conduct in-person interviews.

We encountered a number of limitations in the availability of reliable data on the impact of the state-by-state approach on various stakeholders. For example, the five states we selected did not have data on collection and recycling rates prior to the effective dates of their laws, which would be useful to quantify the impact of their programs. Similarly, some manufacturers and other stakeholders regulated under state laws had concerns about providing proprietary information or did not identify compliance costs in a way that enabled us to determine the portion of costs that stems from having to comply with differing state requirements. Due to such limitations, we relied predominantly on stakeholders' statements regarding how they have been impacted under the state-by-state approach.

Additional information on the stakeholders we interviewed includes the following:

State and local government officials. For a national perspective, we interviewed representatives of the Association of State and Territorial Solid Waste Management Officials, the Eastern Regional Conference of the Council of State Governments, the National Conference of State Legislatures, and the National Governors Association. For the five states with electronics recycling legislation we selected for detailed review, we interviewed state legislators or legislative staff involved in enacting the laws, state environmental agency officials responsible for implementing the laws, and local solid waste management officials. We selected the five states to ensure coverage of the two basic models of state electronics recycling legislation, a recycling fee paid by consumers and producer responsibility, as well as the variations of the producer responsibility model. In addition, we selected states with recycling programs that had been in place long enough for stakeholders to provide an assessment of the impacts of the legislation. For the three states without electronics recycling legislation we selected for detailed review, we conducted telephone interviews with state and local solid waste management officials and (in Arizona and New Hampshire) legislators who have introduced legislation or been active in studying options for the management of used electronics. We selected the three states to include ones that, in part, had addressed the management of certain used electronics through other means, such as a ban on landfill disposal or grants for voluntary recycling efforts, and to ensure variety in terms of location and size.

Electronics manufacturers. For a broad perspective, we interviewed representatives of two national associations of electronics manufacturers: the Consumer Electronics Association and the Information Technology Industry Council. We also interviewed representatives of a judgmental sample of nine individual manufacturers.
We selected manufacturers to interview to include a range of sizes and business models, including manufacturers of information technology equipment and televisions as well as companies that no longer manufacture products covered under state laws but still bear responsibility for recycling costs in some states. In addition to these interviews, we reviewed manufacturers' policy positions and other documents on the state-by-state approach to managing used electronics or on particular state and local electronics recycling legislation.

Electronics retailers. We interviewed representatives of the Consumer Electronics Retailers Coalition, an association of consumer electronics retailers, and of a judgmental sample of four national consumer electronics retailers, including retailers that are also considered manufacturers or collectors under some state electronics recycling legislation. In each of the five states we selected for detailed review, we spoke with representatives from state retail associations, whose members include large national retailers, as well as smaller retailers operating in the five states. We also reviewed available documents pertaining to retailers' efforts in managing used electronics and their policy positions on the state-by-state approach.

Recyclers and refurbishers of used electronics. For a broad perspective from the electronics recycling industry, we interviewed a representative of the Institute of Scrap Recycling Industries, many of whose members are involved in the recycling of used electronics. In addition, for the perspective of refurbishers, we conducted an interview with TechSoup, a nonprofit organization that has established a partnership with Microsoft to increase the number of personal computers available to nonprofits, schools, and low-income families across the globe by reducing the cost of software to refurbishers. We also interviewed representatives of a judgmental sample of recyclers and refurbishers encompassing a variety of sizes and business models, including large corporations operating in multiple states as well as nonprofit organizations or smaller entities operating in a single state. In particular, in each of the five states with electronics recycling legislation we selected for detailed review, we interviewed at least one recycler operating under the state program and one refurbisher.

Environmental and other nonprofit organizations. We interviewed representatives of environmental and other nonprofit organizations that have an interest in the issue of the management of used electronics, including the Basel Action Network, Consumers Union, Electronics TakeBack Coalition, Product Stewardship Institute, and Silicon Valley Toxics Coalition. In addition, in the five states with electronics recycling legislation we selected for detailed review, we interviewed representatives of state environmental organizations that advocated for the state legislation or have been active in tracking the implementation of the laws. For each of the environmental and nonprofit organizations interviewed, we reviewed available documents pertaining to their advocacy work and their views on the state-by-state approach or particular state electronics recycling legislation.
To examine the implications of alternative national strategies to further promote the environmentally sound management of used electronics, we reviewed relevant existing laws relating to solid and hazardous waste management (the Resource Conservation and Recovery Act and the Mercury-Containing and Rechargeable Battery Management Act). In addition, we examined state laws establishing electronics recycling programs or addressing the management of used electronics through other means, such as a ban on landfill disposal, to identify components of the laws that might be addressed under a national approach. We also examined the European Union's directive on waste electrical and electronic equipment and electronics recycling in Canada as examples of how used electronics are managed internationally. As part of our interviews with national-level organizations or associations of stakeholders, as well as with individual stakeholders, we discussed stakeholder efforts to coordinate state electronics recycling programs and stakeholders' policy positions on a national strategy, including their views on the components of a national strategy, such as a mechanism for financing the cost of recycling. Regarding alternative strategies specifically relating to exports of used electronics, we examined ways that state electronics recycling programs we selected for detailed review had addressed the issue, and we interviewed stakeholders regarding current state and EPA efforts to limit potentially harmful exports. We also reviewed EPA documents and interviewed EPA officials regarding the statutory changes necessary for the United States to ratify the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, as well as the implications of ratification on the agency's ability to exercise greater oversight over the export of used electronics for reuse or recycling. Finally, we reviewed EPA's technical assistance comments on a congressional concept paper proposing a framework for establishing a national electronics recycling program.

We conducted this performance audit from May 2009 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The five states with electronics recycling laws that we selected for detailed review—California, Maine, Minnesota, Texas, and Washington—illustrate a range of ways of addressing elements and issues common to the management of used electronics. For each of the states, we describe three key elements we identified as establishing the framework for their recycling programs: (1) the mechanism for financing the cost of collecting and recycling used electronics, (2) the mechanism for providing for the convenient collection of used electronics, and (3) requirements for the environmentally sound management of used electronics collected under the programs and the state's enforcement of the requirements. In addition, because the state electronics recycling programs are relatively new, we describe developments and program changes designed to address issues encountered during the initial implementation of the programs.
California’s electronics recycling law established a funding mechanism to provide for the collection and recycling of certain video display devices that have a screen greater than 4 inches, measured diagonally, and that are identified by the state Department of Toxic Substances Control as a hazardous waste when discarded. According to state officials, the state’s list of covered devices currently includes computer monitors, laptop computers, portable DVD players, and most televisions. California is the only state identified as having an electronics recycling law that established a system to finance collection and recycling costs through a recycling fee paid by consumers. Effective on January 1, 2005, retailers began collecting the fee at the time of purchase of certain video display devices. The fee currently ranges from $8 to $25, depending on screen size. Retailers remit the fees to the state, and they may retain 3 percent as reimbursement for costs associated with collection of the fee. The state, in turn, uses the fees to reimburse collectors and recyclers of covered electronic devices as well as for administering and educating the public about the program. Entities must be approved by the state to be eligible to receive collection and recycling payments. There were about 600 approved collectors and 60 approved recyclers as of October 2009. To determine the amount paid per pound, the state periodically updates information concerning the net costs of collection and recycling and adjusts the statewide payment rates. To assist the state in this effort, approved collectors and recyclers are required to submit annual reports on their net collection and recycling costs for the prior year. As of May 2010, the combined statewide rate for collection and recycling was $0.39 per pound. The administration of the program is shared by three state agencies. The State Board of Equalization is responsible for collecting the fee from and auditing retailers. The Department of Resources Recycling and Recovery (CalRecycle) has overall responsibility for administering collection and recycling payments. Specific duties of CalRecycle include establishing the collection and recycling payment schedule to cover the net costs of authorized collectors and recyclers; approving applications to become an approved collector or recycler; reviewing recycling payment claims for the appropriate collection, transfer, and processing documentation and making payments; and addressing any identified fraud in payment claims. Under the law, CalRecycle is also responsible for reviewing the fee paid by consumers at least once every 2 years and adjusting the fee to ensure sufficient revenues to fund the recycling program. The third agency, the Department of Toxic Substances Control, is responsible for determining whether a video display device, when discarded or disposed of, is presumed to be a hazardous waste under the state health and safety code and, therefore, is a covered electronic device under the electronics recycling legislation. In addition, the department regulates the management of used electronics and conducts annual inspections of recyclers to ensure compliance with applicable laws and regulations. One of the purposes of the California law was to establish a program that is “cost free and convenient” for consumers to return and recycle used electronics generated in the state. 
One of the purposes of the California law was to establish a program that is "cost free and convenient" for consumers to return and recycle used electronics generated in the state. To this end, the law directs the state to establish a payment schedule that covers the net cost for authorized collectors to operate a free and convenient system for collection, consolidation, and transportation. State and local government officials, as well as other state stakeholders we interviewed, told us the law has resulted in convenient collection opportunities. For example, a representative of the state's Regional Council of Rural Counties said that, while the law does not require counties to provide collection opportunities, it had resulted in convenient collection in rural counties. Similarly, according to Sacramento County solid waste management officials, the law has made it profitable for the private sector to collect and recycle used electronics and thereby has freed up county resources to pay for media campaigns to inform the public about the law and to offer curbside collection.

Recyclers approved under the state's payment system for the recycling of covered electronic devices must be inspected at least once annually by the Department of Toxic Substances Control and be found in conformance with the department's regulations to maintain their approval. The department's regulations restrict certain recycling activities—such as using water, chemicals, or external heat to disassemble electronic devices—and specify requirements in a variety of other areas, including training of personnel, record-keeping, and the labeling of containers. In addition, to be eligible for a claim within the payment system, covered devices must be dismantled in California and the residuals generally must be sent to appropriate recycling facilities. Hence, the program does not pay claims for any covered devices that are exported intact. The state's electronics recycling legislation also requires that exporters notify the department and demonstrate that the covered electronic waste or covered electronic devices are being exported for the purposes of recycling or disposal; that the importation of the waste or device is not prohibited by an applicable law in the country of destination; and that the waste or device will be managed only at facilities whose operations meet certain standards for environmentally sound management. (These demonstrations are not required for exports of a component part of a covered electronic device that is exported to an authorized collector or recycler and that is reused or recycled into a new electronic component.) According to a department official responsible for implementing the regulations, the state's ability to withhold payment for the recycling of covered electronic devices is an effective tool for promoting compliance with the regulations. However, the official also said that the state lacks the authority to regulate exports (e.g., exports of CRT glass containing lead for processing in Mexico, which, according to the official, does not have regulations equivalent to those in California).

Key developments since the initiation of California's program in 2005 include the following adjustments to the recycling fee paid by consumers and to the payment schedule for collection and recycling:

Effective January 2009, CalRecycle increased the recycling fee from an initial range of $6 to $10 to the current range of $8 to $25. As described in CalRecycle's January 2008 update on the program, a continued growth in the volume of recycling payment claims resulted in the pace of payments exceeding the flow of revenue generated by the fee.
CalRecycle adjusted the fee to avoid exhausting the fund used to pay for the collection and recycling of used electronics.

In 2008, CalRecycle decreased the payment schedule for combined collection and recycling. The initial rate was $0.48 per pound, based in part on a provisional rate established by the law, and the current rate is $0.39 per pound. According to CalRecycle officials, the initial payment schedule was artificially high, which benefited the program by fostering a recycling infrastructure in the state. CalRecycle adjusted the payment schedule on the basis of an analysis of the net cost reports submitted by collectors and recyclers.

Maine's electronics recycling program began in 2006 and finances the cost of recycling televisions, computers, computer monitors, digital picture frames, printers, and video game consoles from households. Maine's law is based on the concept of "shared responsibility," whereby participating municipalities generally bear the costs associated with collection and manufacturers finance handling and recycling costs associated with managing certain used electronics generated by households. Participating municipalities arrange for these used electronics to be transported to state-approved consolidators, which count and weigh information technology products by brand and manufacturer and determine the total weight of televisions and video game consoles. Consolidators who are also recyclers may then further process the used electronics; otherwise, they send the material to recycling facilities. In either case, consolidators generally invoice individual manufacturers for their handling, transportation, and recycling costs. The state approves each consolidator's fee schedule, currently set at a maximum of $0.48 per pound for combined recovery and recycling, for use when invoicing manufacturers.

For information technology products, the amount invoiced is based on the weight of the manufacturer's own brand of electronics collected under the program (return share) plus a proportional share of products for which the manufacturer cannot be identified or is no longer in business (orphan share). In contrast, for manufacturers of televisions and video game consoles with a national market share that exceeds a certain minimum threshold, the amount invoiced is calculated as the total weight collected multiplied by the proportion of the manufacturer's national market share of sales for those products (recycling share). Initially, Maine's law used only return share as a basis for determining the financial responsibility of all manufacturers. The state amended the law in 2009 to base the financial responsibility of manufacturers of televisions (as well as video game consoles) on market share. The Maine Department of Environmental Protection had recommended this change in part to address the issue of the relatively long lifespan of televisions and the concern among long-standing television manufacturers that, under the return share system, new market entrants do not bear recycling costs and can therefore offer their products at a lower price and possibly even go out of business before their products enter the waste stream.
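A minimal sketch of the two invoicing approaches described above follows, assuming a consolidator fee schedule at the $0.48 per pound maximum and assuming that orphan pounds are allocated in proportion to each manufacturer's identified return share. Manufacturer figures are hypothetical.

# Simplified sketch of Maine's "return share + orphan share" invoicing for
# information technology products and "recycling share" (market share)
# invoicing for televisions and video game consoles. Numbers are hypothetical.

FEE_PER_LB = 0.48  # maximum approved consolidator rate for recovery and recycling

def invoice_it_manufacturer(own_brand_lbs, identified_total_lbs, orphan_lbs, rate=FEE_PER_LB):
    """Return share plus a proportional slice of orphan (unidentifiable) pounds."""
    orphan_share_lbs = orphan_lbs * (own_brand_lbs / identified_total_lbs)
    return (own_brand_lbs + orphan_share_lbs) * rate

def invoice_tv_manufacturer(total_tv_lbs, national_market_share, rate=FEE_PER_LB):
    """Recycling share: total television pounds weighted by national market share."""
    return total_tv_lbs * national_market_share * rate

if __name__ == "__main__":
    # IT example: 40,000 lbs of this brand out of 100,000 identifiable lbs,
    # plus 20,000 lbs of orphan equipment shared proportionally.
    print(f"IT manufacturer invoice: ${invoice_it_manufacturer(40_000, 100_000, 20_000):,.2f}")
    # Television example: 150,000 lbs collected statewide, 12% national market share.
    print(f"TV manufacturer invoice: ${invoice_tv_manufacturer(150_000, 0.12):,.2f}")

The 2009 amendment described above effectively moved television and video game console manufacturers from the first calculation to the second.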
The department’s responsibilities include approving consolidators as well as the fee schedule used by consolidators in charging manufacturers, determining the orphan share for manufacturers of information technology products, and determining the recycling share for manufacturers of televisions and video game consoles on the basis of national sales data. In addition, the department is responsible for enforcing the compliance of manufacturers whose products are sold in the state. Finally, the department notifies retailers of noncompliant manufacturers (retailers are prohibited from selling products of such manufacturers). One of the purposes of Maine’s law is to establish a recycling system that is convenient and minimizes the cost to consumers of electronic products and components. In addition, manufacturers are responsible for paying the reasonable operational costs of consolidators, including the costs associated with ensuring that consolidation facilities are geographically located to conveniently serve all areas of the state as determined by the Department of Environmental Protection. To establish convenient collection opportunities for households, Maine’s program relies on the state’s existing municipal waste collection infrastructure and provides an incentive to municipalities to participate by giving them access to essentially free recycling of certain covered electronics. The law allows participating municipalities to collect used electronics at a local or regional waste transfer station or recycling facility or through other means, such as curbside pickup. According to a 2007 survey supported by the department, most municipalities provide permanent collection sites. About half of the municipalities that responded to the survey reported that they charge end-of-life fees for accepting used electronics from households to offset the costs associated with collection. However, local solid waste management officials we interviewed also told us that the program implemented under the law enabled municipalities to reduce or eliminate fees. For example, the Portland solid waste manager said that the program enabled the city to stop charging residents a fee, which was approximately $20 per television or computer monitor prior to the law. Notably, Maine law now prohibits the disposal of CRTs in landfills and other solid waste disposal facilities. Maine’s law requires that recyclers provide to consolidators a sworn certification that they meet guidelines for environmentally sound management published by the Department of Environmental Protection. Among other things, the guidelines stipulate that recyclers comply with federal, state, and local laws and regulations relevant to the handling, processing, refurbishment, and recycling of used electronics; implement training and other measures to safeguard occupational and environmental health and safety; and comply with federal and international law and agreements regarding the export of used products or materials. Other guidelines specific to exports include a requirement that televisions and computer monitors destined for reuse include only whole products that have been tested and certified as being in working order or as requiring only minor repair, and where the recipient has verified a market for the sale or donation of the equipment. 
The Department of Environmental Protection official in charge of the program told us she has visited the facilities that recycle used electronics collected under Maine's program, but that the department lacks the resources and auditing expertise to ensure adherence to the guidelines as well as the authority to audit out-of-state recyclers.

Since Maine initiated its electronics recycling program, the state has made a number of changes to the law, and the Department of Environmental Protection has suggested additional changes. Such changes include the following:

Scope of covered electronic devices. In 2009, Maine added several products, including digital picture frames and printers, to the scope of covered devices. In its 2008 report on the recycling program, the Department of Environmental Protection had recommended adding digital picture frames and printers for a number of reasons, including the growing volume of such equipment in the waste stream. In its 2010 report, the department also recommended the program be expanded to include used electronics generated by small businesses, thereby increasing the volume of used electronics collected, providing for more efficient transportation from collection sites, and providing for a greater volume to recyclers as a means to drive down the per-pound cost of recycling.

Program administration. Beginning in July 2010, manufacturers of covered devices sold in the state are required to pay an annual registration fee of $3,000 to offset the state's administrative costs associated with the program. In its January 2010 report, the Department of Environmental Protection recommended that the state legislature consider eliminating or reducing the fee for certain manufacturers, such as small television manufacturers. According to the report, an exemption from paying the fee would provide relief to manufacturers that no longer sell or have not sold significant quantities of covered devices in the state.

Recycling costs. In its January 2010 report, the Department of Environmental Protection noted that, while direct comparisons between differing state programs are difficult, recycling costs are higher in Maine than in other states with electronics recycling laws. Representatives of both the Consumer Electronics Association and the Information Technology Industry Council also told us that recycling costs in Maine are higher because the state selects consolidators and approves the fee schedule used by each of the consolidators to invoice manufacturers, thereby limiting competition. To address such concerns, the department stated its intent to take a number of administrative actions. For example, the department plans to streamline the permitting process for facilities that process used electronics and thereby encourage the growth of recycling facilities in the state and reduce the handling and shipping costs for used electronics, much of which is currently processed out of state. The department also plans to examine ways to increase the competitiveness of the cost approval process for consolidators, as well as price limits that could be imposed without compromising the level of service currently afforded to municipalities.

Minnesota initiated its program in 2007 to finance the recycling of certain used electronics from households.
Manufacturers of video display devices (televisions, computer monitors, and laptop computers) with a screen size that is greater than 9 inches, measured diagonally, that are sold in the state are responsible for recycling, including costs, and can also meet their obligations by financing the recycling of printers, keyboards, DVD players, and certain other electronics. Minnesota’s law establishes recycling targets for manufacturers selling video display devices in the state. The targets are set at an amount of used electronics equal to 80 percent of the weight of video display devices sold to households during the year. (The target was 60 percent for the first program year.) Manufacturers that exceed their targets earn recycling credits that can be used to meet their targets in subsequent years or sold to other manufacturers. Conversely, manufacturers that fail to meet their targets pay recycling fees on the basis of how close they are toward meeting their obligation. State officials told us the recycling program is based primarily on market economics and does not require significant government involvement. In particular, the state does not set the prices paid for recycling, and manufacturers have flexibility in selecting collectors and recyclers to work with. Recyclers seek to be reimbursed for their costs by marketing and selling recycling pounds to manufacturers. According to several stakeholders we interviewed about the state’s program, this market-based approach has contributed to lowering recycling costs in the state. The Minnesota Pollution Control Agency has primary responsibility for administering the program. The agency’s responsibilities include reviewing registrations submitted by manufacturers for completeness; maintaining registrations submitted by collectors and recyclers; and conducting educational outreach efforts regarding the program. The state department of revenue reviews manufacturers’ annual registration fees and reports and, among other things, collects data needed to support manufacturers’ fee determinations. The state uses registration fees to cover the cost of implementing the program, which may include awarding grants to entities that provide collection and recycling services. The Minnesota Pollution Control Agency has requested proposals to provide grants for collection and recycling outside of the Minneapolis-St. Paul metropolitan area and expects to award several grants in 2010. Minnesota’s law does not stipulate criteria for the establishment of a statewide collection infrastructure or mandate that any entity serve as a collector, but rather relies on the reimbursement from manufacturers to create an incentive for the establishment of collection opportunities. To foster the availability of collection opportunities outside of the Minneapolis-St. Paul metropolitan area, the law allows 1½ times the weight of covered electronic devices collected outside of the metropolitan area to count toward manufacturers’ recycling targets. Local solid waste management officials we interviewed described the impact of the state’s electronics recycling legislation on the convenience of collection opportunities as dependent upon whether a county already had an established recycling program for used electronics, with a greater impact in counties that did not already have recycling programs. 
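The sketch below illustrates the target arithmetic described above: a manufacturer's annual obligation equals 80 percent of the weight of video display devices it sold to households (60 percent in the first program year), pounds collected outside the Minneapolis-St. Paul metropolitan area count at 1.5 times their weight, and pounds recycled beyond the target generate credits. The sales and collection figures are hypothetical.

# Simplified sketch of Minnesota's recycling targets and credits.
# Sales and collection figures are hypothetical.

TARGET_FRACTION = 0.80       # 80% of the weight of video display devices sold (60% in year one)
NON_METRO_MULTIPLIER = 1.5   # pounds collected outside the Minneapolis-St. Paul area count 1.5x

def recycling_target(lbs_sold_to_households, fraction=TARGET_FRACTION):
    """Annual obligation based on the weight of video display devices sold."""
    return lbs_sold_to_households * fraction

def counted_pounds(metro_lbs, non_metro_lbs):
    """Weight credited toward the target, with the non-metro incentive applied."""
    return metro_lbs + non_metro_lbs * NON_METRO_MULTIPLIER

def credits_or_shortfall(target_lbs, counted_lbs):
    """Positive values are credits for later years; negative values are a shortfall."""
    return counted_lbs - target_lbs

if __name__ == "__main__":
    target = recycling_target(500_000)            # 500,000 lbs of video displays sold
    counted = counted_pounds(300_000, 120_000)    # 300,000 metro + 120,000 non-metro lbs collected
    print(f"Target: {target:,.0f} lbs  Counted: {counted:,.0f} lbs  "
          f"Credit/shortfall: {credits_or_shortfall(target, counted):,.0f} lbs")

As discussed below, a 2009 amendment limits how much of a year's target can be met with carry-over credits.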
Minnesota’s law prohibits the commercial use of prison labor to recycle video display devices and requires that recyclers abide by relevant federal, state, and local regulations and carry liability insurance for environmental releases, accidents, and other emergencies. The law does not establish additional requirements for environmentally sound management. In addition, Minnesota Pollution Control Agency officials said that they have limited resources to ensure that used electronics are managed responsibly, particularly when equipment is shipped out of state, and that enforcement efforts are largely based on self-policing by recyclers and spot checks of larger recyclers. Two recyclers in the state with whom we spoke said that a lack of oversight of recyclers by state authorities had contributed to undercutting by irresponsible recyclers. Minnesota Pollution Control Agency officials said they are seeking to promote certification programs, such as R2 or e-Stewards®, for electronics recyclers operating in the state. Minnesota amended its law in 2009 to make the following changes: The state amended the law to remove the requirement that retailers annually report to each video display device manufacturer the number of the manufacturer’s brand of video display devices sold to households during the previous year. Manufacturers submitted this information to the state, which used it to determine manufacturers’ recycling targets. A representative of the Minnesota Retailers Association said that retailers found this requirement to be a burden. Similarly, according to the Consumer Electronics Retailers Coalition, the state’s reporting requirement imposed a high cost on retailers and increased the risk of the disclosure of proprietary sales data. Minnesota now uses either manufacturer-provided data or national sales data, prorated to the state’s population, to determine manufacturers’ obligations. The state further amended the law to limit the use of recycling credits. Minnesota Pollution Control Agency officials told us this amendment was intended to address a “boom and bust” scenario, whereby manufacturers financed the recycling of large amounts of used electronics in the first program year and accumulated carry-over credits, which they used to meet their recycling targets during the second year. The use of credits left local governments and electronics recyclers responsible for the cost of collecting and recycling used electronics that exceeded manufacturers’ recycling targets. As a result, according to local solid waste management officials we interviewed, some counties reintroduced end-of-life fees and saw an increase in the illegal dumping of used electronics. To address such issues and ensure that a majority of targets are met by the recycling of newly collected material, the amended law limits the portion of a manufacturer’s target that can be met through carry-over credits to 25 percent. Prior to the amendment, the law did not limit the use of recycling credits. Since the implementation of Minnesota’s program, several other states, including Illinois and Wisconsin, have incorporated the use of recycling targets into electronics recycling legislation. Several stakeholders told us they prefer targets as they are designed in the Illinois program. 
For example, a representative of one electronics manufacturer said he expects that manufacturers will have difficulty in meeting their targets in Minnesota in upcoming years after recyclers have worked through the backlog of used electronics stored in consumers' homes prior to implementation of the state's law. In contrast, under the Illinois program, manufacturers' targets are based in part on the total amount recycled or reused during the prior year, such that the targets may be adjusted downward if the amounts collected decrease. Similarly, several refurbishers of used electronics pointed out that Minnesota's law does not allow the refurbishment of covered electronic devices to count toward manufacturers' recycling targets and thereby, according to some stakeholders, may create an incentive to recycle equipment that has been collected but is in working condition or can be refurbished. In contrast, under Illinois' law, the weight of covered electronic devices processed for reuse is doubled when determining whether a manufacturer has met its recycling and reuse target, and the weight is tripled if the refurbished equipment is donated to a public school or nonprofit entity.

Texas' computer equipment recycling program began in 2008 and requires manufacturers to provide opportunities for free collection of desktop and laptop computers, monitors not containing a tuner, and accompanying mice and keyboards from consumers in the state. Consumers are defined as individuals who use computer equipment purchased primarily for personal or home-business use. Texas' computer equipment recycling law is based on the concept of "individual producer responsibility," whereby manufacturers of computer equipment are responsible for implementing a recovery plan for collecting their own brand of used equipment from consumers. The state's program requires that each manufacturer submit its plan to the state and annually report the weight of computer equipment collected, recycled, and reused. The law does not authorize manufacturer registration fees, and manufacturers are free to select the recyclers with whom they work and negotiate recycling rates to be paid.

The Texas Commission on Environmental Quality has the primary responsibility for enforcing the law. The commission's responsibilities include providing information on the Internet about manufacturers' recovery plans; educating consumers regarding the collection, recycling, and reuse of computer equipment; helping to ensure that electronics retailers do not sell the equipment of manufacturers without recovery plans; and annually compiling information submitted by manufacturers and issuing a report to the state legislature. According to commission officials, the absence of manufacturer registration fees has not created a financial burden because the commission already had the expertise and outreach capabilities needed to implement the law.

The Texas law requires that the collection of computer equipment be reasonably convenient and available to consumers in the state. In addition, manufacturers' recovery plans must enable consumers to recycle computer equipment without paying a separate fee at the time of recycling. The law allows manufacturers to fulfill these requirements by offering a system for returning computer equipment by mail, establishing a physical collection site, organizing a collection event, or offering some combination of these or other options.
According to Texas Commission on Environmental Quality officials, most manufacturers have opted to offer a mail-back program, and one manufacturer noted that the mail-back programs may be more convenient for rural residents of the state than a physical collection point. Some manufacturers have provided additional collection options. For example, in addition to providing a mail-back option, Dell has partnered with affiliates of Goodwill Industries in the state to establish a physical collection infrastructure.

The local solid waste management officials we interviewed regarding the state's computer equipment recycling law were critical of the impact of the law on providing collection opportunities and relieving local governments of the burden of managing used electronics. These officials attributed the law's lack of impact to a number of factors, including the inconvenience to consumers of manufacturers' mail-back programs; insufficient education of consumers about recycling opportunities by manufacturers, the Texas Commission on Environmental Quality, or local governments; and manufacturers having responsibility only for the cost of recycling computer equipment collected directly from consumers, not for that collected by local governments (e.g., when consumers may be unaware of the opportunities for free recycling). As a result, while they are not required to collect used computer equipment, local governments bear the costs for the equipment they collect. For example, the solid waste coordinator for one regional council of governments said that the council continues to provide grants to local governments for the management of used electronics.

The Texas electronics recycling law requires that computer equipment collected under the law be recycled or reused in a manner that complies with federal, state, and local law. In addition, the law directed the Texas Commission on Environmental Quality to adopt standards for the management of used electronics developed by the Institute of Scrap Recycling Industries, which represents electronics recyclers, or to adopt such standards from a comparable organization. Among other things, the standards adopted by the commission require that recyclers prioritize refurbishment over recycling and recycling over disposal, ensure that computer equipment is stored and processed in a manner that minimizes the potential release of any hazardous substance into the environment, and have a written plan for responding to and reporting pollutant releases. Manufacturers are required to certify that recyclers have followed the standards in recycling the manufacturers' computer equipment.

Texas Commission on Environmental Quality officials said that, under the commission's risk-based approach to enforcement of environmental regulations, they had not prioritized regular, scheduled enforcement of the requirements for the environmentally sound management of used computer equipment collected under the state's program. They said that they would follow up on any allegations of noncompliance with the requirements, but that they had not received any such complaints. Several recyclers in the state confirmed that there had been minimal oversight of recyclers by the commission and said that manufacturers play a more active role than the commission in ensuring that the recyclers with whom they contract adhere to requirements for environmentally sound management.
In 2009, the Texas state legislature passed a bill that would have required television manufacturers to collect and recycle a quantity of televisions based on each manufacturer's market share of equipment sold in the state. However, the bill was vetoed by the governor, who stated that it was significantly different from the law covering computer equipment—for example, in that the bill would impose fees on television manufacturers and recyclers. Local solid waste management officials we interviewed, as well as a state environmental group that focuses on used electronics, were critical of the governor's veto. For example, according to the environmental group, the bill would have relieved local governments of the costs associated with managing used televisions, and without a law establishing a recycling program, televisions will continue to be disposed of in landfills, a practice that Texas does not prohibit. Washington's electronics recycling law was passed in 2006, and the program began full operation in 2009. The program covers the costs associated with collecting, transporting, and processing desktop and laptop computers, computer monitors, and televisions generated by households, charities, school districts, small businesses with fewer than 50 employees, and small governments (cities with a population of fewer than 50,000, counties with a population of fewer than 125,000, and special purpose districts). Under Washington's law, manufacturers are required to finance the collection, transportation, and recycling of certain used electronics. The law allows manufacturers to meet this requirement by implementing an independent, state-approved collection and recycling plan or by participating in the default "standard plan." In addition, the law requires that individual manufacturers register with the Department of Ecology, the state agency responsible for administering the law, and pay a fee to cover the department's administrative costs. The fees are based on a sliding scale linked to a manufacturer's annual sales of covered electronic products in the state. The specific responsibilities of the department include reviewing the standard plan as well as any independent plans submitted by manufacturers for the department's approval; establishing an annual process for local governments and local communities to report their satisfaction with the services provided by the plans; registering manufacturers, collectors, transporters, and processors for the program; and enforcing the law (e.g., by issuing warnings and penalties against manufacturers selling covered products in the state if they are not participating in an approved plan). The standard plan is implemented by the Washington Materials Management and Financing Authority, a public body created by the state's law. All manufacturers are required to be members of the authority and the standard plan unless they opt out of the standard plan by gaining the state's approval for their own independent plan. Currently, all manufacturers affected by the state's law meet their requirements through participation in the standard plan. The Washington Materials Management and Financing Authority assesses individual manufacturers for collection and recycling costs, as well as the authority's administrative costs, on the basis of a combination of market share and return share, with the return share based on an annual sampling of used electronics collected under the state's program.
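The blending of market share and return share can be pictured with a small sketch. The statute's exact weighting is not described here, so the 50/50 split, the manufacturer names, the shares, and the program cost below are all assumptions used only to illustrate the idea.

```python
# Sketch of a blended market-share/return-share cost assessment, in the spirit of
# Washington's standard plan. The 50/50 weighting and every figure are assumptions.
def assessment_shares(market_share: dict[str, float],
                      return_share: dict[str, float],
                      weight_market: float = 0.5) -> dict[str, float]:
    """Blend each manufacturer's market share and return share into a cost share."""
    weight_return = 1.0 - weight_market
    makers = set(market_share) | set(return_share)
    return {m: weight_market * market_share.get(m, 0.0)
               + weight_return * return_share.get(m, 0.0) for m in makers}

market = {"Maker A": 0.40, "Maker B": 0.35, "Maker C": 0.25}   # share of in-state sales (assumed)
returns = {"Maker A": 0.50, "Maker B": 0.20, "Maker C": 0.30}  # share of sampled collected units (assumed)
total_program_cost = 1_000_000.0                               # assumed annual cost, dollars
for maker, share in sorted(assessment_shares(market, returns).items()):
    print(f"{maker}: {share:.0%} of costs = ${share * total_program_cost:,.0f}")
```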
The authority uses the assessments paid by manufacturers to reimburse individual collectors, transporters, and recyclers at rates negotiated with the authority. According to the director of the authority, the combined rate for the collection, transportation, and recycling of used electronics, as well as administrative costs, was $0.24 per pound in 2009 (a simple allocation of such a per-pound rate is sketched at the end of this discussion). A number of stakeholders noted that the authority can negotiate relatively low prices, in comparison with some other state electronics recycling programs, because of the authority's purchasing power over electronics recycling services in the state. Washington's electronics recycling law includes a number of specific requirements for the establishment of a convenient collection network throughout the state, in both urban and rural areas. In particular, the law requires that each plan provide collection service in every county and every city or town with a population greater than 10,000. Collection sites may include electronics recyclers and repair shops, recyclers of other commodities, reuse organizations, charities, retailers, government recycling sites, or other locations. Plans may limit the number of used electronics accepted per customer per day or per delivery at a collection site or service but are also required to provide free processing of large quantities of used electronics generated by small businesses, small governments, charities, and school districts. Local solid waste management officials told us the law has had a positive impact on promoting the collection of used electronics in the state. One of these officials also said that the law's implementation has eliminated the cost burden on local government for managing used electronics. In contrast, representatives of several manufacturers, as well as the Consumer Electronics Association, told us that the law's requirements for convenience are too prescriptive and have served as an impediment for manufacturers to obtain approval for their independent plans. Along these lines, in 2009, the Department of Ecology rejected two independent plans submitted by manufacturers because the department concluded that the plans did not meet the law's convenience criteria. Department officials told us they expect the plans to be resubmitted and approved once the manufacturers submitting the plans demonstrate that they can meet the convenience criteria. The Department of Ecology established both minimum standards and voluntary "preferred" standards for the environmentally sound management of used electronics. Among other things, the minimum standards require that recyclers implement an environmental, health, and safety management system; remove any parts that contain materials of concern, such as devices containing mercury, prior to mechanical or thermal processing and handle them in a manner consistent with the regulatory requirements that apply to the items; and not use prison labor for the recycling of used electronics. The department encourages recyclers to conform to the preferred standards and identifies recyclers that do so on its Web site. In addition, the Washington Materials Management and Financing Authority made the preferred standards a requirement for all recyclers with whom the authority contracts under the standard plan.
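The combined $0.24-per-pound figure cited above for 2009 lends itself to a quick back-of-the-envelope calculation. The split of that rate among collection, transportation, recycling, and administration below is an assumption for illustration; only the combined rate comes from the text.

```python
# Minimal sketch of per-pound reimbursement under a combined rate of $0.24/lb.
# The split among functions and the pounds handled are assumptions.
COMBINED_RATE = 0.24  # dollars per pound (2009 combined rate cited in the text)
ASSUMED_SPLIT = {"collection": 0.05, "transportation": 0.04,
                 "recycling": 0.13, "administration": 0.02}

def reimbursements(pounds: float) -> dict[str, float]:
    """Allocate the combined per-pound payment across program functions."""
    assert abs(sum(ASSUMED_SPLIT.values()) - COMBINED_RATE) < 1e-9
    return {function: rate * pounds for function, rate in ASSUMED_SPLIT.items()}

pounds_collected = 250_000  # assumed pounds handled in a year
for function, amount in reimbursements(pounds_collected).items():
    print(f"{function}: ${amount:,.2f}")
print(f"total: ${COMBINED_RATE * pounds_collected:,.2f}")
```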
Among other things, the preferred standards stipulate that recyclers use only downstream vendors that adhere to both the minimum and voluntary standards with respect to materials of concern; ensure that recipient countries legally accept exports of materials of concern; and, as with the minimum standards, undergo an annual audit of the recycler’s conformance with the standards. Department of Ecology officials said that the authority’s requirement that recyclers achieve preferred status had enabled the authority to achieve more than what the state could legally require, particularly regarding exports. Washington amended its law in 2009 to authorize collectors in receipt of fully functioning computers to sell or donate them as whole products for reuse. The amendment requires that collectors not include computers gleaned for reuse when seeking compensation under a standard or independent plan. In addition, when taking parts from computers submitted for compensation (i.e., for recycling) to repair other computers for reuse, collectors must make a part-for-part exchange with the nonfunctioning computers submitted for compensation. According to Department of Ecology officials, the provisions pertaining to reuse in both the department’s original regulations and the amendment are intended to prevent collectors from stripping valuable components from used electronics for export to markets with poor environmental standards, and sending only the scrap with no value to the recyclers used by a standard or independent plan. Similarly, a Washington refurbisher told us that the requirement for a part-for-part exchange when repairing equipment is intended to address the concern that collectors might export valuable components pulled out of equipment and receive a higher rate of compensation than by submitting the equipment to a recycler. According to the refurbisher, the amendment has improved the impact of Washington’s law on the ability to refurbish and reuse equipment but has also resulted in unnecessary work to reinstall components into equipment sent for recycling. In addition to the contact named above, Steve Elstein, Assistant Director; Elizabeth Beardsley; Mark Braza; Joseph Cook; Edward Leslie; Nelson Olhero; Alison O’Neill; and Tim Persons, Chief Scientist, made key contributions to this report.
Low recycling rates for used televisions, computers, and other electronics result in the loss of valuable resources, and electronic waste exports risk harming human health and the environment in countries that lack safe recycling and disposal capacity. The Environmental Protection Agency (EPA) regulates the management of used electronics that qualify as hazardous waste and promotes voluntary efforts among electronics manufacturers, recyclers, and other stakeholders. However, in the absence of a comprehensive national approach, a growing number of states have enacted electronics recycling laws, raising concerns about a patchwork of state requirements. In this context, GAO examined (1) EPA's efforts to facilitate environmentally sound used electronics management, (2) the views of various stakeholders on the state-by-state approach, and (3) considerations to further promote environmentally sound management. GAO reviewed EPA documents, interviewed EPA officials, and interviewed stakeholders in five states with electronics recycling legislation. EPA's efforts to facilitate the environmentally sound management of used electronics consist largely of (1) enforcing its rule for the recycling and exporting of cathode-ray tubes (CRT), which contain significant quantities of lead, and (2) an array of partnership programs that encourage voluntary efforts among manufacturers and other stakeholders. EPA has improved enforcement of export provisions of its CRT rule, but issues related to exports remain. In particular, EPA does not specifically regulate the export of many other electronic devices, such as cell phones, which typically are not within the regulatory definition of hazardous waste despite containing some toxic substances. In addition, the impact of EPA's partnership programs is limited or uncertain, and EPA has not systematically analyzed the programs to determine how their impact could be augmented. The views of stakeholders on the state-by-state approach to managing used electronics have been shaped by the increasing number of states with electronics recycling legislation. To varying degrees, the entities typically regulated under the state laws--electronics manufacturers, retailers, and recyclers--consider the increasing number of state laws to be a compliance burden. In contrast, in the five states GAO visited, state and local solid waste management officials expressed overall support for states taking a lead role in the absence of a national approach. The officials attributed their varying levels of satisfaction more to the design and implementation of individual state recycling programs, rather than to the state-by-state approach. Options to further promote the environmentally sound management of used electronics involve a number of policy considerations and encompass many variations, which generally range from a continued reliance on state recycling programs to the establishment of federal standards via legislation. The first approach provides the greatest degree of flexibility to states, but does not address stakeholder concerns that the state-by-state approach is a compliance burden or will leave some states without electronics recycling programs. Moreover, EPA does not have a plan for coordinating its efforts with state recycling programs or articulating how EPA's partnership programs can best assist stakeholders to achieve the environmentally sound management of used electronics. 
Under the second approach, a primary policy issue is the degree to which federal standards would allow for stricter state standards, thereby providing states with flexibility but also potentially worsening the compliance burden from the standpoint of regulated entities. As a component of any approach, a greater federal regulatory role over exports could address limitations on the authority of states to regulate exports. GAO previously recommended that EPA submit to Congress a legislative proposal for ratification of the Basel Convention, a multilateral environmental agreement that aims to protect against the adverse effects resulting from transboundary movements of hazardous waste. EPA officials told GAO that the agency had developed a legislative proposal under previous administrations but had not finalized a proposal with other federal agencies. GAO recommends that the Administrator, EPA, (1) examine how EPA's partnership programs could be improved to contribute more effectively to used electronics management and (2) work with other federal agencies to finalize a legislative proposal on ratification of the Basel Convention for congressional consideration. EPA agreed with the recommendations.
Federal budget accounts are a product of the needs and goals of many users and reflect the many roles they have been asked to fill. The present budget account "structure" was not created as a single integrated framework but rather developed, for the most part, as separate budget accounts over time to respond to specific needs. Viewing these individually developed accounts collectively discloses not only the variety within the current structure but also its complexity. Our review of fiscal year 1995 budget accounts revealed a structure characterized by a concentration of budgetary resources in a few large accounts and a scattering of remaining resources among hundreds of other accounts; a mix of account orientations with an emphasis on programs and processes, rather than objects of expense or organizations; over 70 percent of total budgetary resources available in fiscal year 1995 from sources which did not require congressional approval in the current year; and extensive use of general funds to provide most budgetary resources to most accounts, but with special and trust funds supporting about 30 percent of total resources and 20 percent of all accounts. These observations vary significantly among federal missions, federal organizations, and appropriations subcommittees and help to illustrate the intricate network of relationships within the budget account structure. As a result, cross-cutting initiatives that affect budget accounts will encounter, across hundreds of accounts, a fundamentally heterogeneous structure that serves many different needs and objectives. Nearly 50 years ago, the Hoover Commission examined the federal budget account structure and concluded, "The present appropriation structure underlying the budget is a patchwork affair evolved over a great many years and following no rational pattern." The Balanced Budget and Emergency Deficit Control Act of 1985, as amended, defines an account as "an item for which appropriations are made in any appropriation Act and, for items not provided for in appropriation Acts, ...means an item for which there is a designated budget account identification code number in the President's budget." We began our analysis from this definition of an account. However, within federal budgeting and financial management, operational definitions for the term "account" vary depending on the user and the purpose to be served. For example, congressional appropriators establish budget accounts to facilitate congressional allocation and oversight responsibilities. The President's budget presentation generally reflects this structure but may consolidate separate items into a single account. Agency officials use these budget account structures to report to the Congress and the Office of Management and Budget (OMB), but they often rely on more detailed account structures—such as standard general ledger accounts which integrate proprietary and budgetary accounting, internal budgetary allotment schedules, or project and activity plans maintained by program managers—to monitor expenditures and performance and for other management needs. Once defined, accounts may be quantified in various ways depending on the user and the purpose to be served. A wide variety of budgetary information and subsidiary classifications are available to meet the many needs of different users.
For example, users interested in relative priorities within the annual budget process might concentrate on budget authority; those interested in the approaches used by government to address its needs might look to obligations; and those interested in deficits and how much the government ultimately spends might emphasize outlays. Each perspective would produce a different, but equally valid, universe of budget accounts. For this report, we have used budgetary resources as reported in the President’s budget presentation to define and measure the universe of accounts. Budgetary resources are equivalent to all available budget authority—appropriations, borrowing and contract authority, reappropriations, and offsetting collections from the public and other federal organizations, net of transferred authority and statutory limitations. This approach captures a very large universe by including all accounts with budgetary resources available for obligation, but it can be confusing when compared to the outlays occurring in a given fiscal year. Because budgetary resources include current and permanent authority as well as resources available from offsetting collections and from prior years, they may vary significantly from outlays. This was the case for the fiscal year 1995 estimate, as reported in the President’s fiscal year 1996 budget, which projected budgetary resources of $2.5 trillion and net outlays of $1.5 trillion. Lastly, it might be inferred that discussing budget accounts as a structure, rather than as separate and independent decisions as indicated by the 1985 act quoted above, suggests that there is or should be a set of coherent rules and criteria. This is not our intention. In this report, we examine budget accounts collectively for two reasons. First, the concept of a budget account structure is accepted among budget practitioners and academics and allows for succinct references to an ever-changing and complex environment. Second, analyses which describe the budget account structure in terms of the characteristics and patterns of its constituent parts—the separate accounts—can provide rich insights into the federal budget process and are necessary to the consideration of cross-cutting proposals. For example, the following recent congressional actions and administration initiatives call for or suggest certain cross-cutting changes to budget accounts. The Government Performance and Results Act (GPRA) of 1993 was enacted to enhance program management, public accountability, and congressional decision-making by establishing a process to set strategic and annual program goals and to measure accomplishments. By the fall of 1997, executive organizations are required to submit to OMB an annual performance plan which establishes a target level of performance for each project or activity listed in the “program by activities” section of each budget account presentation. 
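Where the text above defines budgetary resources as the sum of available budget authority, a small sketch can make the tally concrete. The field names and amounts below are illustrative assumptions, not actual account data, and the structure is simplified (for example, statutory limitations are ignored).

```python
# Simplified tally of an account's budgetary resources from the components named
# in the text: appropriations (current authority), borrowing and contract
# authority, reappropriations, offsetting collections, and prior-year balances,
# net of transferred authority. All names and figures are illustrative.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    appropriations: float = 0.0
    borrowing_and_contract_authority: float = 0.0
    reappropriations: float = 0.0
    offsetting_collections: float = 0.0
    prior_year_balances: float = 0.0
    net_transfers_out: float = 0.0

    def budgetary_resources(self) -> float:
        return (self.appropriations + self.borrowing_and_contract_authority
                + self.reappropriations + self.offsetting_collections
                + self.prior_year_balances - self.net_transfers_out)

acct = Account("Illustrative account", appropriations=500.0,
               offsetting_collections=120.0, prior_year_balances=80.0,
               net_transfers_out=25.0)
print(f"{acct.name}: {acct.budgetary_resources():.1f} (millions, assumed)")
```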
Beginning with the February 1998 submission of the fiscal year 1999 budget, the President is required to transmit to the Congress a "Federal Government performance plan for the overall budget." The National Performance Review (NPR), under the leadership of the Vice President, is an executive branch management reform effort intended to make the government "work better and cost less." Among hundreds of NPR recommendations, generally intended to emphasize results and enhance managerial flexibility, were several dealing with "mission-driven, results-oriented budgeting." Some of the most significant recommendations concerning the budget account structure were proposals to (1) restructure budget accounts to reduce over-itemization and to align them with programs, (2) budget and manage on the basis of operating costs, and (3) identify accounts that should be converted to multi-year or no-year status. The Federal Accounting Standards Advisory Board (FASAB) was created to consider and recommend accounting principles for the federal government. Recently, FASAB has proposed cost accounting standards, which (1) focus on "responsibility segments," defined as components associated with a specific mission, conducting a major activity, or producing one or more related products and services, and (2) capture, for responsibility segments, "full costs," defined as the costs of resources consumed directly or indirectly plus the costs of identifiable supporting services. Restructuring budget accounts to align with programs and outputs could be one outgrowth from budgetary and financial accounting that tracks entitywide expenditures and expenses. A persistent pattern permeating the budget account structure is the unequal distribution of budgetary resources across accounts. When the number and size of accounts are compared, an inverse relationship is revealed. Figure 1 displays this "bookend" relationship. As figure 1 shows, nearly 80 percent of the federal government's resources are clustered in less than 5 percent of budget accounts (47 out of 1,303). Conversely, 85 percent of all budget accounts contain about 6 percent of the federal government's total budgetary resources ($138.8 billion out of total budgetary resources of $2.5 trillion). Collectively in fiscal year 1995, there are 199 accounts with total budgetary resources over $1 billion and 162 accounts with budgetary resources of less than $1 million. Table 4 on page 17 more fully depicts the largest and smallest accounts. A similar pattern of inequality emerges when comparing accounts and budgetary resources across federal missions, organizations, and appropriations subcommittees. The three missions with the fewest accounts—social security, net interest, and medicare—are among the largest in terms of available 1995 budgetary resources. Consistent with the generally inverse relationship between accounts and budgetary resources, the missions with the most accounts—general government and natural resources and environment—have collectively only about 4 percent of total budgetary resources. The Departments of Commerce and of Health and Human Services (HHS) have the same number of accounts but vastly different amounts of budgetary resources—$6.7 billion and $383.8 billion, respectively. Conversely, although the Departments of Defense and of the Treasury have comparable resource levels—$444.1 billion and $418.7 billion, respectively—Defense has more than twice the number of accounts.
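The "bookend" concentration shown in figure 1 reflects a simple calculation over the ranked list of account sizes. The sketch below uses a synthetic, highly skewed set of account amounts, not the actual fiscal year 1995 data, to show the kind of computation involved.

```python
# Concentration of resources in the largest accounts, computed over a ranked
# list of account sizes. The account amounts here are synthetic (a skewed
# Pareto draw), not the FY 1995 data behind figure 1.
import random

def concentration(amounts: list[float], top_n: int) -> tuple[float, float]:
    """Return (share of accounts, share of resources) held by the top_n largest accounts."""
    ranked = sorted(amounts, reverse=True)
    return top_n / len(ranked), sum(ranked[:top_n]) / sum(ranked)

random.seed(0)
accounts = [random.paretovariate(1.2) for _ in range(1_000)]
acct_share, res_share = concentration(accounts, top_n=50)
print(f"Largest 50 accounts: {acct_share:.1%} of accounts, {res_share:.1%} of resources")
```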
Two appropriations subcommittees—Interior and Labor, Health and Human Services, Education, and Related Agencies—appropriate to similar numbers of accounts but provide widely different resource levels ($15.0 billion and $260.6 billion, respectively). The subcommittee concerned with the Departments of Commerce, Justice, and State has the most accounts—almost 17 percent of the accounts which the appropriations subcommittees acted on in 1995—but appropriated only about 4 percent of 1995 budgetary resources. In establishing a budget account, the Congress articulates its interests, which in turn define the account’s orientation. The current budget account structure displays a mix of orientations, reflecting both the 200 years of federal budget development and varying congressional interests. We reviewed the fiscal year 1995 accounts of three judgmentally selected organizations—the Departments of Energy (DOE), HHS, and Treasury—and found that these accounts appear to emphasize program and process orientations to a greater extent than objects and organizations. Within these three organizations, program and process accounts represented 60 percent of accounts and 55 percent of total budgetary resources. Each of the four orientations used in this report—object, organization, process, and program—reflects a specific focus or interest of the Congress. An object orientation emphasizes the items of expense, while an organization orientation focuses on the responsible governmental unit. In effect, the former stresses control of spending on an item-by-item basis; the latter accentuates accountability. Accounts with a process orientation concentrate on the specific operations or approaches underlying federal activities, while those with a program orientation focus on the missions and objectives of governmental units. Object, organization, process, and program orientations are found throughout the federal budget account structure. Each account generally will have a predominant orientation but may have characteristics of other orientations. Thus, assigning an account to a specific orientation reflects a judgment based on interpretations of an account’s statutory language and obligation patterns. Table 1 presents examples of account orientations from DOE, HHS, and Treasury. Table 2 presents the results of our assessments of the account orientations for DOE, HHS, and Treasury, showing the percentage of accounts and budgetary resources for each orientation. Table 2 shows that more than half of the accounts in each of the three organizations have a program or process orientation. Fifty percent or more of DOE’s and HHS’ accounts had a program orientation, and these accounts held most of their budgetary resources. A total of 42 percent of Treasury’s accounts were oriented to process, but 82 percent of its resources were in object accounts, principally due to its large Interest on the Public Debt account. Excluding this account changes the distribution of resources to 10 percent object, 1 percent organization, 60 percent process, and 28 percent program. Accounts with a program orientation do not necessarily capture all related program costs. For example, the costs of providing Medicare are spread among at least three accounts with different orientations: Federal Hospital Insurance Trust Fund (program orientation), Federal Supplementary Medical Insurance Trust Fund (program orientation), and Program Management (object orientation). 
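The renormalization described above for Treasury, recomputing orientation shares after setting aside the Interest on the Public Debt account, is straightforward to express. In the sketch below, only that exclusion step mirrors the text; the smaller account names and all dollar amounts are invented.

```python
# Share of budgetary resources by account orientation, with an option to exclude
# a named account (e.g., Interest on the Public Debt). Amounts are invented.
def shares_by_orientation(accounts: list[tuple[str, str, float]],
                          exclude: frozenset[str] = frozenset()) -> dict[str, float]:
    """accounts: (account name, orientation, budgetary resources in $ billions)."""
    kept = [(orientation, amount) for name, orientation, amount in accounts
            if name not in exclude]
    total = sum(amount for _, amount in kept)
    shares: dict[str, float] = {}
    for orientation, amount in kept:
        shares[orientation] = shares.get(orientation, 0.0) + amount / total
    return shares

treasury = [  # hypothetical accounts; only the first account name comes from the text
    ("Interest on the Public Debt", "object", 330.0),
    ("Hypothetical object account", "object", 9.0),
    ("Hypothetical process account", "process", 53.0),
    ("Hypothetical program account", "program", 25.0),
    ("Hypothetical organization account", "organization", 1.0),
]
print(shares_by_orientation(treasury))
print(shares_by_orientation(treasury, exclude=frozenset({"Interest on the Public Debt"})))
```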
Other programs place salaries and expenses in accounts separate from other program expenditure accounts. Conversely, some accounts include the costs of a number of programs and activities within a single account. For example, HHS' Children and Families Services Programs account includes Head Start and many other social service and community services programs, while DOE's Economic Regulation account captures the costs of both the Economic Regulatory Administration and the Office of Hearings and Appeals. One of the more informative ways to characterize an account is by its resource and fund types. These closely related dimensions illustrate the extent to which an account relies on current year authority, as opposed to other permanent or available types of funding, and the degree of earmarking or restriction associated with receipts available to an account. Resource type indicates when and how an account received resources available for a particular fiscal year. We identified and analyzed four resource types: resources received in prior years, current authority, permanent authority, and offsetting collections. Fund type refers to the extent of designation or restriction of the receipts associated with an account. We analyzed five fund types: general funds, intragovernmental revolving funds, public enterprise funds, special funds, and trust funds. Both resource and fund types are defined in the glossary in appendix I. Table 3 summarizes these characteristics in terms of the number of accounts and amount of budgetary resources for fiscal year 1995. As shown in table 3, permanent authority, the resource type associated with the fewest accounts, provides nearly half of the budgetary resources available in fiscal year 1995. Conversely, prior year authority, the most common resource type among accounts, provides the fewest budgetary resources (12.5 percent). Over one-third of fiscal year 1995 budget accounts have access to offsetting collections. Combining offsetting collections with prior year funding and permanent authority means that approximately 70 percent of total budgetary resources (about $1.76 trillion out of $2.47 trillion) were available for obligation without further action by the Congress in fiscal year 1995. Within fund types, general funds comprise about two-thirds of all accounts and about 55 percent of available budgetary resources. About 22 percent of accounts and 31 percent of available resources involve designated or restricted receipts in trust and special funds. A fund type generally aligns with a specific resource type. Current authority provides more than half of the resources in general fund accounts. Permanent authority provides over 80 percent of the resources to trust fund accounts and almost 50 percent of resources to special fund accounts. Offsetting collections provide 89 percent of the funding to intragovernmental revolving funds and 54 percent of the resources to public enterprise funds. (See appendix II, figure II.1 on page 34.) Again, interesting and variable patterns emerge when resource type and fund type are applied to federal missions, organizations, and appropriations subcommittees. The figures in appendix II present detailed information on these patterns. The following are some of the observations that can be drawn from that information. The federal missions used in this report correspond to the 18 OMB budget function classifications described in appendix I. Except for net interest and medicare, the missions have all types of available budgetary resources.
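The "approximately 70 percent" figure cited above is a ratio of the two totals given in the text, as the short calculation below shows; the residual is, by definition, the share requiring current-year congressional action.

```python
# Arithmetic behind the statement that roughly 70 percent of FY 1995 budgetary
# resources were available without further congressional action. Only the two
# totals cited in the text are used.
total_budgetary_resources = 2.47   # $ trillions
available_without_action = 1.76    # permanent + prior-year authority + offsetting collections
requiring_current_action = total_budgetary_resources - available_without_action
print(f"Available without further congressional action: "
      f"{available_without_action / total_budgetary_resources:.0%}")
print(f"Requiring current-year action: ${requiring_current_action:.2f} trillion "
      f"({requiring_current_action / total_budgetary_resources:.0%})")
```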
(See appendix II, figure II.2 on page 36.) Permanent authority provides nearly all the budgetary resources for net interest and social security and is the dominant resource type for medicare (83 percent). However, this resource type provides less than 10 percent of available budgetary resources for 8 other missions. Seven missions (general science, space, and technology; education, training, employment, and social services; administration of justice; veterans benefits and services; natural resources and environment; national defense; and health) received more than 50 percent of their available resources from current authority, while 6 missions had less than 20 percent of their available resources in current authority. International affairs and community and regional development have the greatest share of their resources provided through prior year funding (63 percent and 37 percent respectively), but more than half of the mission areas receive less than 13 percent of their budgetary resources from this source. Offsetting collections are significant only to commerce and housing credit (62 percent), energy (55 percent), and agriculture (43 percent). While virtually all missions have general fund and trust fund accounts, more than three-fourths have special, public enterprise, or intragovernmental fund types. (See appendix II, figure II.5 on page 44.) Twelve of the 18 missions receive 50 percent or more of available resources from the general fund. Three missions—net interest; general science, space, and technology; and education, training, employment, and social services—are virtually fully funded through the general fund. Conversely, 2 missions—social security and commerce and housing credit—receive less than 5 percent of their available resources from the general fund. Trust funds provide the dominant share of budgetary resources for 3 missions—social security (99 percent), medicare (81 percent) and transportation (71 percent)—but represent 1 percent or less of budgetary resources in 8 other missions. Public enterprise funds are the most significant source of budgetary resources for 3 missions—commerce and housing credit (95 percent of budgetary resources in 30 percent of accounts), agriculture (80 percent of budgetary resources in 19 percent of accounts), and energy (55 percent of resources in 15 percent of accounts). However, public enterprise funds provide less than 5 percent of the budgetary resources for 12 other missions. Only the general government mission has a significant amount of its budgetary resources (41 percent) provided by intragovernmental revolving funds; 13 missions receive less than 1 percent of their resources from these funds. Only 1 mission—natural resources and environment—has a sizeable number of special fund accounts (24 percent), but these accounts amount to only 6 percent of its budgetary resources. For this report, we considered all federal government entities that received budgetary resources in fiscal year 1995. 
For presentation purposes, the federal organizations shown in our analyses include (1) all departments and agencies separately displayed in the President’s budget, (2) the legislative and judicial branches, (3) the Executive Office of, and Funds Appropriated to, the President, and (4) the following independent agencies: the Export-Import Bank of the United States, the Federal Deposit Insurance Corporation, the Federal Emergency Management Agency, the National Science Foundation, the Postal Service, the Railroad Retirement Board, the Resolution Trust Corporation, the Smithsonian Institution, and the United States Information Agency. In our analyses, major organizations include all departments and the Environmental Protection Agency. (See appendix II, figure II.6 on page 46 and appendix III, figure III.6 on page 57 for federal organizations shown in our analyses and appendix IV for a description of how these organizations were selected.) Permanent authority provides more than half of available budgetary resources for 5 organizations: the Social Security Administration (93 percent), Treasury (86 percent), HHS (56 percent), the Railroad Retirement Board (54 percent), and Labor (51 percent). However, about two-thirds of the organizations received less than 10 percent of their available budgetary resources from permanent authority. For many federal organizations—including HHS; the Departments of Housing and Urban Development (HUD), Labor, the Interior, the Treasury, and Transportation; the General Services Administration (GSA); and the Office of Personnel Management (OPM)—current authority is less than half of total available budgetary resources. None of the major organizations has more than 80 percent of available budgetary resources stemming from current authority. Offsetting collections provided between 10 percent and 24 percent of available resources for 9 of 15 major organizations: the Department of Defense (DOD), Energy, Commerce, Agriculture, HUD, Justice, Labor, State, and the Interior. Five other organizations had more than 50 percent of budgetary resources in collections—the Postal Service, the Tennessee Valley Authority (TVA), GSA, the Resolution Trust Corporation (RTC), and the Small Business Administration (SBA). Resources from prior year authority comprise 13 percent of all budgetary resources to federal organizations; however, they are a significant percentage of resources for only one major organization, HUD (49 percent). Although general funds are the principal fund type for most federal organizations, there are some notable exceptions. (See appendix II, figure II.6 on page 46.) Trust funds are the dominant fund type in the Social Security Administration (90 percent of budgetary resources), the Railroad Retirement Board (87 percent), OPM (75 percent), and the Departments of Transportation (74 percent) and Labor (53 percent). However, trust funds provided less than 10 percent of the budgetary resources to more than two-thirds of the organizations. Public enterprise funds are dominant for TVA, the Federal Deposit Insurance Corporation, RTC, and the Postal Service (all about 100 percent), but about one-third of the organizations did not receive any budgetary resources from these funds. Intragovernmental revolving funds are a significant fund type only for GSA (98 percent); about half of the organizations did not receive resources from this fund type. 
Special funds provided less than 1 percent of budgetary resources to federal organizations but were about 17 percent of the budgetary resources of both the Interior and the Legislative Branch. The budget accounts acted on by appropriations subcommittees represent a smaller universe than the 1,303 accounts with $2.5 trillion in available budgetary resources that we analyzed to this point. In fiscal year 1995, appropriations subcommittees provided $903 billion in available budgetary resources to 860 accounts. This smaller universe of accounts and resources results from excluding (1) resources provided by authorizing committees and (2) resources available from prior years. Current authority is provided exclusively through the appropriations process. Not surprisingly, current authority was the principal budgetary resource among the appropriations subcommittees, comprising almost 80 percent of budgetary resources appropriated. Permanent authority provides more than 10 percent of resources for only one subcommittee—the Labor, Health and Human Services, Education and Related Agencies subcommittee. Offsetting collections are associated with 39 percent of subcommittee accounts and provide 15 percent of budgetary resources. The subcommittees in which offsetting collections represent a larger share of resources are Defense (29 percent); Transportation and Related Agencies (27 percent); Treasury, Postal Service, and General Government (26 percent); Military Construction (20 percent); and Energy and Water Development (19 percent). (See appendix II, figure II.4 on page 42.) Similarly, budget accounts acted on by appropriations subcommittees are largely associated with the general fund. For example, 96 percent or more of budgetary resources come from general funds for all subcommittees except the following: Energy and Water Development (89 percent general funds); Defense (78 percent general funds, 22 percent intragovernmental revolving funds); Treasury, Postal Service, and General Government (76 percent general funds, 24 percent intragovernmental revolving funds); and Transportation and Related Agencies (69 percent general funds, 26 percent trust funds). Overall, less than 1 percent of all subcommittee resources were appropriated to accounts with special funds and public enterprise funds. (See appendix II, figure II.7 on page 50.) As the preceding discussion has shown, each dimension or characteristic can provide some insight into the federal budget account structure. The following discussion highlights some of the patterns which can be detected when one or more of the variables is mapped against the others. However, this represents only a preliminary and high-order analysis. Each dimension discussed in this report also could be analyzed at the individual budget account level. This kind of detailed analysis would be needed to address specific questions or to comment on cross-cutting proposals. As discussed earlier, the account structure comprises a few very large accounts and many very small accounts (see figure 1). This raises the question of whether the largest and smallest accounts have different characteristics. Table 4 helps to answer this question. It separates accounts into two groups—"large," defined arbitrarily as containing over $1 billion, and "small," defined arbitrarily as containing less than $1 million. While no dominant order or design emerges, tendencies can be discerned.
Although the number of accounts with budgetary resources over $1 billion and under $1 million is roughly similar, over 94 percent of all budgetary resources are concentrated in the accounts over $1 billion. For the 361 accounts in these two categories, each of the four orientations was found in the group of accounts over $1 billion and in the group of accounts under $1 million. However, accounts over $1 billion are oriented to programs more often than accounts under $1 million, which emphasize processes. Accounts under $1 million are more likely to be trust funds and less likely to receive current authority and offsetting collections than accounts over $1 billion. Accounts over $1 billion are more likely than accounts under $1 million to be general funds and to receive current authority and offsetting collections. The preceding analysis helps to explain a budget account structure marked by a wide and unequal distribution of accounts by size. Analyzing median account sizes—the budgetary resources level at which half the accounts in a particular dimension are above and half are below—is another method to deal with this persistent pattern. The median identifies representative accounts because it is unaffected by a few extremely large or small values. In table 4, the large accounts have a median of $3.2 billion, while the small accounts’ median is $206,000. Figures III.1, III.2, and III.3 on pages 52, 53, and 54 in appendix III present detailed information about account medians, the total number of accounts, and the ranges of budgetary resources for missions, organizations, and appropriations subcommittees. The following are some of the more interesting patterns shown in these figures. The medicare, social security, and net interest missions have the largest medians and the fewest accounts. General government and community and regional development—missions with more diverse and numerous accounts—have the smallest medians. Five federal organizations (the Social Security Administration, RTC, TVA, OPM, and the Federal Deposit Insurance Corporation) have comparatively large account medians; the Legislative Branch has the smallest account median. The Defense appropriations subcommittee has the largest median account size, while the Legislative Branch subcommittee has the smallest. Organizations with similar account medians can have quite different profiles. The Department of Justice (DOJ) and HUD have roughly equal median account sizes (about $135 million). DOJ has $18.8 billion in budgetary resources, with 10 percent of its accounts over $1 billion, and it has no public enterprise funds. In contrast, HUD has resources of $75.9 billion, with 20 percent of its accounts over $1 billion, and about 25 percent of its accounts are public enterprise funds. There also are some similarities. Both organizations are reviewed by one appropriations subcommittee and implement programs in about the same number of missions. Median account size also varies by fund type. (See appendix III, figure III.4 on page 55.) The median of intragovernmental revolving funds is three times that of public enterprise funds—the next largest median—due to large revolving funds in DOD and GSA. Although the medians for public enterprise and general funds are similar, there are 116 public enterprise funds and 856 general funds. Trust funds have the smallest median, despite the existence of some very large trust funds such as social security, medicare, and highways. 
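The grouped-median comparison described above is a simple computation: take the median account size within each grouping so that a handful of very large accounts cannot distort the result. The sketch below uses invented figures; only the group-then-take-the-median step mirrors the report's approach.

```python
# Median account size by group (mission, organization, or subcommittee).
# The group labels and dollar amounts are invented for illustration.
from collections import defaultdict
from statistics import median

def median_by_group(accounts: list[tuple[str, float]]) -> dict[str, float]:
    """accounts: (group label, budgetary resources in $ millions)."""
    grouped: dict[str, list[float]] = defaultdict(list)
    for group, amount in accounts:
        grouped[group].append(amount)
    return {group: median(values) for group, values in grouped.items()}

sample = [("Agency A", 3_200.0), ("Agency A", 0.2), ("Agency A", 140.0),
          ("Agency B", 135.0), ("Agency B", 90.0), ("Agency B", 75_900.0)]
for group, m in median_by_group(sample).items():
    print(f"{group}: median account size {m:,.1f} million (assumed data)")
```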
Finally, combining the dimensions discussed in this report into an overall matrix discloses the rich and complex relationships within the budget account structure. Figure 2 shows the intersections between the descriptive characteristics of federal missions, federal organizations, and cognizant appropriations subcommittees. It visually presents another aspect of the complex setting that would be encountered by any cross-cutting proposals affecting the budget account structure. The following are some of the overall patterns displayed in figure 2 which follows, as well as in figures III.5, III.6, and III.7 on pages 56, 57, and 58 of appendix III. Three missions are concentrated in one subcommittee each, while eight missions are addressed by five or more subcommittees. Three missions are concentrated in one organization each, while 10 missions are carried out by five or more organizations. Seven major federal organizations are considered by one appropriations subcommittee each and six others are considered by two subcommittees. Five subcommittees review parts of DOD and HHS. Two subcommittees appropriate to more than 10 federal organizations and two others address 8 and 7 federal organizations, respectively. Five subcommittees deal with 2 or fewer federal organizations. The descriptive overview discussed in this report represents only a first order of analysis. As proposals suggesting across-the-board changes to the budget account structure are generated, more detailed analyses, keyed to the specific proposals, will be necessary. However, this preliminary work demonstrates both the extent of analysis which can be performed and its potential utility. We would be pleased to work with your offices to determine other avenues of inquiry. We provided a draft of this report to OMB for technical review and comment. Their comments have been included as appropriate. We are sending copies of this report to the Ranking Minority Members of the Senate Committee on Appropriations, Committee on the Budget, and Committee on Governmental Affairs; the Chairmen and Ranking Members of the House Committee on Appropriations, Committee on the Budget, and Committee on Government Reform and Oversight; and other interested Members of the Congress. We also will make copies available to others upon request. If you have any questions, I can be reached at (202) 512-9573. Major contributors to this report are listed in appendix V. The federal missions listed in this report are defined by OMB’s budget function classification system, in which all accounts are assigned to one or more budget functions that generally indicate broad areas of national need. The following are descriptions of the budget functions used in this report. For a more complete description, see appendix II in A Glossary of Terms Used in the Federal Budget Process (Exposure Draft) (GAO/AFMD-2.1.1, Jan. 1993). Programs to provide judicial services, police protection, law enforcement (including civil rights), rehabilitation and incarceration of criminals, and the general maintenance of domestic order. Promoting the economic stability of agriculture and the nation’s capability to maintain and increase agricultural production. 
Promotion and regulation of commerce and the housing credit and deposit insurance industries, including the collection and dissemination of social and economic data (unless they are an integral part of another function, such as health); general purpose subsidies to business, including credit subsidies to the housing industry; and the Postal Service fund and general fund subsidies of that fund. Development of physical facilities or financial infrastructures designed to promote viable community economies. Promoting the extension of knowledge and skills, enhancing employment and employment opportunities, protecting workplace standards, and providing services to the needy. Promoting an adequate supply and appropriate use of energy to serve the needs of the economy. General overhead cost of the federal government, including legislative and executive activities; provision of central fiscal, personnel, and property activities; and provision of services that cannot reasonably be classified in any other major function. Budget resources allocated to science and research activities of the federal government that are not an integral part of the programs conducted under any other function. Programs other than medicare whose basic purpose is to promote physical and mental health, including the prevention of illness and accidents. Support payments (including associated administrative expenses) to persons for whom no current service is rendered. Included are retirement, disability, unemployment, welfare, and similar programs, except for social security and income security for veterans, which are in other functions. Maintaining peaceful relations, commerce, and travel between the United States and the rest of the world and promoting international security and economic development abroad. Federal hospital insurance and federal supplementary medical insurance, along with general fund subsidies of these funds and associated offsetting receipts. Common defense and security of the United States, including raising, equipping, and maintaining of armed forces; development and utilization of weapons systems; direct compensation and benefits paid to active military and civilian personnel; defense research, development, testing, and evaluation; and procurement, construction, stockpiling, and other activities undertaken to directly foster national security. Developing, managing, and maintaining the nation's natural resources and environment. Transactions which directly give rise to interest payments or income (lending) and the general shortfall or excess of outgo over income arising out of fiscal, monetary, and other policy considerations and leading to the creation of interest-bearing debt instruments (normally the public debt). Federal old age and survivors and disability insurance trust funds, along with general fund subsidies of these funds and associated offsetting collections. Providing for the transportation of the general public and/or its property, regardless of whether local or national and regardless of the particular mode of transportation. Included are construction of facilities; purchase of equipment; research, testing, and evaluation; provision of communications related to transportation; operating subsidies for transportation facilities and industries; and regulatory activities directed specifically toward the transportation industry rather than toward business.
Programs providing benefits and services, the eligibility for which is related to prior military service, but the financing of which is not an integral part of the costs of national defense.

Fund type: The extent of restriction or earmarking within accounts. The five fund types discussed in this report follow. General funds are accounts containing resources to be expended for the general support of the federal government. Intragovernmental revolving funds are accounts similar to public enterprise funds except that their governmental receipts primarily come from other government agencies and accounts. Public enterprise funds are accounts authorized by law to be credited with offsetting collections, primarily from the public, that are generated by and earmarked to finance a continuing cycle of business-type operations. Special funds are accounts whose resources are earmarked by law for specific purposes. Trust funds are accounts designated by law as "trust funds" and earmarked for specific purposes and programs according to the terms of a trust agreement or a statute.

Orientation: The focus of the Congress in creating an account. The four orientations discussed in this report follow. An object orientation focuses on the specific items—the objects of expenditure—needed to operate a governmental unit. Examples are accounts containing expenses for salaries or equipment. The predominant goal expressed through an object orientation is to control spending. An organization orientation focuses on a specific governmental unit. This account orientation emphasizes the accountability of, rather than control over, the organization receiving the budgetary resources—that is, it shifts from a focus on objects to a focus on stewardship. A process orientation focuses on the operations, approaches, and activities underlying federal programs, reflecting congressional interest in those processes. Examples include accounts for working capital funds or inspection services. A program orientation focuses on the missions and objectives of governmental units, reflecting congressional attention to the purpose, program, or activity of government. Examples of such accounts include the fiscal year 1995 appropriations to the National Aeronautics and Space Administration for Human Space Flight and for Science, Aeronautics and Technology—accounts which emphasize the purpose and major programs of the organization. These accounts replaced accounts for Construction of Facilities and for Research and Development, which emphasized activities and processes.

Resource type: The source of funding for accounts. Four resource types are discussed in this report. Current authority consists of resources provided by the Congress in, or immediately prior to, the fiscal year or years during which the funds are available for obligation. Offsetting collections are resources arising as collections from government or public sources for business-type transactions. Laws authorize collections to be credited directly to accounts and may make them available for obligation to meet the account's purpose without further legislative action. Usually a form of permanent authority, offsetting collections are separately discussed in this report. Permanent authority consists of resources available as a result of previously enacted legislation and not requiring new legislation for the current year. Prior year authority is current or permanent authority provided and available in a previous fiscal year that remains available in the current fiscal year.
Accounts (7.9%) Budgetary resources (33.3%) Accounts (35.3%) (4.3%) (2.3%) (0.0%) (90.9%) (89.1%) (15.5%) (8.2%) (89.7%) (53.7%) (61.1%) (47.6%) (7.4%) (2.5%) (67.6%) (87.2%) (14.5%) (3.0%) (21.0%) (46.0%) (36.8%) (12.6%) A fund type generally is dominated by a resource type. -- General funds - current authority -- Special and trust funds - permanent authority -- Public enterprise and intragovernmental revolving funds - offsetting collections (55.4%) (11.0%) (66.7%) (11.1%) Commerce and housing credit Community and regional development (78.6%) (49.3%) (29.4%) (36.5%) Education, training, employment, and social services (61.2%) (10.4%) (95.1%) (12.2%) (55.8%) (7.8%) General science, space and technology (58.8%) (9.8%) (67.6%) (12.9%) (69.7%) (20.0%) (64.6%) (62.9%) (0.0%) (0.0%) (72.4%) (10.2%) (80.9%) (17.7%) (0.0%) (0.0%) (33.3%) (0.0%) (76.1%) (29.2%) (73.7%) (26.0%) (67.4%) (12.5%) accounts in the income security mission. Number of accounts does not total because some accounts have multiple resource types. Accounts (21.7%) (7.7%) Accounts 34 (37.0%) Budgetary resources (12.5%) (25.9%) (30.2%) (48.1%) (43.0%) (11.9%) (19.2%) (5.6%) (4.3%) (53.6%) (37.0%) (62.0%) (14.5%) (23.3%) (11.9%) (23.3%) (4.7%) (7.3%) (7.9%) (39.0%) (55.0%) (20.2%) (24.9%) (32.5%) (35.6%) (11.8%) (0.2%) (35.3%) (4.7%) (29.4%) (17.1%) (55.9%) (13.0%) (33.7%) (45.4%) (21.3%) (4.0%) (17.7%) (14.0%) (20.4%) (3.4%) (100.0%) (82.8%) (0.0%) (0.0%) (8.6%) (2.8%) (56.0%) (24.8%) (28.4%) (4.4%) (37.7%) (15.6%) (85.7%) (100.0%) (0.0%) (0.0%) (100.0%) (99.3%) (33.3%) (0.7%) (17.4%) (39.0%) (34.8%) (8.4%) (17.5%) (3.0%) (50.9%) (7.8%) (21.0%) (46.0%) (36.8%) (12.6%) Each resource type is concentrated in only a few missions. -- Four missions received 70 percent of funds available from prior years. -- Three missions received 63 percent of current authority. -- Four missions received 89 percent of permanent authority. -- Two missions received 63 percent of offsetting collections. All but four missions have authority available from prior years in more than half of their accounts. However, prior-year funding provides significant resources to only one mission, international affairs. Permanent authority is a significant source of funds for only three missions, medicare, net interest, and social security. Less than 35 percent of accounts in other missions have available permanent authority. Offsetting collections are found in over half of the accounts of only four missions: commerce and housing credit, health, national defense, and veterans benefits and services. Offsetting collections provide over half of resources to commerce and housing credit and energy. 
Accounts (65.7%) Budgetary resources (5.7%) (88.6%) (15.3%) Department of Defense Department of Education (73.8%) (56.3%) (9.7%) (10.2%) (100.0%) (15.2%) Department of Health and Human Services (56.8%) (3.9%) Department of Housing and Urban Development (71.1%) (49.3%) (58.3%) (12.9%) (45.8%) (14.6%) (50.0%) (11.4%) (83.2%) (24.5%) Department of the Treasury Department of Transportation Department of Veterans Affairs (70.0%) (77.3%) (70.8%) (8.3%) (30.1%) (26.1%) (78.6%) (21.8%) Executive Office of the President/Funds Appropriated to the President (60.3%) (53.5%) Federal Deposit Insurance Corporation Federal Emergency Management Agency (60.0%) (56.7%) (70.0%) (64.8%) (75.0%) (27.2%) (11.8%) (52.4%) (16.4%) National Aeronautics and Space Administration (54.5%) (10.9%) (62.5%) (1.5%) (44.4%) (21.2%) Accounts (20.1%) Budgetary resources (21.6%) Accounts (38.1%) 51 (19.1%) (6.8%) (1.0%) (50.0%) (21.3%) (12.7%) (21.9%) (8.9%) (15.8%) (55.6%) (18.8%) (24.0%) (4.5%) (8.3%) (3.5%) (36.1%) (22.2%) (31.8%) (55.7%) (47.7%) (1.2%) (11.1%) (1.4%) (28.9%) (15.2%) (18.8%) (7.1%) (41.7%) (14.9%) (20.8%) (50.8%) (58.3%) (14.8%) (17.6%) (8.6%) (29.4%) (10.1%) (44.6%) (15.7%) (31.7%) (10.2%) (44.0%) (19.3%) (14.6%) (85.7%) (40.5%) (2.6%) (44.0%) (34.1%) (56.3%) (3.5%) (7.6%) (7.8%) (7.1%) (0.0%) (57.1%) (2.5%) (16.7%) (24.7%) (21.8%) (0.4%) (20.0%) (2.6%) (80.0%) (40.6%) (20.0%) (0.0%) (60.0%) (18.9%) (16.7%) (0.1%) (66.7%) (69.3%) (38.1%) (5.6%) (14.3%) (5.5%) (19.0%) (8.3%) (20.2%) (22.0%) (27.3%) (0.0%) (27.3%) (5.0%) (12.5%) (0.9%) (37.5%) (5.4%) (22.2%) (49.0%) (55.6%) (18.4%) (Continued) See notes on following page (0.0%) (0.0%) (57.1%) (44.8%) (50.0%) (47.0%) (100.0%) (19.5%) (81.8%) (20.7%) (60.0%) (0.3%) (100.0%) (0.3%) (52.9%) (7.5%) (59.9%) (67.4%) (40.8%) (12.5%) social security mission. (7.0%) Budgetary resources (92.8%) (71.4%) (54.1%) (14.3%) (0.0%) (0.0%) (0.0%) (50.0%) (52.9%) (0.0%) (0.0%) (66.7%) (53.9%) (9.1%) (0.0%) (9.1%) (0.0%) (100.0%) (92.5%) (60.0%) (1.4%) (100.0%) (15.0%) (100.0%) (82.7%) (11.8%) (0.2%) (17.6%) (2.3%) (16.9%) (4.8%) (26.8%) (20.9%) (21.0%) (46.0%) (36.8%) (12.6%) Each resource type is concentrated in only a few federal organizations. -- The Departments of Defense, Housing and Urban Development, the Treasury, and the Executive Office of the President/Funds Appropriated to the President have nearly 50 percent of resources available from prior years. -- The Departments of Defense and Health and Human Services have over 50 percent of current authority. -- The Departments of Health and Human Services, the Treasury, and the Social Security Administration have over 80 -- The Department of Defense receives over one-third of all offsetting collections. However, every major organization receives funding from all sources. Resources available from prior years are found in nearly every organization. With the exception of the Postal Service, over 40 percent of every organization's accounts have some funding available from previous years. However, these resources provide more than half of funding for only three organizations. Permanent authority is not a significant source of funding for most organizations. Only nine organizations receive more than 20 percent of their resources in permanent authority. Offsetting collections are not a significant source of funding for most agencies. However, five agencies receive more than half their funding through business-type transactions with either the public or other government entities. 
[Table: number of accounts and share of budget authority, by resource type, for each appropriations subcommittee; detailed figures are not recoverable. Notes: excludes offsetting collections; number of accounts will not total because some accounts have multiple resource types; the budget accounts associated with appropriations subcommittees represent a smaller universe than the 1,303 accounts with $2.5 trillion in budgetary resources used in the other analyses in this report.]

Every appropriations subcommittee provides current authority to 80 percent or more of its accounts; however, three subcommittees appropriate over three-quarters of all current authority. Only one subcommittee provides access to significant amounts of permanent authority.
-- 18 percent of budget authority from the Labor, Health and Human Services, Education and Related Agencies subcommittee is permanent.
Four subcommittees provide just over 20 percent of resources using offsetting collections, while about half provide less than 5 percent.

[Table: number of accounts and share of budgetary resources, by fund type, within each federal mission; detailed figures are not recoverable.]

General funds can be found in every mission and provide most of the resources for all but six missions. All missions have some trust funds, except net interest, but only medicare, social security, and transportation have more than half of their resources in these funds.
[Table: number of accounts and share of budgetary resources, by fund type, within each major federal organization; detailed figures are not recoverable.]

All organizations except the Tennessee Valley Authority have general funds. Over 85 percent of organizations have trust funds, and nearly 70 percent have public enterprise funds. Over 60 percent of the organizations have special funds or intragovernmental revolving funds. Only two of the major organizations have most of their resources outside of general funds. The Department of Transportation has over 70 percent of its resources in trust funds, and the Department of Labor has over 50 percent in trust funds.
[Table: number of accounts and share of budgetary resources, by fund type, for each appropriations subcommittee; detailed figures are not recoverable. Notes: negative values indicate net resource reduction; the budget accounts associated with appropriations subcommittees represent a smaller universe than the 1,303 accounts with $2.5 trillion in budgetary resources used in the other analyses in this report.]

All appropriations subcommittees but one provide at least three-quarters of their resources to general funds. Only the Transportation and Related Agencies subcommittee provides more than 25 percent of its resources to another fund type (trust funds).

[Figure: median account size by federal mission; individual values are not recoverable.]

Account medians by mission range widely. The largest median is almost 7,000 times larger than the smallest. Each of the three missions with the highest medians has a narrow focus compared to other missions.

[Figure: median account size by federal organization; individual values are not recoverable.]

Medians of federal organizations are distributed into two groups. The five largest medians range from $32 billion to $3 billion; then there is a gap of $3 billion, and the account medians decline steadily.

[Figure: median account size by appropriations subcommittee; individual values are not recoverable. Notes: negative values indicate largest net resource reductions made by appropriations subcommittees; the budget accounts associated with appropriations subcommittees represent a smaller universe than the 1,303 accounts with $2.5 trillion in resources used in other analyses in this report.]

The median account in the Defense appropriations subcommittee is larger than those in other subcommittees -- over six times that of the Labor, Health and Human Services, Education and Related Agencies subcommittee and over 40 times larger than that of the Treasury, Postal Service and General Government subcommittee.
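The median comparisons above can be reproduced mechanically from account-level data. The sketch below is illustrative only; the grouping labels and dollar figures are invented, and the median is used precisely because a handful of very large accounts would distort an average.

```python
# Illustrative sketch only: median account size by grouping (here, appropriations
# subcommittee), with the smallest and largest account shown to convey the range.
# All names and values are hypothetical.
import pandas as pd

accounts = pd.DataFrame({
    "subcommittee": ["Defense", "Defense", "Defense",
                     "Labor, HHS, Education", "Labor, HHS, Education",
                     "Treasury, Postal Service and General Government"],
    "budgetary_resources": [32_000, 5_400, 900, 4_800, 750, 120],   # millions, made up
})

summary = (accounts
           .groupby("subcommittee")["budgetary_resources"]
           .agg(median="median", smallest="min", largest="max", accounts="count")
           .sort_values("median", ascending=False))
print(summary)
print("Defense median / Treasury median:",
      summary.loc["Defense", "median"]
      / summary.loc["Treasury, Postal Service and General Government", "median"])
```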
[Figure: range of account sizes by fund type, showing the median and the largest and smallest accounts; the account with the least resources held $120; other values are not recoverable.]

The median of intragovernmental revolving funds is three times greater than the next largest median, public enterprise funds, due to large intragovernmental revolving funds in the Department of Defense and the General Services Administration.

[Table: number of federal organizations and appropriations subcommittees associated with each federal mission; individual counts are not recoverable. Notes: net interest is excluded from this analysis because it is not affected by appropriations subcommittees; the "number of federal organizations" column counts the Executive Office of the President/Funds Appropriated to the President, the Legislative Branch, and the Judiciary, as well as independent agencies not separately listed in figure III.6, as single entities; this column excludes federal mission/organization relationships defined by authorizing committees.]

Most missions are performed by several organizations and funded by several subcommittees.
-- Over half of missions are performed by at least five organizations.
-- Over half of missions are funded by four or more subcommittees.

[Table: number of federal missions and appropriations subcommittees associated with each federal organization; individual counts are not recoverable. Note: this column excludes federal mission/organization relationships defined by authorizing committees.]

The accounts of most organizations are reviewed by only one or two subcommittees. For major organizations, subcommittees generally fund the performance of three or more missions.

[Table: number of federal missions and organizations funded by each appropriations subcommittee; individual counts are not recoverable. Notes: the "number of federal organizations" column counts the Executive Office of the President/Funds Appropriated to the President, the Legislative Branch, and the Judiciary, as well as independent agencies not separately listed in figure III.6, as single entities; this column also excludes federal mission/organization relationships defined by authorizing committees.]

Most appropriations subcommittees fund multiple missions and organizations. However, a few fund a single mission or organization.
-- The Defense, District of Columbia, Foreign Assistance, and Military Construction subcommittees fund only one mission each.
-- The District of Columbia, Legislative Branch, and Military Construction subcommittees fund only one organization each.

The account-level data that formed the basis for this report were extracted from automated information collected and maintained by the Office of Management and Budget (OMB). As part of its annual process to develop the President’s budget, OMB uses its MAX budget system to collect a wide variety of information from all branches of government, from executive departments and organizations, and from independent agencies.
We obtained MAX data for the fiscal year 1996 budget, which included the current year 1995 estimates used in this report. Although we did not independently verify the extracted data for each budget account, we reconciled total budget authority and certain data from a judgmentally selected subset of accounts to the published Budget of the U.S. Government, Fiscal Year 1996—Appendix.

To define a universe of budget accounts, we extracted current year 1995 estimates of gross budgetary resources from the 1996 budget data. Gross budgetary resources are equivalent to gross budget authority given in the current year and available for obligation from prior years, net of transfers out, enacted rescissions, and statutory limitations on the use of authority. Any regular account reporting such resources was extracted and compiled into a separate database we used for this project. We verified our approach with OMB to ensure that we were accurately identifying and collecting budgetary data consistent with the concept of gross budgetary resources. We chose gross budgetary resources as the operational definition of a budget account for two reasons. First, it is a concept that fulfills a wide variety of federal budget needs, from congressional oversight and appropriations to program management. Second, this definition ensured the largest possible universe of accounts with available budget authority for our study. In effect, this approach resulted in the selection of all budget accounts that had available and enacted spending authority and thus could be the source of federal commitments (or obligations) during fiscal year 1995.

Budget authority is heterogeneous in nature. It is provided in many different forms, covers many different time periods, and reflects congressional expectations ranging from indefinite authority that may never be used (such as borrowing authority) to definite authority that generally is expected to be outlaid in the current year (such as current appropriations). As a result, it is not additive, either across programs or organizations for a specific year or across a series of years for one program or organization. However, even with this qualification, we have aggregated budgetary resources in this report, as OMB does in its annual Historical Tables. Just as OMB recognizes the need for historical data on this subject, we recognize a comparable need for current data to express the totality of current obligational authority.

For each account reporting gross budgetary resources, we extracted a variety of information from the OMB MAX system. To determine the federal organization, we extracted the agency code for each account. In the analyses contained in this report, we identify the major branches and organizations that are separately presented in the President’s budget. Other entities listed as “other independent agencies” in the budget are separately identified in specific analyses in this report when the entity represents a significant number of accounts and/or budgetary resources. Thus, the actual federal organizations listed in analyses based on appropriations subcommittees will differ slightly from those used in other analyses. To determine the congressional decision-making structure, we extracted the appropriations bill originating subcommittee code for each account. The MAX system indicates both the spending jurisdiction (appropriations or authorizing committee) and the specific appropriations subcommittee associated with each authority action for each account.
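As an illustration of the account-selection and coding steps described above, the following minimal sketch builds a universe of accounts from extracted records and carries along the organization and subcommittee codes. It is not based on the actual MAX file layout; every field name and value is a placeholder.

```python
# Illustrative sketch only: select the universe of budget accounts (any regular
# account reporting gross budgetary resources) and carry along the agency and
# appropriations subcommittee codes. Field names and values are hypothetical,
# not actual OMB MAX data elements.
import pandas as pd

extract = pd.DataFrame({
    "account_id":                ["012-3500", "097-0100", "020-8007"],
    "agency_code":               ["012", "097", "020"],
    "subcommittee_code":         ["AGR", "DEF", None],     # None: no subcommittee action
    "gross_budgetary_resources": [5_400, 250_000, 0],      # millions, made up
})

# Universe: accounts with gross budgetary resources available in the current year.
universe = extract[extract["gross_budgetary_resources"] > 0].copy()

# Reconcile the universe total against a published control figure (made up here).
published_total = 255_400
assert universe["gross_budgetary_resources"].sum() == published_total

# Subcommittee-based analyses use the smaller set of accounts with a subcommittee code.
subcommittee_universe = universe.dropna(subset=["subcommittee_code"])
print(len(universe), len(subcommittee_universe))
```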
For analyses in which appropriations subcommittee was a variable, only accounts (and resources within those accounts) for which a specific appropriations subcommittee took action are included. In addition, because the MAX system does not indicate the appropriations subcommittee associated with budgetary resources available from prior years, analyses involving appropriations subcommittees and resource types exclude any budgetary resources available from prior years. Therefore, the universe of accounts and budgetary resources for analyses involving appropriations subcommittees is smaller than the universe used in the remainder of this report. To comment on the mission(s) addressed by the budgetary resources of an account, we extracted budget function information. The budget function classification system describes 18 broad areas of national need and was developed to provide a coherent and comprehensive basis for analyzing and understanding the budget. In some cases, the budgetary resources of a single account may be coded to multiple functions. In these cases, we assigned the account to the budget function which received the majority of the account’s budget authority in fiscal year 1995. However, budgetary resources in multi-function accounts were allocated to each of the functions according to the distribution of budget authority in the account. To determine the type of budgetary resources available to an account, we aggregated data from the OMB MAX system into four components: prior year authority, current year authority, permanent authority, and offsetting collections. Prior year authority reflects the balances of budget authority from previous years that remain available for obligation. Current authority is budget authority provided in fiscal year 1995 appropriations acts, while permanent authority is provided in standing authorizing legislation. Offsetting collections represent authority that results from the receipt of collections from other government accounts or collections from the public that are of a business-type or market-oriented nature. Offsetting collections are usually a form of permanent authority. However, in this report, we segregated offsetting collections because the cyclical or business-type nature of such authority differs from other permanent authority. To determine the extent of restrictions associated with the resources within an account, we extracted fund type information from the OMB MAX system. All governmental activities are financed through federal funds or trust funds. Federal funds are further separated into (1) general funds or special funds, to distinguish unrestricted receipts from those that are earmarked for a specific purpose, and (2) public enterprise funds or intragovernmental revolving funds, to distinguish between receipts arising from a cycle of business-type operations from the public or governmental organizations. Although trust funds are segregated into revolving and nonrevolving funds, for this report, we have aggregated both into a single trust fund category. We computed median accounts, in terms of budgetary resources, to determine representative account sizes for federal mission, federal organization, appropriations subcommittee, and fund type. Statistical medians are particularly useful when wide variations exist across multiple dimensions. Unlike other statistical measures of central tendency (for example, averages), medians are not influenced by a few extremely large or small values and thus yield a more representative account size. 
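The treatment of multi-function accounts described above can also be illustrated with a minimal sketch. The account identifiers, functions, and authority figures below are invented and are not drawn from the report's data.

```python
# Illustrative sketch only: for an account coded to multiple budget functions,
# (1) assign the account itself to the function that receives the majority of its
# budget authority, and (2) allocate the account's budgetary resources to each
# function in proportion to that authority. All data are hypothetical.
import pandas as pd

function_lines = pd.DataFrame({
    "account_id":       ["069-8083", "069-8083", "075-0512"],
    "budget_function":  ["transportation", "community and regional development", "health"],
    "budget_authority": [700, 300, 1_000],   # millions, made up
})

# (1) Account-level assignment: keep the function with the largest authority.
assignment = (function_lines
              .sort_values("budget_authority", ascending=False)
              .drop_duplicates("account_id")[["account_id", "budget_function"]])

# (2) Resource allocation: each function keeps its proportional share of the account.
function_lines["share_of_account"] = (
    function_lines["budget_authority"]
    / function_lines.groupby("account_id")["budget_authority"].transform("sum"))

print(assignment)
print(function_lines)
```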
We displayed ranges of account size with the median to illustrate variability. To describe the predominant interests of the Congress in enacting an account, we defined four orientations—object, organization, process, and program. These orientations were derived from a literature search that focused on the key events or developments associated with federal budgeting practices and are not directly associated with a particular data element in the OMB MAX system. Using the definitions we developed for each of these orientations, we reviewed each of the accounts in three judgmentally selected organizations and all accounts with more than $1 billion or less than $1 million in resources. To aid in assigning the accounts to an orientation, we reviewed the account’s title, statutory language, narrative statement, and budget presentations. Many accounts possess aspects that made assignment to a single orientation very difficult and somewhat arbitrary. Thus, the assignments presented in this report should be seen as suggestive or indicative, rather than definitive. Lastly, to develop historical and anecdotal data on the federal budget account structure, we performed literature searches and conducted interviews with budget specialists in academia, OMB, the Congressional Research Service, and the Congressional Budget Office. We also reviewed written documents produced by OMB, the National Performance Review, and the Federal Accounting Standards Advisory Board. Our work was performed in Washington, D.C., from April 1995 through May 1995.
Pursuant to a congressional request, GAO provided a descriptive overview of the federal budget account structure. GAO found that: (1) federal budget accounts are a product of the needs and goals of many users and address many different roles; (2) the present budget account structure was not created as a single integrated framework, but was mainly developed as separate budget accounts to respond to specific needs; (3) the budget account structure is characterized by a concentration of budgetary resources in a few large accounts and a scattering of remaining resources among hundreds of other accounts, and a mix of account orientations with an emphasis on programs and processes, rather than objects of expense or organizations; (4) over 70 percent of the total budgetary resources available in fiscal year 1995 came from sources which did not require congressional approval in the current year; and (5) general funds are used to provide budgetary resources to most accounts, with special and trust funds supporting about 30 percent of total resources and 20 percent of all accounts.
Our fiscal year 2007 budget request will provide us the resources necessary to achieve our performance goals in support of the Congress and the American people. This request will allow GAO to improve productivity and maintain progress in technology and other transformation areas. We continue to streamline GAO, modernize our policies and practices, and leverage technology so that we can achieve our mission more effectively and efficiently. These continuing efforts allow us to enhance our performance without significant increases in funding. Our fiscal year 2007 budget request represents a modest increase of about $25 million (or 5 percent) over our fiscal year 2006 revised funding level— primarily to cover uncontrollable mandatory pay and price level increases. This request reflects a reduction of nearly $5.4 million in nonrecurring fiscal year 2006 costs used to offset the fiscal year 2007 increase. This request also includes about $7 million in one-time fiscal year 2007 costs, which will not recur in fiscal year 2008, to upgrade our business systems and processes. As the Congress addresses the devastation in the Gulf Coast region from Hurricane Katrina and several other major 2005 hurricanes, GAO is supporting the Congress by assessing whether federal programs assisting the people of the Gulf region are efficient and effective and result in a strong return on investment. In order to address the demands of this work; better respond to the increasing number of demands being placed on GAO, including a dramatic increase in health care mandates; and address supply and demand imbalances in our ability to respond to congressional interest in areas such as disaster assistance, homeland security, the global war on terrorism, health care, and forensic auditing, we are seeking your support to provide the funding to rebuild our staffing level to the levels requested in previous years. We believe that 3,267 FTEs is an optimal staffing level for GAO that would allow us to more successfully meet the needs of the Congress. In preparing this request and taking into account the effects of the fiscal year 2006 rescission, we revised our workforce plan to reduce fiscal year 2005 hiring and initiated a voluntary early retirement opportunity for staff in January 2006. These actions better support GAO’s strategic plan for serving the Congress, better align GAO’s workforce to meet mission needs, correct selected skill imbalances, and allow us to increase the number of new hires later in fiscal year 2006. Our revised hiring plan represents an aggressive hiring level that is significantly higher than in recent fiscal years, and it is the maximum number of staff we could absorb during fiscal year 2006. These actions will also position us to more fully utilize our planned FTE levels of 3,217 and 3,267 in fiscal years 2006 and 2007, respectively. Our fiscal year 2007 budget request includes approximately $502 million in direct appropriations and authority to use about $7 million in estimated revenue from rental income and reimbursable audit work. Table 1 summarizes the changes we are requesting in our fiscal year 2007 budget. Our fiscal year 2007 budget request supports three broad program areas: Human Capital, Engagement Support, and Infrastructure Operations. 
Consistent with our strategic goal to be a model agency, we have undertaken a number of initiatives to implement performance-based, market-oriented compensation systems; adopt best practices; benchmark service levels and costs; streamline our operations; cross-service and outsource activities; and leverage technology to increase efficiency, productivity, and results. The Human Capital Program provides the resources needed to support a diverse, highly educated, knowledge-based workforce comprising individuals with a broad array of technical and program skills and institutional memory. This workforce represents GAO’s human capital—its greatest asset—and is critical to the agency’s success in serving the Congress and the nation. Human Capital Program costs represent nearly 80 percent of our requested budget authority. To further ensure our ability to meet congressional needs, we plan to allocate approximately $17 million for Engagement Support to conduct travel, a critical tool for accomplishing our mission of following the federal dollar across the country and throughout the world and for ensuring the quality of our work; to contract for expert advice and assistance when needed to meet congressional time frames for a particular audit or engagement; and to ensure a limited presence in the Middle East to provide more timely, responsive information on U.S. activities in the area. In addition, we plan to allocate about $91 million—or about 18 percent of our total request—for Infrastructure Operations programs and initiatives to provide the critical infrastructure to support our work. These key activities include information technology, building management, knowledge services, human capital operations, and support services. In fiscal year 2005, the Congress focused its attention on a broad array of challenging issues affecting the safety, health, and well-being of Americans here and abroad, and we were able to provide the objective, fact-based information that decision makers needed to stimulate debate, change laws, and improve federal programs for the betterment of the nation. For example, as the war in Iraq continued, we examined how the Department of Defense (DOD) supplied vehicles, body armor, and other materiel to the troops in the field; contributed to the debate on military compensation; and highlighted the need to improve health, vocational rehabilitation, and employment services for seriously injured soldiers transitioning from the battlefield to civilian life. We kept pace with the Congress’s information needs about ways to better protect America from terrorism by issuing products and delivering testimonies that addressed issues such as security gaps in the nation’s passport operations that threaten public safety and federal efforts needed to improve the security of checked baggage at airports and cargo containers coming through U.S. ports. We also explored the financial crisis that weakened the airline industry and the impact of this situation on the traveling public and airline employees’ pensions. We performed this work in accordance with our strategic plan for serving the Congress, consistent with our professional standards, and guided by our core values (see appendix 1). See table 2 for examples of how GAO assisted the nation in fiscal year 2005. During fiscal year 2005, we monitored our performance using 14 annual performance measures that capture the results of our work; the assistance we provided to the Congress; and our ability to attract, retain, develop, and lead a highly professional workforce (see table 3).
For example, in fiscal year 2005 our work generated $39.6 billion in financial benefits, primarily from actions agencies and the Congress took in response to our recommendations. Of this amount, about $19 billion resulted from changes to laws or regulations, $12.8 billion resulted from agency actions based on our recommendations to improve services to the public, and $7.7 billion resulted from improvements to core business processes. See figure 1 for examples of our fiscal year 2005 financial benefits. Many of the benefits that result from our work cannot be measured in dollar terms. During fiscal year 2005, we recorded a total of 1,409 other benefits. For instance, we documented 75 instances where information we provided to the Congress resulted in statutory or regulatory changes, 595 instances where federal agencies improved services to the public, and 739 instances where agencies improved core business processes or governmentwide reforms were advanced. These actions spanned the full spectrum of national issues, from ensuring the safety of commercial airline passengers to identifying abusive tax shelters. See figure 2 for additional examples of GAO’s other benefits in fiscal year 2005. One way we measure our effect on improving the government’s accountability, operations, and services is by tracking the percentage of recommendations that we made 4 years ago that have since been implemented. At the end of fiscal year 2005, 85 percent of the recommendations we made in fiscal year 2001 had been implemented, primarily by executive branch agencies. Putting these recommendations into practice will generate tangible benefits for the nation over many years. During fiscal year 2005, experts from our staff testified at 179 congressional hearings covering a wide range of complex issues (see table 4). For example, our senior executives testified on improving the security of nuclear material, federal oversight of mutual funds, and the management and control of DOD’s excess property. Over 70 of our testimonies were related to high-risk areas and programs (see table 5). Our work is reflected in the Intelligence Reform and Terrorism Prevention Act of 2004 in different ways. In our May 2004 testimony on the use of biometrics for aviation security, we reported on the need to identify how biometrics will be used to improve aviation security prior to making a decision to design, develop, and implement biometrics. Using information from our statement, the House introduced a bill on July 22, 2004, directing the Transportation Security Administration (TSA) to establish system requirements and performance standards for using biometrics, and establish processes to (1) prevent individuals from using assumed identities to enroll in a biometric system and (2) resolve errors. These provisions were later included in an overall aviation security bill and were eventually included in the Intelligence Reform and Terrorism Prevention Act of 2004, enacted in December 2004. We conducted a body of work assessing the physical screening of airport passengers and their checked baggage. We found that the installation of systems that are in line with airport baggage conveyor systems may result in financial benefits, according to TSA estimates for nine airports. We also found that the effectiveness of the advance passenger screening under the process known as Secure Flight was not certain.
TSA agreed to take corrective actions in these areas, and the Congress required TSA in the Intelligence Reform and Terrorism Prevention Act to prepare a plan and guidelines for installing in-line baggage screening systems, and enacted measures to promote Secure Flight’s development and implementation. We reported on the verification of identity documents for drivers’ licenses, noting that visual inspection of key documents lent itself to possible identity fraud. To demonstrate this, our investigators were able to obtain licenses in two states using counterfeit documents and the Social Security numbers of deceased persons. The Congress established federal identification standards for state drivers’ licenses and other such documents and mandated third-party verification of identity documents presented to apply for a driver’s license. We assisted the Congress in crafting major improvements to a program intended to compensate individuals who worked in Department of Energy (DOE) facilities and developed illnesses related to radiation and hazardous materials exposure. In a 2004 report, we identified features of the originally enacted program that would likely lead to inconsistent benefit outcomes for claimants, in part because the program depended on the varying state workers’ compensation systems to provide some benefits. We also presented several options for improving the consistency of benefit outcomes and a framework for assessing these options. When the Congress enacted the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, it revamped this energy employees’ benefit program. Among other changes, this law federalized the payment of workers’ compensation benefits for eligible energy contractor employees and provided a schedule of uniform benefit payments. Our work over the past several years has helped the Congress to establish and assess the impacts of the recreational fee demonstration program. Under this trial program, the Congress authorized the National Park Service, the Fish and Wildlife Service, the Bureau of Land Management, and the Forest Service to charge fees to visitors to, among other things, reduce the maintenance backlog at federal parks and historic places and protect these lands from visitor impacts. Since the program’s inception in 1996, we have identified issues that needed to be addressed to improve the program’s effectiveness, including providing (1) a more permanent source of funds to enhance stability, since the current program had to be reauthorized every 2 years; (2) the participating agencies with greater flexibility in how and where they apply fee revenues; and (3) improvements in interagency coordination in the collection and use of fee revenues to better serve visitors by making the payment of fees more convenient and equitable and reducing visitor confusion about similar or multiple fees being charged at nearby or adjacent federal recreational sites. As a result of this body of work, the Congress addressed these issues by passing the Federal Lands Recreation Enhancement Act in December 2004. This act permits federal land management agencies to continue charging fees at campgrounds, rental cabins, high-impact recreation areas, and day-use sites that have certain facilities. The act also provides for a nationally consistent interagency program, more on-the-ground improvements at recreation sites across the nation, enhanced visitor services, a new national pass for use across interagency federal recreation sites and services, and public involvement in the program.
Our work is reflected in the Consolidated Appropriations Act, 2005, in different ways. At the time of our August 2003 report, the original 1999 expiration date for the franchise fund pilots operating at the Departments of Commerce, Veterans Affairs, Health and Human Services, the Interior, and the Treasury and at the Environmental Protection Agency had been extended three times. These franchise funds, authorized by the Government Management Reform Act of 1994, are part of a group of 34 intragovernmental revolving funds that were created to provide common administrative support services required by many federal agencies. For example, the Commerce Franchise Fund’s business line provides IT infrastructure support services to the agency. We concluded that increasing the period of authorization would help ease concerns of current and potential clients about franchise fund stability and might allow franchise funds to add new business lines, and we suggested that the authorizations be extended for longer periods. The Congress provided permanent authority to the Treasury franchise fund in the Consolidated Appropriations Act, 2005, passed on December 8, 2004. In 2003, we reported that most agencies could not retain the proceeds from the sale of unneeded property and this acted as a disincentive to disposing of unneeded property. We stated in our high-risk report on federal real property that it may make sense to permit agencies to retain proceeds for reinvestment in real property where a need exists. Subsequently, in the Consolidated Appropriations Act, 2005, the Congress authorized the Administrator of the General Services Administration (GSA) to retain the net proceeds from the conveyance of real and related personal property. These proceeds are to be deposited into the Federal Buildings Fund and are to be used as authorized for GSA’s real property capital needs. In December 2003, we reported that 184 out of 213 Alaska Native villages are affected, to some extent, by flooding and erosion. However, these villages often have difficulty qualifying for federal assistance to combat their flooding and erosion problems. In our report, we recommended that the Denali Commission adopt a policy to guide investment decisions and project designs in villages affected by flooding and erosion. In this legislation, the Congress provided the Secretary of the Army with the authority to carry out "structural and non-structural projects for storm damage prevention and reduction, coastal erosion, and ice and glacial damage in Alaska, including relocation of affected communities and construction of replacement facilities." To improve the federal government’s ability to collect billions of dollars of outstanding criminal debt, we recommended in a 2001 report that the Department of Justice work with other agencies involved in criminal debt collection, including the Administrative Office of the U.S. Courts, the Department of the Treasury (Treasury), and OMB, to develop a strategic plan that would improve interagency processes and coordination with regard to criminal debt collection activities. The conference report that accompanied the Consolidated Appropriations Act, 2005, directed the Attorney General to assemble an interagency task force for the purposes of better managing, accounting for, reporting, and collecting criminal debt. Our report found that the Department of Education’s (Education) system for resolving noncompliance with the Individuals with Disabilities Education Act is protracted.
We found that resolution of noncompliance cases often takes several years, in part because Education took a year on average from the time it identified noncompliance to issue a report citing the noncompliance. We therefore recommended that Education improve its system of resolving noncompliance by shortening the amount of time it takes to issue a report of noncompliance and by tracking changes in response times under the new monitoring process. In response to our recommendation, Education has instituted an improved process for managing and tracking the various phases of the monitoring process, which includes the creation of a database to facilitate this tracking. This new tracking system will enable Education to better monitor the status of existing noncompliance, and thus enable the department to take appropriate action when states fail to come into compliance in a timely manner. In 2004, we found that the 24-hour 1-800-MEDICARE help line, operated by the Centers for Medicare & Medicaid Services (CMS), did not answer 10 percent of the calls we placed to test its accuracy, often because it automatically transferred some calls to claims administration contractors that were not open for business at the time of the call. This call transfer process prohibited callers from accessing information during nonbusiness hours, even though 1-800-MEDICARE operates 24 hours a day. As a result, we recommended that CMS revise the routing procedures of 1-800-MEDICARE to ensure that calls are not transferred or referred to claims administration contractors’ help lines during nonbusiness hours. In response, CMS finished converting its call routing procedures. As a result, calls placed after normal business hours will be routed to the main 1-800-MEDICARE help line for assistance. United States Department of Agriculture scientists at the Plum Island Animal Disease Center research contagious animal diseases that have been found in other countries. The mission of the facility, now administered by DHS, is to develop strategies for protecting the nation’s animal industries and exports from these foreign animal diseases. In our September 2003 report, Combating Bioterrorism: Actions Needed to Improve Security at Plum Island Animal Disease Center, we made several recommendations to improve security at the facility and reduce vulnerability to terrorist attacks. Among other things, we recommended that the Secretary of Homeland Security, in consultation with the Secretary of Agriculture, enhance incident response capability by increasing the size of the guard force. DHS has informed us that this has been completed. According to the Director of Plum Island, DHS has more than doubled the number of guards assigned on each shift on Plum Island. DOD spending on service contracts approaches $100 billion annually, but DOD’s management of services procurement is inefficient and ineffective and the dollars are not always well spent. Many private companies have changed management practices based on analyzing spending patterns and coordinating procurement efforts in order to achieve major savings. We recommended that DOD adopt the effective spend analysis processes used by these leading companies and use technology to automate spend analysis to make it repeatable. In response, DOD is developing new technology to do that. According to DOD and contractor project managers, one phase of the project was completed in December 2004. 
In March 2005, DOD approved a business case analysis to seek follow-on funding for developing a DOD-wide spend analysis system. As part of our audit of Air Force purchase card controls, we identified transactions that Air Force officials acknowledged to be fraudulent as well as potentially fraudulent transactions that the Air Force had not identified. To improve Air Force oversight of purchase card activity and facilitate the identification of systemic weaknesses and deficiencies in existing internal control and the development of additional control activities, we recommended that the Air Force establish an agencywide database of known purchase card fraud cases. In lieu of establishing a separate agencywide database, during fiscal year 2003, the Air Force Office of Special Investigations initiated quarterly reporting on its purchase card investigations to the DOD IG for macro-level analysis of systemic weaknesses in the program. Our ongoing collaboration with the DOD IG on DOD’s purchase card program confirmed that the Air Force’s Office of Special Investigations is working effectively with DOD’s IG on data-mining techniques for detection of potentially improper and fraudulent purchase card transactions. As a result of our work, the Air Force has taken action to reduce the financial risk associated with undetected fraud and abuse in its purchase card program. For the 2000 Census, the United States Census Bureau (Bureau) printed material used to train census workers only in English, except in Puerto Rico where training materials were available in Spanish. However, to better prepare census workers—some of whom speak Spanish as their first language—to locate migrant farm workers and other hard-to-count groups, we recommended that the Bureau consider providing training materials in languages other than English to targeted areas. In response to our recommendation, the Bureau is researching foreign-language data collection methods as part of its preparations for the 2006 Census test and, more generally, plans to identify areas and operations that will require in-language training materials for areas with very large, new migrant populations where it will not be possible to hire bilinguals. Moreover, the Bureau’s June 2005 request for proposals for a Field Data Collection Automation System includes a requirement for the contractor to provide training applications and materials in English and Spanish for the handheld computers enumerators are to use to count nonrespondents. Issued to coincide with the start of each new Congress, our high-risk update, first used in 1993, has helped Members of the Congress who are responsible for oversight and executive branch officials who are accountable for performance. Our high-risk program focuses on major government programs and operations that need urgent attention or transformation to ensure that our government functions in the most economical, efficient, and effective manner possible. Overall, our high-risk program has served to identify and help resolve a range of serious weaknesses that involve substantial resources and provide critical services to the public. Table 5 details our 2005 high-risk list. We are grateful for the Congress’s continued support of our joint effort to improve government and for providing the resources that allow us to be a world-class professional services organization. 
We are proud of the positive impact we have been able to effect in government over the past year and believe an investment in GAO will continue to yield substantial returns for the Congress and the American people. Our nation will continue to face significant challenges in the years ahead. GAO’s expertise and involvement in virtually every facet of government position us to provide the Congress with the timely, objective, and reliable information it needs to discharge its constitutional responsibilities. This concludes my statement. I would be pleased to answer any questions the Members of the Committee may have.
We are pleased to appear before the Congress today in support of the fiscal year 2007 budget request for the U.S. Government Accountability Office (GAO). This request will help us continue our support of the Congress in meeting its constitutional responsibilities and will help improve the performance and ensure the accountability of the federal government for the benefit of the American people. Budget constraints in the federal government grew tighter in fiscal years 2005 and 2006. In developing our fiscal year 2007 budget, we considered those constraints consistent with GAO's and Congress's desire to "lead by example." In fiscal year 2007, we are requesting budget authority of $509.4 million, a reasonable 5 percent increase over our fiscal year 2006 revised funding level. In the event Congress acts to hold federal pay increases to 2.2 percent, our requested increase will drop to below 5 percent. This request will allow us to continue making improvements in productivity, maintain our progress in technology and other transformation areas, and support a full-time equivalent (FTE) staffing level of 3,267. This represents an increase of 50 FTEs over our planned fiscal year 2006 staffing level and will allow us to rebuild our workforce to a level that will position us to better respond to increasing supply and demand imbalances in areas such as disaster assistance, the global war on terrorism, homeland security, forensic auditing, and health care. This testimony focuses on our budget request for fiscal year 2007 to support the Congress and serve the American people and on our performance and results with the funding you provided us in fiscal year 2005.
To address the demands of this work; to better respond to the increasing number of requests being placed on GAO, including a dramatic increase in health care mandates; and to address supply and demand imbalances in our ability to respond to congressional interest in areas such as disaster assistance, homeland security, the global war on terrorism, health care, and forensic auditing, we are seeking Congress's support for the funding needed to rebuild our staffing to the levels requested in previous years. We believe that 3,267 FTEs is an optimal staffing level for GAO that would allow us to more successfully meet the needs of the Congress. In preparing this request and taking into account the effects of the fiscal year 2006 rescission, we revised our workforce plan to reduce fiscal year 2006 hiring and initiated a voluntary early retirement opportunity for staff in January 2006. These actions better support GAO's strategic plan for serving the Congress, better align GAO's workforce to meet mission needs, correct selected skill imbalances, and allow us to increase the number of new hires later in fiscal year 2006. Our revised hiring plan represents an aggressive hiring level that is significantly higher than in recent fiscal years, and it is the maximum number of staff we could absorb during fiscal year 2006. These actions will also position us to more fully utilize our planned FTE levels of 3,217 and 3,267 in fiscal years 2006 and 2007, respectively.
As defined in a report that we issued in May 2004, data mining is the application of database technology and techniques—such as statistical analysis and modeling—to uncover hidden patterns and subtle relationships in data and to infer rules that allow for the prediction of future results. This definition is based on the most commonly used terms found in a survey of the technical literature. Data mining has been used successfully for a number of years in the private and public sectors in a broad range of applications. In the private sector, these applications include customer relationship management, market research, retail and supply chain analysis, medical analysis and diagnostics, financial analysis, and fraud detection. In the government, data mining has been used to detect financial fraud and abuse. For example, we used data mining to identify fraud and abuse in expedited assistance and other disbursements to Hurricane Katrina victims. Although the characteristics of data mining efforts can vary greatly, data mining generally incorporates three processes: data input, data analysis, and results output. In data input, data are collected in a central data “warehouse,” validated, and formatted for use in data mining. In the data analysis phase, data are typically queried to find records that match topics of interest. The two most common types of queries are pattern-based queries and subject-based queries: Pattern-based queries search for data elements that match or depart from a predetermined pattern (e.g., unusual claim patterns in an insurance program). Subject-based queries search for any available information on a predetermined subject using a specific identifier. This could be personal information such as an individual identifier (e.g., an individual’s name or Social Security number) or an identifier for a specific object or location. For example, the Navy uses subject-based data mining to identify trends in the failure rate of parts used in its ships. The data analysis phase can be iterative, with the results of one query being used to refine criteria for a subsequent query. The output phase can produce results in printed or electronic format. These reports can be accessed by agency personnel and can also be shared with personnel from other agencies. Figure 1 depicts a generic data mining process. In recent years, data mining has emerged as a prevalent government mechanism for processing and analyzing large amounts of data. In our May 2004 report, we noted that 52 agencies were using or were planning to use data mining in 199 cases, of which 68 were planned, and 131 were operational. Additionally, following the terrorist attacks of September 11, 2001, data mining has been used increasingly as a tool to help detect terrorist threats through the collection and analysis of public and private sector data. This may include tracking terrorist activities, including money transfers and communications, and tracking terrorists themselves through travel and immigration records. According to an August 2006 DHS Office of Inspector General survey of departmental data mining initiatives, DHS is using or developing 12 data mining programs, 9 of which are fully operational and 3 of which are still under development. One such effort is the ADVISE technology program. Managed by the DHS Science and Technology Directorate, the ADVISE program is primarily responsible for (1) continuing to develop the ADVISE data mining tool and (2) promoting and supporting its implementation throughout DHS. 
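To make the distinction between the pattern-based and subject-based queries described above concrete, the following minimal Python sketch runs both query types over a handful of entirely hypothetical records; the field names, identifiers, and claim threshold are illustrative stand-ins and are not drawn from any actual agency system.

records = [
    {"name": "J. Smith", "ssn": "123-45-6789", "claim_amount": 1200},
    {"name": "A. Jones", "ssn": "987-65-4321", "claim_amount": 98000},
    {"name": "J. Smith", "ssn": "123-45-6789", "claim_amount": 850},
]

def subject_based_query(data, identifier, value):
    # Subject-based: return every record tied to a specific identifier.
    return [r for r in data if r.get(identifier) == value]

def pattern_based_query(data, predicate):
    # Pattern-based: return records that match (or depart from) a predefined pattern.
    return [r for r in data if predicate(r)]

# Everything known about one individual, keyed on a hypothetical SSN field.
print(subject_based_query(records, "ssn", "123-45-6789"))

# Claims that depart from an expected range (an "unusual claim" pattern).
print(pattern_based_query(records, lambda r: r["claim_amount"] > 10000))

The subject-based call pulls every record tied to one identifier, while the pattern-based call flags records that depart from an expected range, mirroring the insurance-claim example above.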
According to program officials, the ADVISE program has spent approximately $40 million to develop the tool since 2003. To promote the possible implementation of the tool within DHS component organizations, program officials have made demonstrations (using unclassified data) to interested officials, highlighting the tool's planned capabilities and expected benefits. Program officials have established working relationships with component organizations that are considering adopting the tool, including detailing staff (typically contractor-provided) to them to assist in the setup and customization of their ADVISE implementation and providing training for the analysts who are to use it. Program officials project that implementation of the tool at a component organization should generally consist of six main phases and take approximately 12 to 18 months to complete. The six phases are as follows: preparing infrastructure and installing hardware and software; modeling information sources and loading data; verifying and validating that loaded data are accurate and accessible; training and familiarizing analysts and assisting in the development of initial research activities using visualization tools; supporting analysts in identifying the best ways to use ADVISE for their problems, obtaining data, and developing ideas for further improvements; and turning over deployment to the component organizations to maintain the system and its associated data feeds. The program has also provided initial funding for the setup, customization, and pilot testing of implementations within components, under the assumption that when an implementation achieves operational status, the respective component will take over operations and maintenance costs. Program officials estimate that the tool's operations and maintenance costs will be approximately $100,000 per year, per analyst. The program has also offered additional support to components implementing the tool, such as helping them develop privacy compliance documentation. According to DHS officials, the program has spent $12.15 million of its $40 million in support of several pilot projects and test implementations throughout the department. Currently, the department's Interagency Center for Applied Homeland Security Technologies (ICAHST) group within the Science and Technology Directorate is testing the tool's effectiveness, adequacy, and cost-effectiveness as a data mining technology. ICAHST has completed preliminary testing of basic functionality and is currently testing the system's effectiveness, using mock data to assess how well ADVISE identifies specified patterns of interest. The impact of computer systems on the ability of organizations to protect personal information was recognized as early as 1973, when a federal advisory committee on automated personal data systems observed that “The computer enables organizations to enlarge their data processing capacity substantially, while greatly facilitating access to recorded data, both within organizations and across boundaries that separate them.” In addition, the committee concluded that “The net effect of computerization is that it is becoming much easier for record-keeping systems to affect people than for people to affect record-keeping systems.” In May 2004, we reported that mining government and private databases containing personal information creates a range of privacy concerns. 
Through data mining, agencies can quickly and efficiently obtain information on individuals or groups by searching large databases containing personal information aggregated from public and private records. Information can be developed about a specific individual or a group of individuals whose behavior or characteristics fit a specific pattern. The ease with which organizations can use automated systems to gather and analyze large amounts of previously isolated information raises concerns about the impact on personal privacy. Further, we reported in August 2005 that although agencies responsible for certain data mining efforts took many of the key steps required by federal law and executive branch guidance for the protection of personal information, none followed all key procedures. Specifically, while three of the four agencies we reviewed had prepared privacy impact assessments (PIA)—assessments of privacy risks associated with information technology used to process personal information—for their data mining systems, none of them had completed a PIA that adequately addressed all applicable statutory requirements. We recommended that four agencies complete or revise PIAs for their systems to fully comply with applicable guidance. As of December 2006, three of the four agencies reported that they had taken action to complete or revise their PIAs. Federal law includes a number of separate statutes that provide privacy protections for information used for specific purposes or maintained by specific types of entities. The major requirements for the protection of personal privacy by federal agencies come from two laws, the Privacy Act of 1974 and the privacy provisions of the E-Government Act of 2002. The Office of Management and Budget (OMB) is tasked with providing guidance to agencies on how to implement the provisions of both laws and has done so, beginning with guidance on the Privacy Act, issued in 1975. The Privacy Act places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. The act describes a “record” as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or another personal identifier. It also defines “system of records” as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public through a “system of records notice”: that is, a notice in the Federal Register identifying, among other things, the type of data collected, the types of individuals about whom information is collected, the intended “routine” uses of data, and procedures that individuals can use to review and correct personal information. In addition, the act requires agencies to publish in the Federal Register notice of any new or intended use of the information in the system, and provide an opportunity for interested persons to submit written data, views, or arguments to the agency. Several provisions of the act require agencies to define and limit themselves to specific predefined purposes. For example, the act requires that to the greatest extent practicable, personal information should be collected directly from the subject individual when it may affect an individual’s rights or benefits under a federal program. 
The act also requires that an agency inform individuals whom it asks to supply information of (1) the authority for soliciting the information and whether disclosure of such information is mandatory or voluntary; (2) the principal purposes for which the information is intended to be used; (3) the routine uses that may be made of the information; and (4) the effects on the individual, if any, of not providing the information. In addition, the act requires that each agency that maintains a system of records store only such information about an individual as is relevant and necessary to accomplish a purpose of the agency. Agencies are allowed to claim exemptions from some of the provisions of the Privacy Act if the records are used for certain purposes. For example, records compiled for criminal law enforcement purposes can be exempt from a number of provisions, including (1) the requirement to notify individuals of the purposes and uses of the information at the time of collection and (2) the requirement to ensure the accuracy, relevance, timeliness, and completeness of records. In general, the exemptions for law enforcement purposes are intended to prevent the disclosure of information collected as part of an ongoing investigation that could impair the investigation or allow those under investigation to change their behavior or take other actions to escape prosecution. The privacy provisions of the E-Government Act of 2002 require agencies to conduct PIAs. A PIA is an analysis of how information is handled: (i) to ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (ii) to determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (iii) to examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. Agencies must conduct PIAs before (1) developing or procuring information technology that collects, maintains, or disseminates information that is in a personally identifiable form or (2) initiating any new data collections involving personal information that will be collected, maintained, or disseminated using information technology if the same questions are asked of 10 or more people. OMB guidance also requires agencies to conduct PIAs in two specific types of situations: (1) when, as a result of the adoption or alteration of business processes, government databases holding information in personally identifiable form are merged, centralized, matched with other databases, or otherwise significantly manipulated and (2) when agencies work together on shared functions involving significant new uses or exchanges of information in personally identifiable form. DHS has also developed its own guidance requiring PIAs to be performed when one of its offices is developing or procuring any new technologies or systems, including classified systems, that handle or collect personally identifiable information. It also requires that PIAs be performed before pilot tests are begun for these systems or when significant modifications are made to them. Furthermore, DHS has prescribed detailed requirements for PIAs. For example, PIAs must describe all uses of the information, and whether the system analyzes data in order to identify previously unknown patterns or areas of note or concern. The Privacy Act of 1974 is largely based on a set of internationally recognized principles for protecting the privacy and security of personal information known as the Fair Information Practices. A U.S. 
government advisory committee first proposed the practices in 1973 to address what it termed a poor level of protection afforded to privacy under contemporary law. The Organization for Economic Cooperation and Development (OECD) developed a revised version of the Fair Information Practices in 1980 that has, with some variation, formed the basis of privacy laws and related policies in many countries and jurisdictions, including the United States, Germany, Sweden, Australia, New Zealand, and the European Union. The eight principles of the OECD Fair Information Practices are shown in table 1. The Fair Information Practices are not precise legal requirements. Rather, they provide a framework of principles for balancing the need for privacy with other public policy interests, such as national security, law enforcement, and administrative efficiency. Ways to strike that balance vary among countries and according to the type of information under consideration. ADVISE is a data mining tool under development that is intended to facilitate the analysis of large amounts of data. It is designed to accommodate both structured data (such as information in a database) and unstructured data (such as e-mail texts, reports, and news articles), to allow an analyst to search for patterns in data, including relationships among entities (such as people, organizations, and events), and to produce visual representations of these patterns, referred to as semantic graphs. Although none are fully operational, DHS's planned uses of this tool include implementations at several departmental components, Immigration and Customs Enforcement among them. DHS is also considering further deployments of ADVISE. The intended benefit of the ADVISE tool is to help detect activities that threaten the United States by facilitating the analysis of large amounts of data that otherwise would be prohibitively difficult to review. DHS is currently in the process of testing the tool's effectiveness. ADVISE provides several capabilities that help to find and track relationships in data. These include graphically displaying the results of searches and providing automated alerts when predefined patterns of interest emerge in the data. The tool consists of three main elements—the Information Layer, Knowledge Layer, and Application Layer (depicted in fig. 2). At the Information Layer, disparate data are brought into the tool from various sources. These data sources can be both structured (such as computerized databases and watch lists) and unstructured (such as news feeds and text reports). For structured data, ADVISE contains software applications that load the data into the Information Layer and format them to conform to a specific predefined data structure, known as an ontology. Generally speaking, ontologies define entities (such as a person or place), attributes (such as name and address), and the relationships among them. For unstructured data, ADVISE includes several tools that extract information about entities and attributes. As with structured data, the output of these analyses is formatted and structured according to an ontology. Tagging information as specific entities and attributes is more difficult with unstructured data, and ADVISE includes tools that allow analysts to manually identify entities, attributes, and relationships among them. According to DHS officials, research is continuing on developing efficient and effective mechanisms for inputting different forms of unstructured data. 
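The ontology-driven structuring just described can be pictured with a small sketch. The Python below is purely illustrative: the entity types, attributes, and relationship label are hypothetical stand-ins, not ADVISE's actual ontology or code.

from dataclasses import dataclass

# Hypothetical ontology: entity types and the attributes each may carry.
ONTOLOGY = {
    "Person": {"name", "address"},
    "Organization": {"name"},
    "Event": {"name", "date"},
}

@dataclass
class Entity:
    entity_type: str
    attributes: dict

    def __post_init__(self):
        # Reject attributes the ontology does not define for this entity type.
        unknown = set(self.attributes) - ONTOLOGY[self.entity_type]
        if unknown:
            raise ValueError(f"attributes not in ontology: {unknown}")

@dataclass
class Relationship:
    source: Entity
    target: Entity
    label: str          # e.g., "employed_by", "attended"

# A structured record loaded and formatted to conform to the ontology.
person = Entity("Person", {"name": "J. Smith", "address": "123 Main St."})
org = Entity("Organization", {"name": "Example Corp"})
link = Relationship(person, org, "employed_by")
print(link)

Whether data arrive structured or are extracted from text, the end result in this sketch is the same: entities, attributes, and labeled relationships that conform to a common schema.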
ADVISE can also include information about the data—known as “metadata”—such as the time period to which the data pertain and whether the data refer to a U.S. person. ADVISE metadata also include confidence attributes, ranging from –1 to 1, which represent subjective assessments of the accuracy of the data. Each data source has a predefined confidence attribute. Analysts can change the confidence attribute of specific data, but changes to confidence levels are tracked and linked to the analysts making the changes. At the Knowledge Layer, facts and relationships from the Information Layer are consolidated into a large-scale semantic graph and various subgraphs. Semantic graphing is a data modeling technique that uses a combination of “nodes,” representing specific entities, and connecting lines, representing the relationships among them. Because they are well-suited to representing data relationships and linkages, semantic graphs have emerged as a key technology for consolidating and organizing disparate data. Figure 3 represents the format that a typical semantic graph could take. The Knowledge Layer contains the semantic graph of all facts reported through the Information Layer interface and organized according to the ontology. The Knowledge Layer also includes the capability to provide automatic alerts to analysts when patterns of interest (or partial patterns) are matched by new incoming information. At the Application Layer, analysts are able to interact with the data that reside in the Knowledge Layer. The Application Layer contains tools that allow analysts to perform both pattern-based and subject-based queries and to search for data that match a specific pattern, as well as data that are connected with a specific entity. For example, analysts could search for all of the individuals who have traveled to a certain destination within a given period of time, or they could search for all information connected with a particular person, place, or organization. The resulting output of these searches is then graphically displayed via semantic graphs. ADVISE's Application Layer also provides several other capabilities that allow for the further examination and adjustment of its output. An analyst can pinpoint nodes on a semantic graph to view and examine additional information related to them, including the source from which the information and relationships are derived, the data source's confidence level, and whether the data pertain to U.S. persons. The ADVISE Application Layer also provides analysts the ability to monitor patterns of interest in the data. Science and Technology Directorate staff work with component staff to define patterns of interest and build an inventory of automated searches. These patterns are continuously being monitored in the data, and an alert is provided whenever there is a match. For example, an analyst could define a pattern of interest as “all individuals traveling from the United States to the Middle East in the next 6 months” and have the ADVISE tool provide an alert whenever this pattern emerges in the data. The current planned uses of the ADVISE tool include implementations at several DHS components that are planning to use it in a variety of homeland security applications to further their respective organizational missions. Currently none of these implementations is fully operational or widely accessible to DHS analysts. Rather, they are all still in various phases of systems development. 
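A rough sketch of how a semantic graph with confidence metadata might support the subject-based queries and pattern alerts described above follows; the entities, relationship labels, dates, and alert logic are hypothetical and greatly simplified relative to the tool itself.

from datetime import date

# Nodes (entities) carry minimal metadata, such as whether they refer to a U.S. person;
# edges carry a relationship label, a confidence value between -1 and 1, and the data source.
nodes = {
    "J. Smith": {"kind": "Person", "us_person": True},
    "Middle East": {"kind": "Place"},
}
edges = [
    ("J. Smith", "traveled_to", "Middle East",
     {"confidence": 0.7, "source": "travel records", "date": date(2006, 9, 1)}),
]

def subject_query(entity):
    # Subject-based query: every relationship touching a given entity.
    return [e for e in edges if entity in (e[0], e[2])]

def travel_alert(destination, start, end):
    # Pattern of interest: travel to a destination within a time window.
    return [e for e in edges
            if e[1] == "traveled_to" and e[2] == destination
            and start <= e[3]["date"] <= end]

print(subject_query("J. Smith"))
print(travel_alert("Middle East", date(2006, 7, 1), date(2006, 12, 31)))

A production semantic graph would use a dedicated graph store and far richer metadata; the point here is only how entities, labeled relationships, and confidence values combine to answer the two query types and trigger an alert when a pattern emerges.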
The component applications just described are expected to use the tool primarily to help analysts detect threats to the United States, such as identifying activities or individuals that could be associated with terrorism. The intended benefit of the ADVISE tool is to consolidate large amounts of structured and unstructured data and permit their analysis and visualization. The tool could thus assist analysts in identifying and monitoring patterns of interest that could be further investigated and might otherwise have been missed. None of the DHS components have fully implemented the tool in operational systems and, as discussed earlier, testing of the tool is still under way. Until such testing is complete and component implementations are fully operational, the intended benefit remains largely potential. Use of the ADVISE tool raises a number of privacy concerns. DHS has added security controls to the ADVISE tool, including access restrictions, authentication procedures, and security auditing capability. However, it has not assessed privacy risks. Privacy risks that could apply to ADVISE include the potential for erroneous association of individuals with crime or terrorism through data that are not accurate for that purpose, the misidentification of individuals with similar names, and the use of data that were collected for other purposes. A PIA would identify the privacy risks associated with ADVISE and help officials determine what specific controls are needed to mitigate those risks. Although department officials believe a PIA is not needed given that the ADVISE tool itself does not contain personal data, the E-Government Act of 2002 and related federal guidance require the completion of PIAs from the early stages of development. Further, if a PIA were conducted and privacy risks identified, a number of controls exist that could be built into the tool to mitigate those risks. For example, controls could be implemented to ensure that personal information is used only for a specified purpose or compatible purposes, or they could provide the capability to distinguish among individuals that have similar names (a process known as disambiguation) to address the risk of misidentification. Because privacy risks such as these have not been assessed and decisions about mitigating controls have not been made, DHS faces the risk that system implementations based on the tool may require costly and potentially duplicative retrofitting at a later date to add the needed controls. Like other data mining applications, the use of the ADVISE tool in conjunction with personal information raises concerns about a number of privacy risks that could potentially have an adverse impact on individuals. As the DHS Privacy Office's July 2006 report on data mining activities notes, “privacy and civil liberties issues potentially arise in every phase of the data mining process.” Potential privacy risks can be categorized in relation to the Fair Information Practices, which, as discussed earlier, form the basis for privacy laws such as the Privacy Act. For example, the potential for personal information to be improperly accessed or disclosed relates to the security safeguards principle, which states that personal information should be protected against risks such as loss or unauthorized access, destruction, use, modification, or disclosure. 
Further, the potential for individuals to be misidentified or erroneously associated with inappropriate activities is inconsistent with the data quality principle that personal data should be accurate, complete, and current, as needed for a given purpose. Similarly, the risk that information could be used beyond the scope originally specified is based on the purpose specification and use limitation principles, which state that, among other things, personal information should only be collected and used for a specific purpose and that such use should be limited to the specified purpose and compatible purposes. Like other data mining applications, the ADVISE tool could misidentify or erroneously associate an individual with undesirable activity such as fraud, crime, or terrorism—a result known as a false positive. False positives may be the result of poor data quality, or they could result from the inability of the system to distinguish among individuals with similar names. Data quality, the principle that data should be accurate, current, and complete as needed for a given purpose, could be particularly difficult to ensure with regard to ADVISE because the tool brings together multiple, disparate data sources, some of which may be more accurate for the analytical purpose at hand than others. If data being analyzed by the tool were never intended for such a purpose or are not accurate for that purpose, then conclusions drawn from such an analysis would also be erroneous. Another privacy risk is the potential for use of the tool to extend beyond the scope of what it was originally designed to address, a phenomenon commonly referred to as function or mission “creep.” Because it can facilitate a broad range of potential queries and analyses and aggregate large quantities of previously isolated pieces of information, ADVISE could produce aggregated, organized information that organizations could be tempted to use for purposes beyond that which was originally specified when the information was collected. The risks associated with mission creep are relevant to the purpose specification and use limitation principles. To address security, DHS has included several types of controls in ADVISE. These include authentication procedures, access controls, and security auditing capability. For example, an analyst must provide a valid user name and password in order to gain access to the tool. Further, upon gaining access, only users with appropriate security clearances may view sensitive data sets. Each service requested by a user—such as issuing a query or retrieving a document—is checked against the user’s credentials and access authorization before it is provided. In addition, these user requests and the tool’s responses to them are all recorded in an audit log. While inclusion of controls such as these is a key step in guarding against unauthorized access, use, disclosure, or modification, such controls alone do not address the full range of potential privacy risks. The need to evaluate such risks early in the development of information technology is consistently reflected in both law (the E-Government Act of 2002) and related federal guidance. The E-Government Act requires that a PIA be performed before an agency develops or procures information technology that collects, maintains, or disseminates information in a personally identifiable form. Further, both OMB and DHS PIA guidance emphasize the need to assess privacy risks from the early stages of development. 
However, although DHS officials are considering performing a PIA, no PIA or other privacy risk assessment has yet been conducted. The DHS Privacy Office instructed the Science and Technology Directorate that a PIA was not required because the tool alone did not contain personal data. According to the Privacy Office rationale, only specific system implementations based on ADVISE that contained personal data would likely require PIAs, and only at the time they first began to use such data. However, guidance on conducting PIAs makes it clear that they should be performed at the early stages of development. OMB’s PIA guidance requires PIAs at the IT development stage, stating that they “should address the impact the system will have on an individual’s privacy, specifically identifying and evaluating potential threats relating to elements identified [such as the nature, source, and intended uses of the information] to the extent these elements are known at the initial stages of development.” Regarding ADVISE, the tool’s intended uses include applications containing personal information. Thus the requirement to conduct a PIA from the early stages of development applies. As of November 2006, the ADVISE program office and DHS Privacy Office were in discussions regarding the possibility of conducting a privacy assessment similar to a PIA but modified to address the development of a technological tool. No final decision has yet been made on whether or how to proceed with a PIA. However, until such an assessment is performed, DHS cannot be assured that privacy risks have been identified or will be mitigated for system implementations based on the tool. A variety of privacy controls can be built into data mining software applications, including the ADVISE tool, to help mitigate risks identified in PIAs and protect the privacy of individuals whose information may be processed. DHS has recognized the importance of implementing such privacy protections when data mining applications are being developed. Specifically, in its July 2006 report, the DHS Privacy Office recommended instituting controls for data mining activities that go beyond conducting PIAs and implementing standard security controls. Such measures could be applied to the development of the ADVISE tool. Among other things, the DHS Privacy Office recommended that DHS components use data mining tools principally as investigative tools and not as a means of making automated decisions regarding individuals. The report also emphasizes that data mining should produce accurate results and recommends that DHS adopt data quality standards for data used in data mining. Further, the report recommends that data mining projects give explicit consideration to using anonymized data when personally identifiable information is involved. Although some of the report’s recommendations may apply only to operational data mining activities, many reflect system functionalities that can be addressed during technology development. Based on privacy risks identified in a PIA, controls exist that could be implemented in ADVISE to mitigate those risks. For example, controls could be implemented to enforce use limitations associated with the purpose specified when the data were originally collected. Specifically, software controls could be implemented that require an analyst to specify an allowable purpose and check that purpose against the specified purposes of the databases being accessed. 
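As one illustration of the kind of software control suggested in the preceding sentence, the minimal Python sketch below checks an analyst's stated purpose against the purposes specified for each database before a query is allowed to run. The database names and purpose labels are invented for the example and do not describe any actual DHS data source.

# Each database is tagged with the purposes specified when its data were collected.
DATABASE_PURPOSES = {
    "immigration_records": {"border security", "immigration enforcement"},
    "benefit_claims": {"benefit administration"},
}

def run_query(database, stated_purpose):
    # Refuse the query unless the analyst's stated purpose matches a specified purpose.
    if stated_purpose not in DATABASE_PURPOSES[database]:
        raise PermissionError(
            f"'{stated_purpose}' is not a specified purpose of {database}")
    return f"query executed against {database} for {stated_purpose}"

print(run_query("immigration_records", "border security"))    # permitted
# run_query("benefit_claims", "border security")               # would raise PermissionError

A control of this kind enforces the purpose specification and use limitation principles at query time rather than relying solely on after-the-fact review.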
Regarding data quality risks, the ADVISE tool currently does not have the capability to distinguish among individuals with similar identifying information, nor does it have a mechanism to assess the accuracy of the relationships it uncovers. To address the risk of misidentification, software could be added to the tool to distinguish among individuals that have similar names, a process known as disambiguation. Disambiguation tools have been developed for other applications. Additionally, although the ADVISE tool includes a feature that allows analysts to designate confidence levels for individual pieces of data, no mechanism has been developed to assess the confidence of relationships identified by the tool. While software specifically to determine data quality would be difficult to develop, other controls exist that could be readily used as part of a strategy for mitigating this risk. For example, anonymization could be used to minimize the exposure of personal data, and operational procedures could be developed to restrict the use of analytical results containing personal information that could have data quality concerns. To implement anonymization, the tool would need the software capability to handle anonymized data or have a built-in data anonymizer. DHS currently does not have plans to build anonymization into the ADVISE tool. Until a PIA that identifies the privacy risks of ADVISE is conducted and privacy controls to mitigate those risks are implemented, DHS faces the risk that privacy concerns will arise during implementation of systems based on ADVISE that may be more difficult to address at that stage and possibly require costly retrofitting. The ADVISE tool is intended to provide the capability to ingest large amounts of data from multiple sources and to display relationships that can be discerned within the data. Although the ADVISE tool has not yet been fully implemented and its effectiveness is still being evaluated, the chief intended benefit is to help detect activities threatening to the United States by facilitating the analysis of large amounts of data. The ADVISE tool incorporates security controls intended to protect the information it processes from unauthorized access. However, because ADVISE is intended to be used in ways that are likely to involve personal data, a range of potential privacy risks could be involved in its operational use. Thus, it is important that those risks be assessed—through a PIA—so that additional controls can be established to mitigate them. However, DHS has not yet conducted a PIA, despite the fact that the E-Government Act and related OMB and DHS guidance emphasize the need to assess privacy risks early in systems development. Although DHS officials stated that they believe a PIA is not required because the tool alone does not contain personal data, they also told us they are considering conducting a modified PIA for the tool. Until a PIA is conducted, little assurance exists that privacy risks have been rigorously considered and mitigating controls established. If controls are not addressed now, they may be more difficult and costly to retrofit at a later stage. 
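One simple way anonymization could minimize the exposure of personal data, as discussed above, is to replace direct identifiers with keyed hashes so that records remain linkable without revealing names or Social Security numbers. The sketch below is a generic illustration under that assumption, not a description of any planned ADVISE capability; in practice the key would need to be managed outside the analytic tool and protected far more carefully.

import hashlib
import hmac

# Assumption: the key is held outside the analytic tool so tokens cannot be reversed there.
SECRET_KEY = b"replace-with-a-protected-key"

def anonymize(identifier: str) -> str:
    # Keyed hash: the same identifier always yields the same token, so records stay linkable.
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "J. Smith", "ssn": "123-45-6789", "destination": "Middle East"}
anonymized = {
    "person_token": anonymize(record["ssn"]),   # pseudonym replaces the direct identifiers
    "destination": record["destination"],       # analytic content is retained
}
print(anonymized)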
To ensure that privacy protections are in place before DHS proceeds with implementations of systems based on ADVISE, we recommend that the Secretary of Homeland Security take the following two actions: immediately conduct a privacy impact assessment of the ADVISE tool to identify privacy risks, such as those described in this report, and implement privacy controls to mitigate potential privacy risks identified in the PIA. We received oral and written comments on a draft of this report from the DHS Departmental GAO/Office of Inspector General Liaison Office. (Written comments are reproduced in appendix II.) DHS officials generally agreed with the content of this report and described actions initiated to address our recommendations. DHS also provided technical comments, which have been incorporated in the final report as appropriate. In its comments DHS emphasized the fact that the ADVISE tool itself does not contain personal data and that each deployment of the tool will be reviewed through the department's privacy compliance process, including, as applicable, development of a PIA and a system of records notice. DHS further stated that it is currently developing a “Privacy Technology Implementation Guide” to be used to conduct a PIA for ADVISE. Although we have not reviewed the guide, it appears to be a positive step toward developing a PIA process to address technology tools such as ADVISE. It is not clear from the department's response whether the privacy controls identified based on applying the Privacy Technology Implementation Guide to ADVISE are to be incorporated into the tool itself. We believe that any controls identified by a PIA to mitigate privacy risks should be implemented, to the extent possible, in the tool itself. Specific development efforts that use the tool will then have these integrated controls readily available, thus reducing the potential for added costs and technical risks. The department also requested that we change the wording of our recommendation; however, we have retained the wording in our draft report because it clearly emphasizes the need to incorporate privacy controls into the ADVISE tool itself. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security and other interested congressional committees. Copies will be made available to others on request. In addition, this report will be available at no charge on our Web site at www.gao.gov. If you have any questions concerning this report, please call me at (202) 512-6240 or send e-mail to koontzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to determine (1) the planned capabilities, uses, and associated benefits of the Analysis, Dissemination, Visualization, Insight, and Semantic Enhancement (ADVISE) tool and (2) whether potential privacy issues could arise from using the ADVISE tool to process personal information and how the Department of Homeland Security (DHS) has addressed any such issues. To address our first objective, we identified and analyzed the tool's capabilities, planned uses, and associated benefits. 
We reviewed program documentation, including annual program execution plans, and interviewed agency officials responsible for managing and implementing the program, including officials from the DHS Science and Technology Directorate and the Lawrence Livermore and Pacific Northwest National Laboratories. We also viewed a demonstration of the tool's semantic graphing capability. In addition, we interviewed officials at DHS components to identify their current or planned uses of ADVISE, the progress of their implementations, and the benefits they hope to gain from using the tool. These components included Immigration and Customs Enforcement, among others. We also interviewed officials from the Interagency Center for Applied Homeland Security Technologies (ICAHST), who are responsible for conducting testing of the tool's capabilities. In addition, we visited ICAHST at the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland, to view a demonstration of its testing activities. We did not conduct work or review implementations of ADVISE at the DHS Office of Intelligence and Analysis. To address our second objective, we identified potential privacy concerns that could arise from using the ADVISE tool by reviewing relevant reports, including prior GAO reports and the DHS Privacy Office 2006 report on data mining. We identified and analyzed DHS actions to comply with the Privacy Act of 1974 and the E-Government Act of 2002. We interviewed technical experts within the DHS Science and Technology Directorate and personnel responsible for implementing ADVISE at DHS components to assess privacy controls included in the ADVISE tool. We also interviewed officials from the DHS Privacy Office. We performed our work from June 2006 to December 2006 in the Washington, D.C., metropolitan area. Our work was performed in accordance with generally accepted government auditing standards. In addition to the individual named above, John de Ferrari, Assistant Director; Idris Adjerid; Nabajyoti Barkakati; Barbara Collier; David Plocher; and Jamie Pressman made key contributions to this report.
The government's interest in using technology to detect terrorism and other threats has led to increased use of data mining. A technique for extracting useful information from large volumes of data, data mining offers potential benefits but also raises privacy concerns when the data include personal information. GAO was asked to review the development by the Department of Homeland Security (DHS) of a data mining tool known as ADVISE (Analysis, Dissemination, Visualization, Insight, and Semantic Enhancement). Specifically, GAO was asked to determine (1) the tool's planned capabilities, uses, and associated benefits and (2) whether potential privacy issues could arise from using it to process personal information and how DHS has addressed any such issues. GAO reviewed program documentation and discussed these issues with DHS officials. ADVISE is a data mining tool under development intended to help DHS analyze large amounts of information. It is designed to allow an analyst to search for patterns in data--such as relationships among people, organizations, and events--and to produce visual representations of these patterns, referred to as semantic graphs. None of the three planned DHS implementations of ADVISE that GAO reviewed are fully operational. (GAO did not review uses of the tool by the DHS Office of Intelligence and Analysis.) The intended benefit of the ADVISE tool is to help detect threatening activities by facilitating the analysis of large amounts of data. DHS is currently in the process of testing the tool's effectiveness. Use of the ADVISE tool raises a number of privacy concerns. DHS has added security controls to the tool; however, it has not assessed privacy risks. Privacy risks that could apply to ADVISE include the potential for erroneous association of individuals with crime or terrorism and the misidentification of individuals with similar names. A privacy impact assessment would identify specific privacy risks and help officials determine what controls are needed to mitigate those risks. ADVISE has not undergone such an assessment because DHS officials believe it is not needed given that the tool itself does not contain personal data. However, the tool's intended uses include applications involving personal data, and the E-Government Act and related guidance emphasize the need to assess privacy risks early in systems development. Further, if an assessment were conducted and privacy risks identified, a number of controls could be built into the tool to mitigate those risks. For example, controls could be implemented to ensure that personal information is used only for a specified purpose or compatible purposes, and they could provide the capability to distinguish among individuals that have similar names to address the risk of misidentification. Because privacy risks have not been assessed and mitigating controls have not been implemented, DHS faces the risk that ADVISE-based system implementations containing personal information may require costly and potentially duplicative retrofitting at a later date to add the needed controls.
Requirements for submitting SARs to Congress, including the timing of these reports and the types of information to be included, are established in statute. Under 10 U.S.C. § 2432, the Secretary of Defense shall submit to Congress at the end of each fiscal-year quarter a report on current major defense acquisition programs. Each SAR for the first quarter of a fiscal year (also known as the comprehensive annual SAR) shall be designed to provide to the Committee on Armed Services of the Senate and the Committee on Armed Services of the House of Representatives the information these committees need to perform their oversight functions. The comprehensive annual SAR shall be submitted within 60 days after the date on which the President's Budget is sent to Congress for the following fiscal year. The statute also requires that the annual SAR include a full life-cycle cost analysis for each major defense acquisition program and each designated major subprogram included in the report that is in the system development and demonstration stage or has completed that stage. (This stage of acquisition is now called engineering and manufacturing development.) Further, the Secretary of Defense must ensure that this requirement is implemented in a uniform manner, to the extent practicable, throughout DOD. The term full life-cycle cost, with respect to a major weapon system, means all costs of development, procurement, military construction, and operations and support, without regard to funding source or management control. If the major weapon system has an antecedent system, a full life-cycle cost analysis for that system must also be reported. The SAR reporting requirement ceases after 90 percent of the items are delivered or 90 percent of planned expenditures under the program are made. DOD has issued various guidance documents that implement the statutory SAR requirements. This guidance is contained in an acquisition instruction, a guidebook on defense acquisition best practices, a draft SAR policy, and an annual memorandum on preparing SARs. DOD also has developed instructions and training for entering SAR data into the Defense Acquisition Management Information Retrieval system. According to officials, program offices rely on DOD's implementation guidance because the services do not have their own formal SAR reporting guidance. DOD's guidance is summarized below and discussed more fully in appendix II. DOD's acquisition process includes a series of decision milestones as the systems enter different stages of development and production. As part of the process, the DOD component or joint program office responsible for the acquisition program is required to prepare life-cycle cost estimates, which include O&S costs, to support these decision milestones and other reviews. Key decision milestones include milestone B, which approves entry into the engineering and manufacturing development phase, and milestone C, which approves entry into the production and deployment phase, including low-rate initial production. Continuation into full-rate production occurs after the full-rate production decision review is held. In conjunction with a milestone decision, a program may be rebaselined, which means that the cost, quantity, schedule, and performance goals are changed to reflect the current status. 
DOD's SAR implementation guidance states that program offices should provide explanatory information such as the source and date of the cost estimate, assumptions underlying the estimate, the antecedent system used for comparison purposes, and an explanation of how average annual costs were calculated. DOD officials stated that programs should report the cost estimate that was developed for the latest acquisition milestone decision. According to the guidance, programs should report total estimated O&S costs and should also report average O&S costs by a unit of measure (e.g., average annual cost per squadron, average annual cost per system). DOD's guidance states that if a program has an antecedent system, then O&S costs and assumptions should be submitted for the antecedent system. In addition to its SAR implementation guidance, DOD has issued guidance for developing weapon system O&S cost estimates, which provide the basis for the O&S cost section of each SAR. Specifically, the OSD Cost Analysis Improvement Group, now known as the Cost Assessment and Program Evaluation (CAPE) office, has established guidance for preparing and presenting life-cycle O&S cost estimates at acquisition milestone reviews. O&S cost elements, for example, are to be grouped into six major areas—unit-level personnel, unit operations, maintenance, sustaining support, continuing system improvements, and indirect support—which are further broken down into 23 subelements. In addition, we have identified federal government best practices for preparing and presenting cost estimates. These practices include tracking cost estimates over time; identifying the major cost drivers; identifying the method and process for estimating each cost element; and comparing the program-developed cost estimate to an independent cost estimate. When required, a comprehensive annual SAR is prepared for each major weapon system by the program office that is managing its acquisition. Program offices are responsible for weapon systems throughout the life cycle, including implementing, managing, or overseeing their development, production, fielding, sustainment, and disposal. The reporting time frame for the annual SAR is linked to the issuance of the President's Budget, which occurs early in the calendar year, and the cost, schedule, and performance data reported in the annual SAR should reflect this budget request. The Office of the Under Secretary of Defense for Acquisition, Technology and Logistics begins the process by sending out its annual memorandum guidance in mid-January. Program offices then enter data into the Defense Acquisition Management Information Retrieval system and submit the SARs to OSD acquisition officials, generally after some level of internal review by the program office, the Program Executive Officer, and the military service under which the program is organized. OSD officials review the SAR submissions, and officials within the Office of the Assistant Secretary of Defense (Logistics and Materiel Readiness) focus on the O&S section of the reports. OSD officials then hold a series of meetings with the services and program office representatives to discuss the SAR submissions and any recommended changes. Consistent with the statutory requirement, the final annual SAR is typically submitted to Congress in April, 60 days after the President's Budget has been submitted in February. 
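To illustrate how CAPE's cost element structure and the average-cost reporting described above fit together arithmetically, the short Python sketch below rolls up hypothetical figures for the six major O&S cost elements and converts the total into an average annual cost per unit of measure; all numbers, the service life, and the unit count are invented for the example and do not reflect any actual program.

# Hypothetical life-cycle O&S costs ($ millions) by CAPE's six major cost elements.
oas_costs_by_element = {
    "unit-level personnel": 4200,
    "unit operations": 1800,
    "maintenance": 3100,
    "sustaining support": 900,
    "continuing system improvements": 600,
    "indirect support": 1400,
}

service_life_years = 30     # assumed service life
units = 120                 # assumed unit of measure, e.g., number of aircraft

total = sum(oas_costs_by_element.values())
average_annual_per_unit = total / service_life_years / units
print(f"Total life-cycle O&S cost: ${total:,.0f} million")
print(f"Average annual O&S cost per unit: ${average_annual_per_unit:.2f} million")

Breaking each of the six major elements into CAPE's lower-level subelements would follow the same rollup logic and, as discussed below, would give decision makers better visibility of specific cost drivers.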
Program offices were often inconsistent in how they reported life-cycle O&S cost estimates in the SAR and also did not follow best practices for presenting cost estimates. In addition, some programs did not provide any O&S cost estimates in the 2010 SAR. Further, several of the programs we reviewed in more depth reported unreliable O&S cost data. The main cause for these deficiencies was a lack of detailed SAR implementation guidance for reporting O&S costs. In addition, DOD's process for reviewing the O&S cost sections of the SAR prior to their final submission did not provide assurance that the program offices reported costs uniformly, to the extent practicable, and that these reported costs were reliable. In the absence of improvements to the SAR guidance and to DOD's review process, deficiencies in reporting estimated life-cycle O&S costs are likely to continue. Such deficiencies may limit the visibility needed for effective oversight of long-term weapon system O&S costs during the acquisition process. The SAR statute requires that life-cycle cost reporting for major weapon systems be uniform, to the extent practicable, across the department, but we found a number of inconsistent practices in how program offices were reporting life-cycle O&S cost estimates in the SAR. Based on the SAR submissions we reviewed, program offices were inconsistent in (1) the explanatory information they included with the cost estimates, (2) the source of the cost estimate they cited as the basis for the reported costs, (3) the unit of measure they used to portray average costs, (4) the frequency with which they updated reported costs, and (5) the reporting of antecedent system costs. In addition to these inconsistencies, we found that SAR submissions also did not incorporate best practices for presenting cost estimates, such as tracking cost changes over time and identifying cost drivers. In addition, 11 systems did not provide O&S cost estimates in the 2010 SAR. Submitting more consistent cost reports and incorporating best practices for presenting cost estimates would improve visibility of estimated life-cycle O&S costs in the SAR, as decision makers will have more information with which to evaluate the reported data. For example, the inclusion of the date and the source of the reported estimate provides context about the currency of the reported costs and the level of review (that is, whether the cost estimate was prepared by the program office, by the military service, or by CAPE). Likewise, the inclusion of significant assumptions underlying the cost estimate, an explanation of changes in the cost estimate from the prior year, and information on major cost drivers provides insight into the cost challenges facing the program. In addition, showing average costs using a common unit of measure allows for easier comparison of program costs to the costs of similar commodities (such as other aircraft programs). DOD's implementation guidance for the SAR directs programs to include explanatory information in the narrative accompanying the O&S cost estimates, such as the source and date of the cost estimate, assumptions underlying the estimate (such as operating tempo, expected reliability and maintainability of the system, maintenance concept, and manning and logistics policies), the antecedent program used for comparison purposes, and an explanation of how average costs were calculated. 
Although explanatory information can provide context and background for understanding reported costs, we found that the explanatory information included in the O&S narrative was often minimal. Of the 84 programs that reported O&S costs in the 2010 SAR, we found that 35 (42 percent) did not include the source of the estimate and 12 (14 percent) did not include the date of the estimate in the O&S narrative. Additionally, for the 15 programs in our sample, we found that beyond providing a few basic details such as the number of units that were to be acquired, their expected service life, and operating tempo, where applicable, the O&S narrative contained minimal explanation of reported cost estimates and the assumptions underlying these estimates, as the following examples illustrate: The program office for the Army's High Mobility Artillery Rocket System included several assumptions, such as the number of launchers and the service life. However, instead of reporting additional O&S cost estimate assumptions (such as operating tempo and expected reliability/maintainability) in the SAR narrative, the program stated that this information was available in the service cost estimate. The Joint Mine Resistant Ambush Protected (MRAP) vehicle program office noted a few specific assumptions, such as the expected service life of the fleet and the cost per mile for replenishing spare parts. However, the remaining O&S narrative for the program generically explained that the estimate included personnel, training, facilities, vehicle and component repair, and sustainment overhauls, but provided no other specifics on these areas. Only three programs—the Army's High Mobility Artillery Rocket System, the Air Force Joint Primary Aircraft Training System, and the Army's Force XXI Battle Command Brigade and Below (FBCB2)—included the maintenance concept planned for that system in their O&S narratives. However, even in these cases, the explanatory information for O&S costs was very limited. None of the 15 programs included assumptions on the reliability and maintainability of the weapon system in their O&S narrative. While not required by DOD's implementation guidance, 1 of the 15 programs in our sample included explanatory information on cost drivers in the SAR O&S narrative. The V-22's SAR submission for 2009 provided an explanation of the significant O&S cost increase from the prior SAR in 2007. In the 2007 SAR, the total O&S costs reported were $48.8 billion (fiscal year 2005 dollars). In the 2009 SAR, the program reported that this amount had grown to $75 billion (fiscal year 2005 dollars), and that the O&S cost category showing the greatest increase was unit-level consumption. In the O&S narrative, the program office attributed the majority of the cost increase to changes in the methodology used to estimate unit-level consumption costs. Specifically, the estimate was updated with the actual costs of parts from fiscal year 2009 and with projected future cost growth for parts that exceeds OSD's inflation indices. The program office also noted actions being taken to reduce unit-level consumption costs, such as changes to contracting strategy and accelerated timelines for repair capabilities. GAO-identified best practices for presenting cost estimates include identifying the largest cost elements and cost drivers, and providing enough information for informed decision making. 
In addition, we have previously reported that leading companies identify major drivers of O&S costs and work with manufacturers to reduce these costs. During our current review, we found that programs typically use CAPE's O&S cost element structure in reporting O&S costs, but their presentation is limited to the six major elements (e.g., unit-level personnel, maintenance, indirect support). Since each major O&S cost element includes various costs, this information is not sufficient to identify specific cost drivers. Using lower-level cost elements, as provided for in CAPE's cost element structure, could provide greater visibility of O&S costs for oversight by decision makers. For example, as noted in the case of the V-22 discussed above, the unit-level consumption cost element consists of a number of subelements that can provide additional insight into the discrete factors driving a change in the estimated life-cycle O&S costs for that system. Various cost estimates may be developed over the life cycle of a weapon system, and DOD officials stated that programs should report the cost estimate developed for the latest acquisition milestone decision. We found that program offices—when a source was cited—cited several different sources as the basis for their reported O&S cost information in the 2010 SAR, and they did not provide an explanation for selecting the source that was used rather than another source that may have been available. As shown in table 1, for the 84 programs that included O&S costs in the 2010 SAR, 42 (50 percent) of the programs cited a specific cost estimate as the source of reported O&S costs. These sources were either a program office cost estimate, service cost estimate, or CAPE independent cost estimate. Another 35 programs (42 percent) did not cite a source, as previously noted. The remaining 7 programs (8 percent) cited a source other than a specific cost estimate. Five programs in the "other" category in table 1 referred to cost estimates but did not provide enough detail to determine what type of cost estimate was used. For example, one program cited a "validated cost estimate" without additional specificity about this estimate. Similarly, four programs stated only that the source of their SAR O&S costs was a cost estimate prepared for an acquisition decision, but they did not provide additional information to identify a specific cost estimate. The remaining two programs in the "other" category in table 1, both of which were included in our sample, cited a source other than a cost estimate. One of these programs, the Navy Multiband Terminal, reported total costs from the milestone C acquisition program baseline, despite the existence of a service cost estimate prepared for the acquisition decision in July 2010. The other program, the Air Force's Navstar Global Positioning System (GPS), reported using current and future funding data instead of a cost estimate. Overall, six programs in our sample were among those that did not cite the source of the estimate used to report O&S costs. When we asked these six programs what source was used, five stated that the O&S cost estimate data in the 2010 SAR were derived from program office cost estimates, and the remaining program office stated that the source was a CAPE independent cost estimate.
The other nine programs in our sample had cited a source, with five citing a program office cost estimate; two citing a service cost estimate; and two, as noted above, citing either an acquisition program baseline or funding data as the source of their O&S costs. As shown in table 1, some programs cited a CAPE independent cost estimate as the source of the O&S costs reported in the 2010 SAR. However, we found that one program in our sample, the LHA 6 America Class, cited a program office cost estimate even though CAPE had developed an independent cost estimate. Further, while not required, the program did not mention in the SAR that an independent cost estimate had been developed. Since 2005 the LHA 6 America Class program has reported total O&S costs of $4.45 billion (fiscal year 2006 dollars) in its SAR submissions, reflecting a 2005 program office cost estimate. However, CAPE’s 2006 independent cost estimate of the program’s O&S costs was about $300 million (7 percent) higher. According to a CAPE memorandum, this higher estimate was also not adjusted for cost growth above inflation. CAPE noted that O&S costs for the LHA 1, an antecedent system, had increased 4 percent annually since 1990 due to increased mission personnel and overhaul costs. According to CAPE, adjusting for this same rate of cost growth above inflation in its LHA 6 estimate would result in an additional $530 million throughout the system’s life cycle, or total O&S costs of $5.29 billion. Additionally, F-35 officials told us that they plan to continue using the program office’s cost estimate to report O&S costs in the SAR although CAPE is preparing an independent cost estimate for the program’s next acquisition milestone. GAO-identified best practices for presenting cost estimates include providing a comparison of the program estimate to an independent cost estimate, with an explanation of results and differences. Such a comparison is beneficial because an independent cost estimate should provide an objective and unbiased assessment of expected program costs that tests the program’s estimate for reasonableness. History has shown a pattern of higher, more accurate cost estimates the further away from the program office the independent cost estimate is prepared. In the 2009 Weapon Systems Acquisition Reform Act, Congress placed greater emphasis on independent review of program cost estimates by requiring that CAPE review cost estimates prepared in connection with all major weapon systems, and conduct independent cost estimates for certain systems prior to the milestone A, milestone B, low-rate initial production, and full-rate production acquisition decisions. Prior to the Act, CAPE was required to conduct independent cost estimates for some programs, but was not required to review cost estimates prepared for all major weapon systems. DOD’s implementation guidance for the SAR states that programs should report average O&S costs in a unit of measure determined by the military service under which the system’s acquisition is being managed. Programs are to report these average costs using CAPE’s major cost elements. We found that several program offices had changed the unit of measure they reported in the SAR from that used in previous SARs. In addition, we found that the units of measure that were being reported varied, particularly among aircraft programs. These inconsistencies make it difficult to compare a program’s current and prior-year costs, or to compare costs of similar programs. 
Of the 84 programs that reported O&S costs in the 2010 SAR, 5 (6 percent) changed the average unit of measure reported from that used the prior year. Specifically, two aircraft programs went from reporting costs per squadron in the 2009 SAR to reporting costs per aircraft in their 2010 SAR, a missile program went from reporting costs per unit in the 2009 SAR to reporting total program costs in the 2010 SAR, and two programs for communications systems went from reporting total program costs in the 2009 SAR to reporting costs per radio in the 2010 SAR. These last two programs—Joint Tactical Radio System (JTRS) Ground Mobile Radios and JTRS Handheld, Manpack, and Small Form Fit—were included in our sample. When we asked why they changed the unit of measure, program officials responded that the decision was made based on feedback they received from OSD when their 2010 SAR submissions were undergoing review. Of the 5 programs, only the two aircraft programs disclosed in the SAR that the unit of measure for that system had changed from the prior year. These two programs reported that they changed the unit of measure in order to standardize the calculation and increase the comparability of programs within the same major command. Also, based on analysis of the 84 systems, we found the most variation in the unit of measure among aircraft systems. Different programs reported the average cost per flying hour or the average annual cost per aircraft, per squadron of aircraft, or per the entire fleet. This issue was also evident among the programs in our sample that we analyzed in more depth. For example, the F-35 program reported average cost per flying hour, the V-22 program reported average cost per aircraft, the F-22 program reported average cost per squadron, and the Joint Primary Aircraft Training System (a training aircraft) reported average cost for the whole fleet. Ship costs, in contrast, were generally reported as average cost per ship or hull, although one ship program reported average annual cost per fleet. O&S costs for ground and other types of weapon systems were usually reported as either cost per weapon system unit or total cost for all weapon system units. However, a few other metrics were reported by these programs, such as average annual cost per battalion or per brigade combat team. Although portraying average costs with a unit of measure could be useful for tracking cost changes over time, we found that it was generally not possible to identify changes in estimated O&S costs based on the information reported in a single, annual SAR, since programs do not report costs from the prior SAR. Although major weapon system programs are required to identify and reconcile changes to estimated acquisition costs from the prior SAR, and to provide an explanation for each change, this is not required for O&S costs. Even though two of our sample programs, the V-22 and the Navstar GPS, included a statement in the SAR narrative that their O&S costs had changed, it was not possible to tell by how much without the prior year's cost data. Our year-to-year comparisons of reported costs in the SARs showed that cost changes were occurring. For example, we found that the total estimated O&S costs for the JTRS Handheld, Manpack, and Small Form Fit program decreased from $25.5 billion (fiscal year 2004 dollars) in 2009 to $10.2 billion in 2010 (fiscal year 2004 dollars).
This $15.3 billion decrease occurred despite an increase in acquisition quantity of about 5,000 radios, from around 216,000 to around 221,000. This change, as well as the reasons for the change, was not identified in the SAR narrative. Similarly, we found that the total estimated O&S costs for the F-35 program increased $50 billion (fiscal year 2002 dollars) from 2009 to 2010. The reason for this increase was not explained in the O&S narrative in the SAR. According to GAO-identified best practices for presenting cost estimates, cost estimates should be tracked over time. Specifically, after an estimate is updated, a comparison of the current and prior estimate should be routinely performed and documented, and the results reported to decision makers. A documented comparison allows cost estimators to see how well they are estimating and how the program is changing over time. It also allows others to track the estimates and to identify when, by how much, and why the program cost more or less than planned. Updated cost estimates can help to ensure that decision makers have the most current data available on a program. The SAR statute requires major defense acquisition programs to begin reporting when the program is approved to begin the development phase of the acquisition process at milestone B, and DOD's implementation guidance similarly states that a SAR should first be submitted when a program is initiated, normally at milestone B, or designated as a major defense acquisition program, and also when the program is rebaselined after a major milestone decision. DOD officials stated that programs should report the cost estimate developed for the latest acquisition milestone decision. Our analysis for the 84 major weapon system programs that included O&S costs in the 2010 SAR showed that program offices were inconsistent in the frequency of their O&S cost updates between 2005 and 2010. In many cases, programs provided more frequent updates than required by DOD's guidance, sometimes annually. However, 8 (13 percent) of the 61 programs that were included in the SAR every year during the 2005 to 2010 period did not update their O&S costs at any time during that period. In contrast, 47 (56 percent) of the 84 programs in the 2010 SAR reported using a cost estimate that was prepared in 2010 or 2011 as the source of their O&S costs. These included 7 programs that began reporting SARs in 2009 or 2010. Of the 15 programs in our sample, 3 did not update their SAR O&S costs during the period between 2005 and 2010, 5 updated their costs once, 5 updated their costs 2 or 3 times, and 2 updated their O&S costs 4 times during the period. For example, the Navy's LHA 6 America Class program office has consistently reported the O&S costs estimated for milestone B, the program's only acquisition milestone while under SAR reporting requirements, in the annual SARs since 2005. Program officials told us that they were in the process of developing a new cost estimate for the LHA 7, the next ship in the America Class, and planned to use the new estimate as the source to report O&S costs in the program's 2011 SAR submission, if complete. Also, the Army's FBCB2 program has not updated its SAR O&S costs and is reporting costs estimated in 2004, even though the program's production quantity has quadrupled since then.
FBCB2 program officials told us that since its full-rate production decision in 2004, the program has experienced nearly continuous changes to its production quantity requirement, resulting in a significant effort to maintain and update the acquisition portion of the cost estimates and little time to research and update the O&S portion of the cost estimates. In contrast, several of our sample programs updated their O&S costs annually. The F-35 program has updated the reported SAR O&S costs annually since 2006, the beginning of the period we reviewed. According to F-35 program officials, they chose to do this because the F-35 is a high-visibility, high-interest program. Further, estimating O&S costs annually helps inform DOD leadership and keeps partner countries updated, program officials noted. Additionally, the Joint MRAP program office has updated its SAR O&S costs annually since the program began reporting these costs in 2009 and plans to do so until the services assume responsibility for the system around 2013. Program officials said they are incorporating actual cost data from the field as it becomes available and updating O&S costs annually in order to give the services the best data once the transfer takes place. Finally, the Army’s MQ-1C program has updated its SAR O&S costs annually since 2009. Although these costs were updated in 2010 for several reasons, including an increase in the number of systems to be acquired, program officials said they do not plan to update the program’s O&S costs annually. Officials for the remaining programs in our sample, which updated their O&S costs intermittently, gave various reasons for updating their program’s SAR O&S costs when they did. While one program updated the SAR as required to reflect the O&S costs estimated for an acquisition decision, other programs in our sample chose to update the costs after they developed estimates to reflect changes to the acquisition program (e.g., changes in production quantity or schedule), to incorporate actual O&S costs that are considerably different than previously estimated, or to comply with guidance not related to the SAR. For example, the Navy’s V-22 program office updated the O&S costs in the 2009 SAR because actual O&S costs incurred after the program’s initial operational capability in 2007 for the Marine Corps and 2009 for the Air Force were significantly higher than had been anticipated in the program’s most recent cost estimate. Prior to the 2009 update, the V-22 was reporting costs based on the estimate completed for an acquisition decision in 2005. The V-22 program office, in conjunction with U.S. Naval Air Systems Command, plans to review the program’s O&S costs annually and update the SAR as necessary until the program stops reporting SARs. According to officials, the final deliveries of the V-22 are scheduled for 2020. As another example, the Joint Primary Aircraft Training System program updated O&S costs in the 2010 SAR after reporting the same costs since 2001. According to officials, an updated program office cost estimate was developed to comply with a policy from the program’s major command that cost estimates be updated annually. DOD acquisition best practices and GAO-identified cost-estimating best practices call for maintaining updated estimates of program costs. 
According to the Defense Acquisition Guidebook, although a DOD or service cost estimate is required at milestone reviews, it is a good practice for this estimate, or at least its underlying program office cost estimate, to be updated more frequently, usually annually. Updated estimates should be useful in program management and financial management throughout the life of the program. GAO-identified best practices call for continual updates of cost estimates to keep them relevant and current, as most programs do not remain static, especially those in development. Routine updates that incorporate actual data result in higher-quality estimates as the program matures. Further, updating the cost estimate provides an accuracy check, defense of the estimate over time, shorter estimate preparation times, and archived cost and technical data for use in future estimates. In accordance with the SAR statute, DOD’s implementation guidance states that if a program has an antecedent system, then O&S costs and assumptions should be submitted for the antecedent system. We found that program offices, however, were inconsistent in reporting on antecedent system costs, with many not reporting any O&S cost data. Specifically, 57 (68 percent) of the 84 programs reporting O&S costs in the 2010 SAR did not report O&S costs for an antecedent system. It was unclear from the SARs how program offices had identified an antecedent system or whether, in cases where no antecedent system costs were included, the program offices had determined that an antecedent system did not exist. Nine of the 15 programs in our sample did not report O&S costs for an antecedent system in the 2010 SAR. Officials from these program offices provided various reasons for not reporting antecedent system costs, including that the system was the first of its type or not intended to replace any other system, that the system had advanced capabilities or no other system was similar enough for comparison, and that the system was replacing several legacy systems. As an example, Joint MRAP program officials said other systems, such as the High Mobility Multipurpose Wheeled Vehicle, were too different for cost comparisons. As another example, the Navy Multiband Terminal program began reporting in the 2006 SAR and has never reported antecedent O&S costs. According to program officials, an antecedent system was not identified because the system was replacing several legacy weapon systems. However, during a joint OSD/Navy SAR review meeting in March 2011, the program office was instructed to list two systems as antecedent systems in the 2010 SAR. While the program identified the Super High Frequency and Navy Extremely High Frequency Satellite programs as antecedent systems in the O&S section of its SAR, it also stated that program costs for these systems were not readily available. The SAR statute requires that all program costs be reported, regardless of funding source or management control. However, we found that of the 95 major weapon systems that had passed milestone B and reported costs in the 2010 SAR, 11 (12 percent) did not identify any O&S costs in their SARs. The 11 programs, as of December 2010, accounted for a total estimated investment of $56.7 billion (fiscal year 2011 dollars) for research and development, procurement, military construction, and acquisition-related operation and maintenance (see table 2). 
Most of the programs that did not report O&S costs were modifications to other weapon systems but qualify as major defense acquisition programs based on their procurement or research and development costs. Eight of the programs that did not report O&S costs are major modifications to, or subsystems of, Air Force weapon systems. When we asked why O&S costs were not reported, officials from six Air Force programs said they did not report O&S costs in the 2010 SAR because they do not fund or track these costs. For example, officials for two programs associated with the C-5 aircraft explained that all O&S fleet costs are the responsibility of another entity, the System Program Manager at Warner Robins Air Logistics Center in Georgia. Program officials for the other two Air Force programs, the B-2 Radar Modernization Program and B-2 Extremely High Frequency Satellite Communications program, told us that these modification programs were expected to reduce O&S costs and they could not input cost reductions into DOD’s Defense Acquisition Management Information Retrieval system, the database that maintains SAR data. In contrast to these modification programs, the Air Force’s C-130 Avionics Modernization Program did report total estimated O&S costs in the 2010 SAR. According to officials, one of the remaining three programs—the Army’s Apache Block IIIB—was not required to report O&S costs in the SAR, as approved by the Defense Acquisition Executive. The other two programs are the Chemical Demilitarization-Assembled Chemical Weapons Alternatives, and the Chemical Demilitarization-U.S. Army Chemical Materials Agency. According to the SAR for each program, O&S costs are reported in other sections of the reports. For example, program officials told us that O&S costs for the Assembled Chemical Weapons Alternatives program are captured in research, development, test, and evaluation costs. According to program officials, the Chemical Demilitarization program is a one-of-a-kind national environmental and safety program that is unlike weapon systems that report SARs. Further, officials said that the two programs have not separately reported any O&S costs since they were designated major defense acquisition programs in 1994. SARs are intended to provide Congress with authoritative program information on the cost, schedule, and performance of major weapon systems, but we found that some programs submitted unreliable O&S cost data. More specifically, our review of SAR reports for the 15 programs in our sample identified inaccurate cost estimates and other errors in SARs submitted in 2007, 2009, and 2010. (As noted earlier, DOD did not submit SARs in 2008.) While some of the program offices told us specific reasons for the errors, others did not provide an explanation. Based on our analysis of O&S cost data reported in the SAR compared with the underlying cost estimates and other information provided by the program offices, we found that 7 of the 15 programs reported inaccurate O&S costs in one or more of the three annual SARs. The F-35 Joint Strike Fighter program office underreported the average cost per flying hour for the aircraft in the 2010 SAR. The average, steady-state O&S cost per flying hour was reported as $16,425 (fiscal year 2002 dollars). Program officials told us that the number of aircraft used in the estimate for the Air Force’s inventory was not accurate and the estimate also did not project for future cost growth above inflation. 
The estimate included approximately 528 extra aircraft, which, when the average cost per flying hour was calculated, resulted in higher total flight hours and a lower average cost per hour. Further, according to the SAR, some of the F-35's O&S costs were intentionally excluded from the estimate to enable comparison with the antecedent system, the F-16 C/D. Costs for support equipment replacement, modifications, and indirect costs were removed from the F-35's cost per flying hour since they were not available for the F-16 C/D. Officials calculated that the revised cost per flying hour for the F-35 was $23,557 (fiscal year 2002 dollars), or 43 percent higher, after including the excluded costs, projecting for future cost growth above inflation, and correcting the number of aircraft. However, they noted that the total O&S life-cycle cost reported in the SAR for the F-35 was accurate because it was calculated separately from the average cost per flying hour. The Navy Multiband Terminal program office underreported estimated life-cycle O&S costs in the 2010 SAR. The program reported $219.1 million in total O&S costs but excluded an additional $591.3 million for externally funded depot-level repairables ($148.4 million) and military personnel ($442.9 million), which were included in a 2010 service cost estimate. Therefore, only 27 percent of the program's estimated total O&S costs were reported in the 2010 SAR. Program officials stated that these costs are not under the control of the program office and should not be reported in the SARs. However, the SAR statute states that full life-cycle costs, including O&S costs, should be reported without regard to funding source or management control. The Air Force Joint Primary Aircraft Training System program office underreported O&S costs in the 2007 and 2009 SARs, both of which were based on a 2001 service cost estimate. The program, which includes the T-6 aircraft and a ground-based training system, reported total O&S costs of $9.4 billion (fiscal year 2002 dollars) in both SARs but excluded $2.1 billion (fiscal year 2002 dollars)—or about 18 percent—of O&S costs for the program's ground-based training system. Program officials have reported the same O&S costs since the annual 2002 SAR. The program, which updated its O&S estimate in 2011, included these costs in the total O&S costs reported in the 2010 SAR. The Army's High Mobility Artillery Rocket System program office overstated O&S costs in the program's 2007, 2009, and 2010 SARs. Although program office estimates were provided to us for the 3 years, the estimates did not match the costs reported in the SARs. The O&S costs reported in 2007 were higher than the estimate by $11.1 million (fiscal year 2003 dollars), and the $988 million (fiscal year 2003 dollars) reported in both 2009 and 2010 was higher than the estimates by about $300 million (fiscal year 2003 dollars), or about 43 percent. Program officials told us that the costs had been reported incorrectly in each year. The JTRS Handheld, Manpack, and Small Form Fit program underreported total O&S costs in the annual 2007 SAR. The SAR stated that the O&S costs had been updated, but the O&S costs were unchanged from prior annual SARs. Program officials also provided us with an estimate that matched the numbers reported in the 2007 SAR. When asked why the costs had not changed, program officials stated that while the costs for procurement and research, development, test, and evaluation were correctly updated in 2007, the O&S costs were not.
They explained that the reported costs of $4.9 billion (fiscal year 2004 dollars) should have been higher by $120 million (fiscal year 2004 dollars), but they did not provide us the estimate on which that higher amount was based. The Air Force’s Navstar GPS program, as noted earlier, did not report a life-cycle cost estimate in the annual SARs from 2007 through 2010. For example, according to the 2010 SAR, the O&S costs reported were based on funding for fiscal years 2008 through 2016. Program officials confirmed that the O&S amounts reported included actual funding for the current year and funding from the Air Force’s budget system for the remaining years. However, even this amount—about $469 million (fiscal year 2000 dollars) in 2010, for example—was significantly understated. According to program officials, the amount reported in the SAR is only 60 percent of the program’s actual requirements of approximately $782 million—a difference of $313 million—and the program has historically been funded to 90 percent of requirements with supplemental funds. However, this was not noted in the SARs. The FBCB2 program underreported total O&S costs in the annual 2007 through 2010 SARs. As explained earlier, reported O&S costs were estimated for the program’s final acquisition milestone, full-rate production, in 2004. In subsequent years, however, the program’s procurement quantities increased and were about 305 percent higher in the 2010 SAR than the amount used to develop the estimate. Further, total O&S costs of $468 million (fiscal year 2005 dollars) reported in the SARs were $129 million less than the $596.2 million estimated in 2004. Officials initially indicated that some of the estimated O&S costs were likely included with the program’s acquisition costs in the SAR, but they were unable to reconcile the costs in the two documents. We also found examples of inaccuracies in other data reported in the O&S cost section of the SARs. For example, the 2010 SAR for the Joint MRAP states that the program’s O&S costs were reviewed by CAPE in 2010, but program officials and prior-year SARs stated that the review actually occurred in 2008. Further, neither CAPE nor the program office was able to provide any record of the 2008 review. As another example, the 2010 SAR for the F-22 indicates that the reported O&S costs were based on a 2004 acquisition decision estimate that was updated with analyses from 2010 to bring the estimate in line with the current approved F-22 production program and operational concepts. However, the O&S costs reported are identical to those reported in the 2009 SAR, which states it was updated based on analyses from 2009. Implementation of the GAO-identified best practices already discussed could improve the reliability of O&S costs reported in the SARs. Together, the best practices work to provide more assurance that the correct information is reported. For example, routinely updating O&S cost estimates—and related SAR data—will likely require more frequent changes to the reported cost data. Therefore, it is less probable that an error or omission will be regularly reported. In addition, as noted earlier, comparing a program’s cost estimate with an independent cost estimate, and explaining any significant differences, could help decision makers monitor the reasonableness of the reported data. 
Finally, tracking O&S costs over time, by presenting the current-year and prior-year program cost estimates and explaining significant differences, would also help to test the reasonableness of reported costs. DOD's reports to Congress on estimated weapon system O&S costs were often inconsistent and sometimes unreliable due to a lack of detailed implementation guidance for reporting these costs. In addition, DOD's process for reviewing the O&S cost sections of the SAR prior to final submission did not provide assurance that the program offices reported costs uniformly, to the extent practicable, and that these reported costs were reliable. In the absence of improvements to the SAR guidance and to DOD's review process, deficiencies in reporting estimated life-cycle O&S costs are likely to continue. DOD's existing implementation guidance collectively provides minimal, and in some areas conflicting, instructions for O&S cost reporting. For example, the guidance does not identify which cost estimate or estimates should be used to report O&S costs when more than one estimate is available. Often multiple cost estimates are prepared by the program office, the service, and CAPE to support acquisition decisions. Further, DOD officials stated that O&S costs reported in the SAR should be updated only at acquisition milestones. Because many years may pass between these milestones, however, reported O&S costs may become outdated, no longer reflecting the status of the current acquisition program. DOD's guidance also provides very little detail on how program offices should discuss assumptions underlying the cost estimate. DOD's draft SAR policy, for example, mentions only several assumptions for consideration, such as operating tempo, expected reliability and maintainability of the system, the maintenance concept, and manning and logistics policies, and does not provide specific examples. In addition, the statutory SAR requirement to report all program costs, regardless of funding source or management control, is not reflected in any of DOD's SAR implementation guidance; it appears only in training course materials on using the Defense Acquisition Management Information Retrieval system. Finally, DOD's draft SAR policy provides conflicting instructions on cost reporting for antecedent systems. The draft policy states that antecedent costs should be reported "whenever those costs have previously been developed." However, in the appendix, the draft guidance states that O&S costs will be reported for antecedent systems "when the replacement system is required to report O&S costs." DOD officials could not explain the reason for this variance in the guidance. While some program offices we contacted indicated that DOD's implementation guidance on reporting O&S costs in the SAR was sufficient, officials from several program offices in our sample indicated that more detailed guidance would be helpful when they prepare their annual SAR submissions. These officials stated that there was minimal guidance provided on what should be included in the O&S narrative and that there needed to be more consistency in SAR O&S reporting. Additionally, they explained that the current guidance is vague, unclear, open to interpretation, and does not provide useful information or examples for how programs should be reporting these costs.
Officials from one program also stated that there is no direction on the comparison of program costs to the antecedent system’s costs, so the approach to making this comparison is open to interpretation. They noted that the guidance does not specify whether the program office should alter the weapon system’s O&S costs to enable a true comparison with the costs for the antecedent system, or whether the weapon system’s O&S costs should be reported without modification. Finally, while several program offices told us that the Defense Acquisition University provides useful training on acquisition reporting in general, they said that the materials should be more readily available as program representatives could not always attend the class and that the O&S section of the SAR was not covered sufficiently. The SAR data submitted by program offices are subject to multiple reviews within the military services and by OSD, but this review process has not provided assurance that O&S costs are reported consistently and reliably. Although our review did not include a full evaluation of DOD’s SAR review process, OSD officials explained that once they receive the SAR submissions, there is a relatively short amount of time to review the SAR O&S data. For example, according to the SAR review schedule, the Office of the Assistant Secretary of Defense (Logistics and Materiel Readiness) usually has about a week to review the O&S cost submissions. We also noted that “SAR review guidance” that is included with the annual memorandum on preparing SARs does not provide additional direction to the program offices on what to include in their O&S cost submissions. In some cases, the annual memorandum is less specific than the draft SAR policy. The deficiencies in DOD’s implementation guidance likely hinder the effective review of SAR O&S cost information at all levels. The department’s emphasis on weapon system O&S costs has been increasing in recent years, but the primary focus continues to be on acquisition costs. According to OSD acquisition officials, the SAR started as—and is still often viewed as—primarily an acquisition report. This perspective was reflected in comments from some program officials. For example, officials at one program office told us that, due to a constantly changing acquisition program, their time was largely spent on estimating acquisition costs. Another program office noted that the focus of the SAR statute was acquisition costs and that O&S costs will vary based on emerging needs. Several other programs indicated that O&S cost estimating was not particularly useful, as their systems had not yet entered into production or sustainment, and actual cost data were either not yet available or could not be obtained by the program office. Finally, other program offices stated that since they do not fund the support of the weapon system, the O&S cost estimates should be done by the organizations responsible for providing this funding. Without more consistent and reliable reporting of estimated weapon system O&S costs, Congress and senior DOD officials may have limited visibility of information needed to effectively oversee the full life-cycle costs associated with weapon system acquisitions. Improvements in the reporting of these data could provide a more complete picture of the potential total financial commitment being made to these systems over a period lasting many decades. 
SAR cost estimates are reported early during acquisition, when there is the greatest chance for managing or reducing future O&S costs. By facilitating inquiries on changes from prior cost estimates and cost drivers, such information could affect acquisition investment decisions and result in tradeoffs that otherwise might not be considered. Furthermore, improvements to SAR reporting would be consistent with a provision in the National Defense Authorization Act for Fiscal Year 2012 directing DOD to take actions aimed at better assessing, managing, and controlling weapon system O&S costs. To improve visibility over estimated life-cycle O&S costs during weapon system acquisition, we recommend that the Secretary of Defense take the following two actions. First, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to revise DOD's guidance for implementing statutory SAR requirements. The revisions, at a minimum, should provide additional detail on the following areas: the explanatory information that should be included in the O&S narrative, including the specific assumptions underlying the cost estimate; the source to be used as the basis for reported O&S cost estimate information, especially when more than one source is available (such as a program office cost estimate, service cost estimate, and CAPE independent cost estimate); a consistent unit of measure for reporting average costs over time by commodity type—or other designated weapon system group—as agreed to by OSD and the services; criteria for identifying an antecedent system and reporting on the results of the cost comparison in the SAR; and reporting O&S costs for major modifications to existing weapon systems. In revising the guidance, the Under Secretary of Defense should incorporate best practices for preparing and presenting cost estimates, including: a comparison of current-year to prior-year O&S cost estimates, the identification of cost drivers that resulted in changes in these estimates, if significant, and the level of detail that should be reported; a comparison of the reported cost estimate with the most recent independent cost estimate, along with an explanation of any significant differences between the two estimates; and the frequency with which O&S costs reported in the SAR should be updated, including guidance on what changes in the program's status should trigger an update. Second, we also recommend that the Secretary of Defense direct that the Under Secretary of Defense for Acquisition, Technology and Logistics, in conjunction with the Secretaries of the Army, the Air Force, and the Navy, evaluate the current review process, identify any weaknesses, and institute corrective actions as needed to provide greater assurance that estimated life-cycle O&S costs included in the SAR reports submitted by program offices consistently follow the implementation guidance, including any revisions to the guidance as described above, and report reliable cost data. As part of this evaluation, DOD should consider whether additional steps are necessary for the department to enhance the emphasis placed on reporting estimated life-cycle O&S costs in the SAR. DOD provided comments on a draft of this report. In its comments, DOD agreed with both of our recommendations. The department's written comments are reprinted in appendix III. DOD also provided technical comments that we have incorporated into this report where appropriate.
In concurring with our first recommendation to revise DOD's guidance for implementing statutory SAR requirements, DOD noted that the focus of the SAR has always been primarily on acquisition rather than sustainment. DOD further stated that Congress, in requiring DOD to add O&S costs to the SAR report, did not intend for DOD to develop O&S costs for each submission but to report the latest available estimate for the program. Our report recognizes that the development of new O&S cost estimates is not required for each annual SAR submission. However, these costs represent a significant proportion of a system's total costs over its life cycle. Moreover, we found that the timing of updates to the O&S costs reported in the SAR varied widely, as DOD has not identified what changes in a program's status—other than established acquisition milestones, which can be many years apart—should trigger such updates. We also continue to believe that DOD needs to clearly identify the source and date of the O&S cost estimate data reported in the SAR. Our recommendations reflect these and other weaknesses in the current reporting of O&S costs. DOD's comments identified actions it plans to take to implement our recommendations. DOD stated that it will expand and update its current guidance for the O&S cost section of the SAR, as contained in the Defense Acquisition Guidebook. DOD plans to make revisions specifically with regard to assumptions and ground rules (e.g., the source and date of the estimate reported); a consistent unit of measure for reporting O&S costs for each type of commodity; identifying, and reporting on, antecedent systems; and reporting O&S costs for major modifications. These planned revisions to the guidance are positive steps. We plan to monitor DOD's actions as part of our recommendation follow-up process. Regarding other revisions to the guidance that we recommend to incorporate best practices for O&S cost reporting, DOD stated that the department is not yet in a position to add a credible O&S cost variance analysis. Although DOD does not define what it means by "cost variance analysis," it is reasonable to expect that such analysis would involve comparing changes from a previous cost estimate and identifying any significant cost drivers. DOD noted that it is implementing new O&S-related requirements from the National Defense Authorization Act for Fiscal Year 2012, as well as previous requirements from the Weapon Systems Acquisition Reform Act of 2009, including requirements that deal with cost variance analysis. DOD stated that it is premature to determine to what extent DOD's implementation of these requirements will affect the reporting of O&S costs in the SAR. With these and other ongoing activities related to the management and control of O&S costs, DOD would prefer to defer these additional reporting requirements for the SAR for now. We are aware that DOD has a number of ongoing activities to improve the management and control of O&S costs and must respond to several new requirements, as stated in DOD's comments. For example, the O&S-related guidance required by the National Defense Authorization Act for Fiscal Year 2012 must be issued within 180 days from the date the Act was enacted, which was December 31, 2011.
If such activities result in improved visibility of O&S costs within the department, and DOD coordinates these activities with efforts to improve O&S cost reporting in the SAR, then we agree that it may be preferable to delay implementation of the best practices we recommend in our report. However, we continue to believe that these best practices, when implemented, will provide better information on the current status and direction of long-term O&S costs and thus can improve congressional oversight of weapon system costs. Therefore, these elements of our recommendation remain valid. DOD also concurred with our second recommendation to evaluate and make any changes needed to strengthen its current process for reviewing O&S cost reporting prior to submission of SARs to Congress. In its comments, DOD cited actions it would take in the short term to improve the review of O&S costs prior to submission of SAR reports at the end of March 2012. DOD stated that the O&S cost section will be given additional emphasis during this reporting period. Subsequently, DOD will convene a joint OSD/DOD component working group that will evaluate the current SAR review process, identify any weaknesses, and institute corrective actions as needed to improve the data quality for the estimated life-cycle O&S costs reported in the SAR. We believe these actions, when implemented, will meet the intent of our recommendation. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Acting Under Secretary of Defense for Acquisition, Technology and Logistics; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (404) 679-1808 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. To determine the extent to which the selected acquisition reports (SAR) provide consistent and reliable operating and support (O&S) cost estimate information that enables effective oversight of major weapon system costs, we reviewed statutory requirements in 10 U.S.C. § 2432 for reporting weapon system life-cycle costs in the SARs, as well as Department of Defense (DOD) implementation guidance for the SAR. We also reviewed DOD guidance for preparing weapon system O&S costs and GAO-identified cost-estimating best practices to identify the scope and nature of cost estimate information needed for effective program management and oversight. We interviewed and obtained documentation from DOD and military service officials responsible for weapon system acquisition, logistics, and cost analysis to understand DOD's approach and process for reporting O&S cost estimates in the SARs. Offices we contacted included the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; the Office of the Director, Acquisition Resources and Analysis; the Office of the Deputy Assistant Secretary of Defense for Materiel Readiness; the Office of the Director, Cost Assessment and Program Evaluation; the Office of the Deputy Assistant Secretary of the Army, Cost and Economics; the Naval Center for Cost Analysis; and the Air Force Cost Analysis Agency. We obtained SARs for all 95 weapon systems that reported a December 2010 SAR.
These reports were contained in the Defense Acquisition Management Information Retrieval system, which is a web-based system used within DOD to collect and maintain SAR information submitted by program offices. We determined that the data in this system accurately reflected information submitted by weapon system program offices and therefore were sufficiently reliable for the purposes of our analysis. After determining that a total of 84 of the 95 weapon systems included O&S costs in their December 2010 SARs, we analyzed the annual SARs that were submitted for these systems from 2005 through 2010. Specifically, we analyzed the SARs to determine the types and scope of explanatory information included in the O&S narrative accompanying the cost estimate data; the source of the O&S cost estimate cited as the basis for the reported costs; the units of measure used to present O&S costs; the frequency with which O&S costs were updated from year to year; and the extent to which O&S costs for antecedent systems were reported. We compared the SARs across each of these categories to determine the extent to which information was reported consistently across all 84 weapon systems. From the population of 84 weapon systems that included O&S cost estimates in the 2010 SARs, we selected a sample of 15 weapon systems for further analysis. We designed the sample to ensure that a range of weapon systems were represented based on commodity type and service responsible for managing the program. We selected three or four weapon systems per service and at least one commodity type within each service for a total sample size of 15. We also examined the distribution of weapon systems' total costs across our sample selection in terms of both dollars and the upper and lower 50 percent of weapon systems that reported O&S costs in the 2010 SAR. We determined that the sample contained an adequate mix of high- and low-dollar weapon systems for our purposes. The results from this nonprobability sample cannot be used to make inferences about all major weapon systems because the sample may not reflect all characteristics of the population. The 15 programs in our sample are shown in table 3. This appendix provides additional information on DOD's guidance that implements the statutory SAR requirements in 10 U.S.C. § 2432. DOD has issued various guidance documents that implement the statutory SAR requirements. DOD Instruction 5000.02, which addresses the operation of the defense acquisition system, includes guidance on SARs that is similar to the basic statutory requirements in 10 U.S.C. § 2432. The guidance, for example, states that SARs should be submitted at program initiation (normally milestone B except for some ship programs) or at the time that the program is designated as a major defense acquisition program. It reiterates that programs shall report annually, with the exception of quarterly reports that are required when acquisition costs increase or schedules slip. Further, the instruction requires the submission of quarterly SARs after the program rebaselining that occurs after a major milestone decision (i.e., milestone C or milestones B and C for some ship programs). Another source of guidance on SAR reporting is the Defense Acquisition Guidebook, which describes discretionary best practices for acquisition professionals to consider while meeting various reporting requirements throughout the acquisition process.
The guidebook contains a section summarizing the statutory requirements for SAR content and submission and reiterates that a full life-cycle analysis of costs should be reported for programs, including each evolutionary increment, as available, and for antecedent programs, if applicable. DOD's draft SAR policy (Under Secretary of Defense for Acquisition, Technology and Logistics, SAR Data Entry Instructions, draft, Nov. 5, 2010) also states that assumptions underlying the estimate should be included. Operating tempo, expected reliability and maintainability of the system, maintenance concept, and manning and logistics policies are provided as examples of the estimate assumptions that should be included in the SAR. Finally, the draft policy states that programs should report the total estimated O&S costs, and estimate assumptions, for an antecedent system if one has been identified and these costs were previously developed for that system. Each year the Under Secretary of Defense for Acquisition, Technology and Logistics issues a memorandum to the military services that provides guidance for preparing the annual SARs, including instructions for programs that have reached milestone B and are required to report O&S costs. For fiscal years 2007, 2009, and 2010, this annual guidance states that programs should report total estimated O&S costs in both constant and then-year dollars, and that the assumptions that formed the basis of the estimate and the date of the estimate should be included. Further, programs should report an average unit of measure (e.g., average annual cost per squadron, average annual cost per system) for the O&S costs of both the current weapon system and the antecedent system in constant dollars. If there is no antecedent system, this should be stated in the narrative of the O&S cost section. If there is an antecedent system but the data are not currently available, the antecedent system should be identified in the narrative, along with a statement that the required data are not available (e.g., "the O&S costs for the antecedent system are not currently available, but will be provided in the next SAR"). Finally, programs should explain in the narrative how the average annual costs were calculated using the estimated O&S cost total. The Defense Acquisition University also provides SAR training on using the Defense Acquisition Management Information Retrieval system. According to Office of the Secretary of Defense (OSD) officials, the primary class, Acquisition Reporting for Major Defense Acquisition Programs and Major Automated Information Systems, is usually offered in January and October. During the 4-day class, participants receive step-by-step instruction on report preparation using the system's web application. The training materials include basic SAR O&S cost reporting information. For example, estimate assumptions should be reported, calculation of average costs from total O&S costs should be provided, and costs should always be updated at major acquisition milestones. The training materials reiterate that costs should include both direct and indirect costs, regardless of funding source or management control. In addition to the contact name above, the following staff members made key contributions to this report: Tom Gosling, Assistant Director; Kristine Hassinger; Susannah Hawthorne; Charles Perdue; Janine Prybyla; William M. Solis; and Erik Wilkins-McKee.
With the nation facing fiscal challenges and the potential for tighter defense budgets, Congress and the Department of Defense (DOD) have placed more attention on controlling the billions of dollars spent annually on weapon system operating and support (O&S) costs. These costs include costs for repair parts, maintenance, and personnel, and account for about 70 percent of the total costs of a weapon system over its life cycle. The selected acquisition report (SAR) is DOD's key recurring status report on the cost, schedule, and performance of major defense acquisition programs and is intended to provide authoritative information for congressional oversight of these programs. Oversight of O&S costs is important because many of the key decisions affecting these life-cycle costs are made during the acquisition process. GAO reviewed weapon system O&S cost estimates that DOD submits in the SAR. Specifically, GAO determined the extent to which the SARs provide consistent and reliable O&S cost estimate information that enables effective oversight of these weapon system costs. To conduct its review, GAO analyzed SAR data for 84 major systems that submitted O&S cost estimates in the 2010 SAR and selected a nonprobability sample of 15 systems for more in-depth review. DOD's reports to Congress on estimated weapon system O&S costs are often inconsistent and sometimes unreliable, limiting visibility needed for effective oversight of these costs. The SAR statute requires that life-cycle cost reporting for major weapon systems be uniform, to the extent practicable, across the department, but GAO found a number of inconsistent practices in how program offices were reporting life-cycle O&S cost estimates in the SAR. Program offices were inconsistent in (1) the explanatory information they included with the cost estimates; (2) the source of the cost estimate they cited as the basis for the reported costs; (3) the unit of measure they used to portray average costs; (4) the frequency with which they updated reported costs; and (5) the reporting of costs for an antecedent system being replaced by the new weapon system. For example, 35 (42 percent) of the 84 programs that reported O&S costs in the 2010 SAR did not cite a source of these data, contrary to DOD's guidance, and 57 (68 percent) of the programs did not report O&S costs for an antecedent system. Also, O&S cost submissions in the SAR did not always incorporate best practices for presenting cost estimates, such as tracking cost changes over time and identifying cost drivers. In addition, 11 systems did not provide O&S cost estimates in the 2010 SAR. Although SARs are intended to provide Congress with authoritative program information on major weapon systems, 7 of the 15 sample programs GAO reviewed submitted unreliable O&S cost estimate data in the 2007, 2009, or 2010 SARs. For example, an Air Force program underreported O&S costs by $2.1 billion (fiscal year 2002 dollars), or 18 percent. While some of the program offices did not provide an explanation for the errors in the submitted data, others cited specific reasons. For example, one Navy program office underreported O&S costs in the SAR and explained that it excluded certain costs that were not under its control, such as externally funded spare parts and military personnel. However, excluding such costs is contrary to the SAR statute. An Air Force program reported current and projected funding for the program rather than estimated life-cycle O&S costs.
This practice also had the effect of underreporting these costs. DOD’s reports to Congress on estimated weapon system O&S costs were often inconsistent and sometimes unreliable due to a lack of (1) detailed implementation guidance for reporting these costs and (2) an effective process for reviewing the O&S cost sections of the SAR before final submission to Congress. DOD’s guidance collectively provides minimal instructions for O&S cost reporting. The guidance also does not incorporate some of the best practices GAO has identified for presenting cost estimates. Further, although the SAR data submitted by program offices are subject to multiple reviews within the military services and by the Office of the Secretary of Defense, this review process has not provided assurance that O&S costs are reported consistently and reliably. In the absence of improvements to the SAR guidance and to the review process, deficiencies in reporting O&S costs are likely to continue. Improved reporting of O&S costs in the SAR could help to place more emphasis on assessing, managing, and controlling long-term weapon system O&S costs. To enhance visibility of weapon system O&S costs during acquisition, GAO recommends that DOD improve its guidance to program offices on cost reporting and also improve its process for reviewing these costs prior to final submission of the SAR to Congress. DOD concurred with GAO’s recommendations.
The South Florida ecosystem encompasses a broad range of natural, urban, and agricultural areas surrounding the remnant Everglades. Before human intervention, freshwater in the ecosystem flowed south from Lake Okeechobee to Florida Bay in a broad, slow-moving sheet, creating the mix of wetlands that form the ecosystem. These wetlands, interspersed with dry areas, created habitat for abundant wildlife, fish, and birds. The South Florida ecosystem is also home to 6.5 million people and supports a large agricultural, tourist, and industrial economy. To facilitate development in the area, in 1948, Congress authorized the U.S. Army Corps of Engineers to build the Central and Southern Florida Project—a system of more than 1,700 miles of canals and levees and 16 major pump stations—to prevent flooding and intrusion of saltwater into freshwater aquifers on the Atlantic coast. The engineering changes that resulted from the project, and subsequent agricultural, industrial, and urban development, reduced the Everglades ecosystem to about half its original size, causing detrimental effects to fish, bird, and other wildlife habitats and to water quality. Figure 1 shows the historic and current flows of the Everglades ecosystem as well as the proposed restored flow. Efforts to reverse the detrimental effects of development on the ecosystem led to the formal establishment of the Task Force, authorized by the Water Resources Development Act (WRDA) of 1996. The Task Force, charged with coordinating and facilitating the restoration of the ecosystem, established three overall goals:
Get the water right: restore more natural hydrologic functions to the ecosystem while providing adequate water supplies and flood control. The goal is to deliver the right amount of water, of the right quality, to the right places at the right times.
Restore, protect, and preserve the natural system: restore lost and altered habitats and change current land use patterns. Growth and development have displaced and disconnected natural habitats, and the spread of invasive species has caused sharp declines in native plant and animal populations.
Foster the compatibility of the built and natural systems: find development patterns that are complementary to ecosystem restoration and to a restored natural system.
Figure 2 shows the relationship of the agencies participating in restoration, the Task Force, and the three restoration goals. Because of the complexity of the ecosystem and efforts underway to restore it, and the urgency to begin the long-term ecosystem restoration effort, not all of the scientific information that is needed is available to make restoration decisions. As a result, scientists will continually need to develop information and restoration decision makers will continually need to review it. According to the Task Force, scientists participating in restoration are expected to identify and determine what information is needed to fill gaps in scientific knowledge critical to meeting restoration objectives and provide managers with updated scientific information for critical restoration decisions. Generally, decisions about restoration projects and plans have been—and will continue to be—made by the agencies participating in the restoration initiative. To provide agency managers and the Task Force with updated scientific information, the Task Force has endorsed adaptive management, a process that requires key tools, such as models, continued research, and monitoring plans. 
Federal and state agencies spent $576 million from fiscal years 1993 through 2002 to conduct mission-related scientific research, monitoring, and assessment in support of the restoration of the South Florida ecosystem. Eight federal departments and agencies spent $273 million for scientific activities, with the Department of the Interior spending $139 million (about half) of the funds. The level of federal expenditures, which increased by over 50 percent in 1997, has since remained relatively constant. The South Florida Water Management District—the state agency most heavily involved in scientific activities for restoration—spent $303 million from 1993 through 2002. The District’s expenditures have increased steadily since 1993, with significant increases in 2000 and 2002. Figure 3 shows the total federal and state expenditures for scientific activities related to restoration over the last decade. Eight federal agencies are involved in scientific activities for the restoration: the Department of the Interior’s U.S. Geological Survey, National Park Service, Fish and Wildlife Service, and Bureau of Indian Affairs; the Department of Commerce’s National Oceanic and Atmospheric Administration; the Department of Agriculture’s Agricultural Research Service; the U.S. Army Corps of Engineers; and the Environmental Protection Agency. Within the Department of the Interior, four agencies spent $139 million on scientific activities. The U.S. Geological Survey spent over half of the Interior funding, or $77 million, primarily on its Place-Based Studies Program, which provides information, data, and models to other agencies to support decisions for ecosystem restoration and management. The National Park Service spent about $48 million for the Critical Ecosystem Studies Initiative (CESI), a program begun in 1997 to accelerate research to provide scientific information for the restoration initiative. The National Park Service used CESI funding to support research (1) to characterize the ecosystem’s predrainage and current conditions and (2) to identify indicators for monitoring the success of restoration in Everglades National Park, other parks, and public lands and to develop models and tools to assess the effects of water projects on these natural lands. Of the remaining Interior funding, the Fish and Wildlife Service and the Bureau of Indian Affairs spent $10 million and $3 million, respectively. Four agencies spent the other federal funds—$134 million. The Corps of Engineers and the National Oceanic and Atmospheric Administration spent approximately $37 million each, primarily on research activities. Two other federal agencies—the Agricultural Research Service and the Environmental Protection Agency—spent the remaining $60 million in federal funds. In addition to the $273 million spent by federal agencies, the State of Florida’s South Florida Water Management District provided $303 million for such activities from 1993 to 2002. The District spent much of its funding on scientific activities related to water projects in line with its major responsibility to manage and operate the Central and Southern Florida Project and water resources in the ecosystem. With these federal and state expenditures, scientists have made some progress in developing scientific information and adaptive management tools. 
In particular, scientists now better understand the historic and current hydrological conditions in the ecosystem and have developed models that allow them to forecast the effects of water management alternatives on the ecosystem. Scientists also made significant progress in developing information on the sources, transformations, and fate of mercury—a contaminant that affects water quality and the health of birds, animals, and humans—in the South Florida ecosystem. Specifically, scientists determined that atmospheric sources account for greater than 95 percent of the mercury that is added to the ecosystem. In addition, scientists made progress in developing (1) a method that uses a natural predator to control Melaleuca, an invasive species, and (2) techniques to reduce high levels of nutrients—primarily phosphorus—in the ecosystem. While scientists made progress in developing scientific information, they also identified significant gaps in scientific information and adaptive management tools that, if not addressed in the near future, will hinder the overall success of the restoration effort. We reviewed 10 critical restoration projects and plans and discussed the scientific information needs remaining for these projects with scientists and project managers. On the basis of our review, we identified three types of gaps in scientific information: (1) gaps that threaten systemwide restoration if they are not addressed; (2) gaps that threaten the success of particular restoration projects if they are not addressed; and (3) gaps in information and tools that will prevent restoration officials from using adaptive management to pursue restoration goals. An example of a gap that could hinder systemwide restoration is information on contaminants, such as fertilizers and pesticides. Scientists are concerned that the heavy use of fertilizers and pesticides—which are transported by water and soil and are deposited in sediments—near natural areas in South Florida increases the discharge of chemical compounds into these areas. Contaminants are absorbed by organisms such as aquatic insects, other invertebrates, and fish that live in the water and sediment, affecting the survival and reproduction of these organisms and those that feed on them. Scientists need information on the amount of contaminants that could be discharged into the environment, the amounts that persist in water and sediment, and the risks faced by organisms living in areas with contaminants—even low levels of contaminants on a long-term basis. If this information is not available, scientists cannot determine whether contaminants harm fish and other organisms or whether the redistribution of water will introduce potentially harmful contaminants to parts of the ecosystem that are relatively undisturbed. An example of a gap that could hinder the progress of a specific project is information needed to complete the Modified Water Delivery project, which has been ongoing for many years and has been delayed primarily because of land acquisition conflicts. The Modified Water Delivery project and a related project in the Comprehensive Everglades Restoration Plan are expected, among other purposes, to increase the amount of water running through the eastern part of Everglades National Park and restore the “ridge and slough” habitat. However, scientists identified the need for continued work to understand the role of flowing water in the creation of ridge and slough habitat. 
If the information is not developed, the project designs may be delayed or inadequate, forcing scientists and project managers to spend time redesigning projects or making unnecessary modifications to those already built. An example of a gap in key tools needed for adaptive management is the lack of mathematical models that would allow scientists to simulate aspects of the ecosystem and better understand how the ecosystem responds to restoration actions. Scientists identified the need for several important models, including models for Florida Bay, Biscayne Bay, and systemwide vegetation. Without such tools, the process of adaptive management will be hindered because scientists and managers will be less able to monitor and assess key indicators of restoration and evaluate the effects created by particular restoration actions. The Water Resources Development Act of 1996 requires the Task Force to coordinate scientific research for South Florida restoration; however, the Task Force has not established an effective means to do so, diminishing assurance that key scientific information will be developed and available to fill gaps and support restoration decisions. In 1997, the Task Force created the Science Coordination Team (SCT) to coordinate these scientific activities. The SCT’s main responsibilities are planning scientific activities for restoration, ensuring the development of a monitoring plan, synthesizing scientific information, and conducting science conferences and workshops on major issues such as invasive species and sustainable agriculture. As the restoration has proceeded, other groups have been created to manage scientific activities and information for particular programs or issues, but these groups are more narrowly focused than the SCT. These groups and a more detailed discussion of their individual purposes appear in appendix I. Although the Task Force created the SCT as a science coordination group, it established the group with several organizational limitations, contributing to the SCT’s inability to accomplish several important functions. Specifically, the Task Force did not:
Provide specific planning requirements, including requirements for a science plan or comprehensive monitoring plan. A science plan would (1) facilitate coordination of the multiple agency science plans and programs, (2) identify key gaps in scientific information and tools, (3) prioritize scientific activities needed to fill such gaps, and (4) recommend agencies with expertise to fund and conduct work to fill these gaps. In addition, a comprehensive monitoring plan would support the evaluation of restoration activities. This plan would identify measures and indicators of a restored ecosystem—for all three goals of restoration—and would provide scientists with a key tool to implement adaptive management.
Establish processes that (1) provide management input for science planning and (2) identify and prioritize scientific issues for the SCT to address in its synthesis reports. Scientists and managers have both noted the need for an effective process that allows the Task Force to identify significant restoration management issues or questions that scientific activities need to address. In addition, a process used to select issues for synthesis reports needs to be transparent to members of the SCT and the Task Force and needs to facilitate the provision of a credible list of issues that the SCT needs to address in its synthesis reports. 
One way that other scientific groups involved in restoration efforts, such as the Chesapeake Bay effort, address transparency and credibility is to use an advisory board to provide an independent review of scientific plans, reports, and issues.
Provide resources for carrying out its responsibilities. Only two agencies—the U.S. Geological Survey and the South Florida Water Management District—have allocated some staff time for SCT duties. In comparison, leaders of other large ecosystem restoration efforts—the San Francisco Bay and Chesapeake Bay area efforts—have recognized that significant resources are required to coordinate science for such efforts. These scientists and managers stated that their coordination groups have full-time leadership (an executive director or chief scientist), several full-time staff to coordinate agencies’ science efforts and develop plans and reports, and administrative staff to support these functions.
To improve the coordination of scientific activities for the South Florida ecosystem restoration initiative, we recommended in our report—released today—that the Secretary of the Interior, as chair of the Task Force, take several actions to strengthen the SCT. First, the plans and documents to be produced by the SCT should be specified, along with time frames for completing them. Second, a process should be established to provide Task Force input into planning for scientific activities. Third, a process—such as independent advisory board review—should be established to prioritize the issues requiring synthesis of scientific information. Finally, an assessment of the SCT’s resource needs should be made and sufficient staff resources should be allocated to SCT efforts. In commenting on a draft of our report, the Department of the Interior agreed with the premises of our report that scientific activities for restoration need to be better coordinated and the SCT’s responsibilities need to be clarified. However, Interior noted that the Task Force itself will ultimately need to agree on the actions necessary to strengthen the SCT. Although Interior agreed to coordinate the comments of the Task Force agencies, it could not do so because this would require the public disclosure of the draft report. Mr. Chairman, this concludes my formal statement. If you or other Members of the Subcommittee have any questions, I will be pleased to answer them. For further information on this testimony, please contact Barry T. Hill at (202) 512-3841. Individuals making key contributions to this testimony included Susan Iott, Chet Janik, Beverly Peterson, and Shelby Stephan. The South Florida Ecosystem Restoration Task Force (Task Force) and participating agencies have created several groups with responsibilities for various scientific activities. One of these teams—the Science Coordination Team (SCT) created by the Task Force—is the only group responsible for coordinating restoration science activities that relate to all three of the Task Force’s restoration goals (see fig. 4). Other teams that have been created with responsibility for scientific activities include the Restoration Coordination and Verification (RECOVER) program teams, the Multi-Species Ecosystem Recovery Implementation Team, the Noxious Exotic Weed Task Team, and the Committee on Restoration of the Greater Everglades Ecosystem (CROGEE). As shown in figure 4, each of these teams is responsible for scientific activities related to specific aspects of restoration.
Restoration of the South Florida ecosystem is a complex, long-term federal and state undertaking that requires the development of extensive scientific information. GAO was asked to report on the funds spent on scientific activities for restoration, the gaps that exist in scientific information, and the extent to which scientific activities are being coordinated. From fiscal years 1993 through 2002, eight federal agencies and one state agency collectively spent $576 million to conduct mission-related scientific research, monitoring, and assessment in support of the restoration of the South Florida ecosystem. With this funding, which was almost evenly split between the federal agencies and the state agency, scientists have made progress in developing information--including information on the past, present, and future flow of water in the ecosystem--for restoration. While some scientific information has been obtained and understanding of the ecosystem improved, key gaps remain in scientific information needed for restoration. If not addressed quickly, these gaps could hinder the success of restoration. One particularly important gap is the lack of information regarding the amount and risk of contaminants, such as fertilizers and pesticides, in water and sediment throughout the ecosystem. The South Florida Ecosystem Restoration Task Force--comprised of federal, state, local, and tribal entities--is responsible for coordinating the South Florida ecosystem restoration initiative. The Task Force is also responsible for coordinating scientific activities for restoration, but has yet to establish an effective means of doing so. In 1997, it created the Science Coordination Team (SCT) to coordinate the science activities of the many agencies participating in restoration. However, the Task Force did not give the SCT clear direction to carry out its responsibilities in support of the Task Force and restoration. Furthermore, unlike the full-time science coordinating bodies created for other restoration efforts, the SCT functions as a voluntary group with no full-time and few part-time staff. Without an effective means to coordinate restoration, the Task Force cannot ensure that restoration decisions are based on sound scientific information.
FPS assesses risk and recommends countermeasures to GSA and tenant agencies; however, FPS’s ability to use risk management to influence the allocation of resources is limited because resource allocation decisions are the responsibility of GSA and tenant agencies—in the form of Facility Security Committees (FSC)—which have at times been unwilling to fund the countermeasures FPS recommends. We have found that, under the current risk management approach, the security equipment that FPS recommends and is responsible for acquiring, installing, and maintaining may not be implemented for several reasons, including the following: tenant agencies may not have the security expertise needed to make risk-based decisions, tenant agencies may find the associated costs prohibitive, the timing of the assessment process may be inconsistent with tenant agencies’ budget cycles, consensus may be difficult to build among multiple tenant agencies, or tenant agencies may lack a complete understanding of why recommended countermeasures are necessary because they do not receive security assessments in their entirety. For example, in August 2007, FPS recommended a security equipment countermeasure—the upgrade of a surveillance system shared by two high-security locations—that, according to FPS officials, would cost around $650,000. While members of one FSC told us they approved spending between $350,000 and $375,000 to fund their agencies’ share of the countermeasure, they said that the FSC of the other location would not approve funding; therefore, FPS could not upgrade the system as it had recommended. In November 2008, FPS officials told us that they were moving ahead with the project by drawing on unexpended revenues from the two locations’ building-specific fees as well as the funding that was approved by one of the FSCs. Furthermore, in May 2009, FPS officials told us that all cameras had been repaired, all monitoring and recording devices had been replaced, and the two FSCs had approved additional upgrades, which FPS was implementing. As we reported in June 2008, we have found other instances in which recommended security countermeasures were not implemented at some of the buildings we visited because FSC members could not agree on which countermeasures to implement or were unable to obtain funding from their agencies. Currently, no guidelines exist outlining requirements for FSCs, including their composition and relationship with FPS. The Interagency Security Committee (ISC), which is chaired within NPPD, recently began to develop guidance for FSC operations, which may address some of these issues. The ISC, however, has yet to announce an anticipated date for issuance of this guidance. Compounding this situation, FPS takes a building-by-building approach to risk management, using an outdated risk assessment tool to create facility security assessments (FSAs), rather than taking a more comprehensive, strategic approach and assessing risks among all buildings in GSA’s inventory and recommending countermeasure priorities to GSA and tenant agencies. As a result, the current approach provides less assurance that the most critical risks at federal buildings across the country are being prioritized and mitigated. Also, GSA and tenant agencies have concerns about the quality and timeliness of FPS’s risk assessment services and are taking steps to obtain their own risk assessments. 
For example, GSA officials told us they have had difficulties receiving timely risk assessments from FPS for space GSA is considering leasing. These risk assessments must be completed before GSA can take possession of the property and lease it to tenant agencies. An inefficient risk assessment process for new lease projects can add to costs for GSA and create problems for both GSA and tenant agencies that have been planning for a move. Therefore, GSA is updating a risk assessment tool that it began developing in 1998, but has not recently used, to better ensure the timeliness and comprehensiveness of these risk assessments. GSA officials told us that, in the future, they may use this tool for other physical security activities, such as conducting other types of risk assessments and determining security countermeasures for new facilities. Additionally, although tenant agencies have typically taken responsibility for assessing risk and securing the interior of their buildings, assessing exterior risks requires additional expertise and resources. This is an inefficient approach considering that tenant agencies are paying FPS to assess building security. While FPS is currently operating at its congressionally mandated staffing level of no fewer than 1,200 full-time employees, FPS has experienced difficulty determining its optimal staffing level to protect federal facilities. Prior to this mandate, FPS’s staff had been steadily declining and, as a result, critical law enforcement services were reduced or eliminated. For example, FPS has largely eliminated its use of proactive patrol to prevent or detect criminal violations at many GSA buildings. According to some FPS officials at regions we visited, not providing proactive patrol has limited its law enforcement personnel to a reactive force. Additionally, officials stated that, in the past, proactive patrol permitted its police officers and inspectors to identify and apprehend individuals who were surveilling GSA buildings. In contrast, when FPS is not able to patrol federal buildings, there is increased potential for illegal entry and other criminal activity. In one city we visited, a deceased individual had been found in a vacant GSA facility that was not regularly patrolled by FPS. FPS officials stated that the deceased individual had been inside the building for approximately 3 months. In addition to the elimination of proactive patrol, many FPS regions have reduced their hours of operation for providing law enforcement services in multiple locations, which has resulted in a lack of coverage when most federal employees are either entering or leaving federal buildings or on weekends when some facilities remain open to the public. Moreover, some FPS police officers and inspectors also said that reducing hours has increased their response times in some locations by as much as a few hours to a couple of days, depending on the location of the incident. The decrease in FPS’s duty hours has also jeopardized police officer and inspector safety, as well as building security. Some inspectors said that they are frequently in dangerous situations without any FPS backup because many regions have reduced their hours of operations and overtime. In 2008, FPS transitioned to an inspector-based workforce—eliminating the police officer position—and is relying primarily on FPS inspectors for both law enforcement and physical security activities, which has hampered its ability to protect federal facilities. 
FPS believes that an inspector-based workforce approach ensures that its staff has the right mix of technical skills and training needed to accomplish its mission. However, FPS’s ability to provide law enforcement services under its inspector-based workforce approach may be diminished because FPS relies on its inspectors to provide both law enforcement and physical security services simultaneously. This approach has contributed to a number of issues. For example, FPS faces difficulty ensuring the quality and timeliness of FSAs and adequate oversight of its 15,000 contract security guards. In addition, in our 2008 report, we found that representatives of several local law enforcement agencies we visited were unaware of FPS’s transition to an inspector-based workforce and stated that their agencies did not have the capacity to take on the additional job of responding to incidents at federal facilities. In April 2007, a DHS official and several FPS inspectors testified before Congress that FPS’s inspector-based workforce approach requires increased reliance on state and local law enforcement agencies for assistance with crime and other incidents at GSA facilities and that FPS would seek to enter into memorandums of agreement (MOA) with local law enforcement agencies. However, according to FPS’s Director, the agency decided not to pursue MOAs with local law enforcement officials, in part because of reluctance on the part of local law enforcement officials to sign such MOAs. In addition, FPS believes that the MOAs are not necessary because 96 percent of the properties in its inventory are listed as concurrent jurisdiction facilities where both federal and state governments have jurisdiction over the property. Nevertheless, these MOAs would clarify roles and responsibilities of local law enforcement agencies when responding to crime or other incidents. FPS does not fully ensure that its contract security guards have the training and certifications required to be deployed to a GSA building. FPS maintains a contract security guard force of about 15,000 guards who are primarily responsible for controlling access to federal facilities by (1) checking the identification of government employees, as well as members of the public who work in and visit federal facilities and (2) operating security equipment, including X-ray machines and magnetometers, to screen for prohibited materials such as firearms, knives, explosives, or items intended to be used to fabricate an explosive or incendiary device. We reported in July 2009 that 411 of the 663 guards (62 percent) employed by seven FPS contractors and deployed to federal facilities had at least one expired certification, including a declaration that the guards have not been convicted of domestic violence, which makes them ineligible to carry firearms. We also reported in July 2009 that FPS guards had not received adequate training to conduct their responsibilities. FPS requires that all prospective guards complete about 128 hours of training, including 16 hours of X-ray and magnetometer training. However, in one region, FPS has not provided the X-ray or magnetometer training to its 1,500 guards since 2004. Nonetheless, these guards are assigned to posts at GSA buildings. X-ray training is critical because guards control access points at buildings. In addition, we found that some guards were not provided building-specific training, such as what actions to take during a building evacuation or a building emergency. 
This lack of training may have contributed to several incidents where guards neglected their assigned responsibilities. Following are some examples: at a level IV facility, the guards did not follow evacuation procedures and left two access points unattended, thereby leaving the facility vulnerable; at a level IV facility, the guard allowed employees to enter the building while an incident involving suspicious packages was being investigated; and at a level III facility, the guard allowed employees to access the area affected by a suspicious package, an area that was required to be evacuated. We also found that FPS has limited assurance that its guards are complying with post orders. In July 2009, we reported that FPS does not have specific national guidance on when and how guard inspections should be performed. Consequently, inspections of guard posts in 6 of the 11 regions we visited were inconsistent and varied in quality. We also found that guard inspections in the 6 regions we visited are typically completed by FPS during regular business hours and in locations where FPS has a field office, and seldom at night, on weekends, or in nonmetropolitan areas. For example, in 2008, tenants in a level IV federal facility in a nonmetropolitan area complained to a GSA property manager that they had not seen FPS in over 2 years, there was no management of their guards, and the number of incidents at their facility was increasing. GSA officials contacted FPS officials and requested that FPS send inspectors to the facility to address the problems. Most guards are also stationed at fixed posts that they are not permitted to leave, which can impact their response to incidents. For example, we interviewed over 50 guards and asked them whether they would assist an FPS inspector chasing a handcuffed individual escaping from a federal facility. The guards’ responses varied, and some guards stated they would likely do nothing and stay at their posts because they feared being fired for leaving. Other guards also told us that they would not intervene because of the threat of a liability suit for use of force and did not want to risk losing their jobs. Additionally, guards do not have arrest authority, although contract guards do have authority to detain individuals. However, according to some regional officials, contract guards do not exercise their detention authority, again because of liability concerns. We found that GSA—the owner and lessee of many FPS-protected facilities—has not been satisfied with the level of service FPS has provided since FPS transferred to DHS. For example, according to GSA officials, FPS has not been responsive and timely in providing assessments for new leases. GSA officials in one region told us that the quality of the assessments differs depending on the individual conducting the assessment. One official added that different inspectors will conduct assessments for the same building so there is rarely consistency from year to year, and often inspectors do not seem to be able to fully explain the countermeasures that they are recommending. We believe that FPS and GSA’s information sharing and coordination challenges are primarily a result of not finalizing a new MOA that formalizes their roles and responsibilities. According to GSA officials, the two agencies began meeting in November 2009 to work through the MOA section by section; as of early March 2010, they had held four working group sessions and were anticipating an initial agreed-upon draft in late spring 2010. 
In the absence of a clearly defined and enforced MOA, FPS officials told us they feel they are limited in their ability to protect GSA properties. Additionally, in 2009, we reported that tenant agencies have mixed views about some of the services they pay FPS to provide. For example, according to our generalizable survey of tenant agencies:
About 82 percent of FPS’s customers indicated they do not use FPS as their primary law enforcement agency in emergency situations and said they primarily rely on other agencies such as local law enforcement, the U.S. Marshals Service, or the Federal Bureau of Investigation; 18 percent rely on FPS.
About one-third of FPS’s customers indicated that they were satisfied with FPS’s level of communication, one-third were neutral or dissatisfied, and the remaining one-third could not comment on how satisfied or dissatisfied they were with FPS’s level of communication on various topics, including building security assessments, threats to their facility, and security guidance. This response suggests that the division of roles and responsibilities between FPS and its customers is unclear.
Our survey also suggests that this lack of clarity is partly due to customers having little or no interaction with FPS officers. Examples are as follows: A respondent in a leased facility commented that FPS has very limited resources, and the resources that are available are assigned to the primary federally owned building in the region. A respondent remembered only one visit from an FPS officer in the last 12 years. Over the past 5 years, we have conducted a body of work reviewing the operations of FPS and its ability to adequately protect federal facilities, and we have made numerous recommendations to address these challenges. For example, we recommended that FPS improve its long-term human capital planning, clarify the roles and responsibilities of local law enforcement agencies in responding to incidents at GSA facilities, develop and implement performance measures for various aspects of its operations, and improve its data collection and quality across its operations. While FPS has generally agreed with all of our recommendations, it has not completed many related corrective actions. At the request of Congress, we are in the process of evaluating some of FPS’s most recent actions. For example, FPS is developing the Risk Assessment and Management Program (RAMP), which could enhance its approach to assessing risk, managing human capital, and measuring performance. With regard to improving the effectiveness of FPS’s risk management approach and the quality of FSAs, FPS believes RAMP will provide inspectors with the information needed to make more informed and defensible recommendations for security countermeasures. FPS also anticipates that RAMP will allow inspectors to obtain information from one electronic source, generate reports automatically, track selected countermeasures throughout their life cycle, and address some concerns about the subjectivity inherent in FSAs. In response to our July 2009 testimony, FPS took a number of immediate actions with respect to contract guard management. For example, the Director of FPS instructed Regional Directors to accelerate the implementation of FPS’s requirement that two guard posts at Level IV facilities be inspected weekly. FPS also required more X-ray and magnetometer training for inspectors and guards. 
To improve its coordination with GSA, the FPS Director and the Director of GSA’s Public Buildings Service Building Security and Policy Division participate in an ISC executive steering committee, which sets the committee’s priorities and agendas for ISC’s quarterly meetings. Additionally, FPS and GSA have established an Executive Advisory Council to enhance the coordination and communication of security strategies, policies, guidance, and activities with tenant agencies in GSA buildings. This council could enhance communication and coordination between FPS and GSA, and provide a vehicle for FPS, GSA, and tenant agencies to work together to identify common problems and devise solutions. We plan to provide Congress with our final reports on FPS’s oversight of its contract guard program and our other ongoing FPS work later this year. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the committee may have at this time. For further information on this testimony, please contact me at (202) 512-2834 or by e-mail at goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tammy Conquest, Assistant Director; Tida Barakat; Jonathan Carver; Delwen Jones; and Susan Michal-Smith. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Recent events, including last month's attack on Internal Revenue Service offices in Texas and the January 2010 shooting in the lobby of a Nevada federal courthouse, demonstrate the continued vulnerability of federal facilities and of the federal employees who occupy them. These events also highlight the continued challenges involved in protecting federal real property and reiterate the importance of protecting the over 1 million government employees, as well as members of the public, who work in and visit the nearly 9,000 federal facilities. This testimony is based on past GAO reports and testimonies and discusses challenges the Federal Protective Service (FPS) faces in protecting federal facilities and tenant agencies' perspective of FPS's services. To perform this work, GAO visited a number of federal facilities, surveyed tenant agencies, analyzed documents, and interviewed officials from several federal agencies. Over the past 5 years GAO has reported that FPS faces a number of operational challenges protecting federal facilities, including: (1) FPS's ability to manage risk across federal facilities and implement security countermeasures is limited. FPS assesses risk and recommends countermeasures to the General Services Administration (GSA) and its tenant agencies; however, decisions to implement these countermeasures are the responsibility of GSA and tenant agencies, which have at times been unwilling to fund the countermeasures. Additionally, FPS takes a building-by-building approach to risk management, rather than taking a more comprehensive, strategic approach and assessing risks among all buildings in GSA's inventory and recommending countermeasure priorities to GSA and tenant agencies. (2) FPS has experienced difficulty ensuring that it has sufficient staff and its inspector-based workforce approach raises questions about protection of federal facilities. While FPS is currently operating at its congressionally mandated staffing level of no fewer than 1,200 full-time employees, FPS has experienced difficulty determining its optimal staffing level to protect federal facilities. Additionally, until recently FPS's staff had been steadily declining and, as a result, critical law enforcement services were reduced or eliminated. (3) FPS does not fully ensure that its contract security guards have the training and certifications required to be deployed to a federal facility. GAO found that FPS guards had not received adequate training to conduct their responsibilities. Specifically, some guards were not provided building-specific training, such as what actions to take during a building evacuation or a building emergency. This lack of training may have contributed to several incidents where guards neglected their assigned responsibilities. GSA has not been satisfied with FPS's performance, and some tenant agencies are unclear on FPS's role in protecting federal facilities. According to GSA, FPS has not been responsive and timely in providing security assessments for new leases. About one-third of FPS's customers could not comment on FPS's level of communication on various topics including security assessments, a response that suggests that the division of roles and responsibilities between FPS and its customers is unclear. FPS is taking some steps to better protect federal facilities. For example, FPS is developing a new risk assessment program and has recently focused on improving oversight of its contract guard program.
Future A2/AD challenges are part of a security environment that will be characterized by increasing complexity, uncertainty, and rapid change, according to DOD. Further, national security challenges will continue to arise from ongoing concerns such as violent extremism, the proliferation of weapons of mass destruction, resource competition, and the rise of modern competitor states, among others. These concerns, according to DOD, combined with the proliferation of advanced technologies; the increasing importance of space and cyberspace; and the ubiquity of digital networks, including social media, will make the future security environment less predictable, more complex, and potentially more dangerous than it is today. The JOAC notes that challenges to operational access are not new but that three trends promise to significantly complicate DOD’s ability to establish operational access. According to the JOAC, the three trends are:
Technology Improvement and Proliferation: The first important trend is the dramatic improvement and proliferation of weapons and other technologies capable of denying access or freedom of action within an operational area. Specifically, an increasing number of state and nonstate actors are developing or obtaining weapons of increasing range and accuracy.
Space and Cyberspace Emergence: The second and related trend is the emergence of space and cyberspace as increasingly important and contested domains. According to the JOAC, the U.S. military will continue to derive great benefit from its space and cyberspace capabilities, but potential adversaries understand that and are increasingly targeting those capabilities. Operating in the space and cyberspace domains is also attractive to potential adversaries because actions in those domains are often difficult to attribute.
Posture Changes: The third trend is that the change in U.S. overseas defense posture complicates the U.S. ability to obtain operational access. Specifically, DOD has reduced the number of overseas facilities and number of deployed forces, meaning that future operations will likely require it to deploy over longer distances.
According to the JOAC, the effect of these three trends is that potential adversaries who may have once perceived that they could not stop U.S. forces from deploying into an operational area are now adopting A2/AD strategies. Figure 1 provides examples of anti-access and area denial capabilities. The JOAC describes A2/AD challenges in the context of an adversary’s strategy rather than a list of technical capabilities that need to be overcome. In general, the intent of an adversary that adopts an A2/AD strategy is to convince and, if necessary and possible, compel the United States to keep out of its affairs. At the most sophisticated level, an A2/AD strategy is not a sequential series of actions using specific military capabilities but rather an integrated and adaptive campaign using all levers of national power and influence before, during, and after any actual military conflict. Critical elements of an A2/AD strategy include keeping U.S. forces as far away as possible and imposing steeper costs on the United States than it is willing to bear. Militarily, an A2/AD environment is characterized by sophisticated adversaries using asymmetric capabilities, such as electronic and cyber warfare, space capabilities, advanced air defenses, missiles, and mines, according to DOD. 
The advanced weapons and technologies are characterized by their increasing precision and range, and are often affordable and increasingly proliferated. Adversaries could range from a high-end peer state that has integrated a wide range of domestically produced advanced capabilities to states, including failed or failing states, adopting a hybrid strategy that includes regular and irregular forces and a number of sophisticated weapons and technology developed at home or acquired abroad. Even nonstate actors could obtain some A2/AD capabilities, such as guided anti-ship missiles and cyber attack tools, according to DOD. Figure 2 depicts the range of A2/AD challenges. DOD has increasingly focused over the past few years on the operational access challenges it may face in the future, although it has recognized A2/AD challenges for well over a decade. For example, projecting and sustaining U.S. forces in distant A2/AD environments and defeating A2/AD threats was one of six operational goals identified in the 2001 Quadrennial Defense Review (QDR). However, DOD’s focus over the subsequent decade was on operations in Afghanistan and Iraq. As those operations began to wind down, DOD began to reemphasize the need to be able to overcome challenges to operational access. The 2012 Defense Strategic Guidance was intended to transition the department from an emphasis on current operations to preparing for future challenges, including helping guide decisions regarding the size and shape of the future force in a more fiscally constrained environment. In the guidance, the Secretary of Defense established projecting power despite A2/AD challenges as 1 of 10 primary DOD missions, noting that countries such as Iran and China will continue to pursue capabilities such as electronic and cyber warfare and ballistic and cruise missiles to counter U.S. power projection capabilities and limit the operational access of U.S. forces. Other primary missions, such as operating effectively in cyberspace and space, deterring and defeating aggression, and providing a stabilizing presence, are also relevant to overcoming A2/AD challenges. The 2014 QDR maintains the emphasis on overcoming A2/AD challenges. It builds on the 2012 Defense Strategic Guidance and continues DOD’s transition to focusing on future challenges during a time of fiscal uncertainty. The QDR states that DOD must be prepared for a full range of conflicts, including against state powers with advanced A2/AD capabilities. Further, two of the QDR’s three strategic pillars—build security globally and project power and win decisively—emphasize the importance of being able to project power and overcome challenges to access. The 2014 QDR also stresses that innovation will be paramount across all of DOD’s activities in order to best address the increasingly complex operational environment. The Chairman of the Joint Chiefs of Staff has also issued guidance in the past 2 years that emphasizes the importance of overcoming access challenges. The Capstone Concept for Joint Operations: Joint Force 2020 is the foundational concept document that describes the Chairman’s vision for how the joint force will defend the nation against a wide range of security challenges and helps establish force development priorities. Among these priorities is developing capabilities to defeat A2/AD threats, which as noted above is the specific focus of the JOAC. The JOAC includes a list of 30 required capabilities that are essential to the implementation of the concept (see app. I). 
It further states that this list is neither complete nor prioritized but provides a baseline for further analysis and concept development. DOD also has a number of supporting concepts to the JOAC that provide further detail on specific aspects of operations in A2/AD environments. The first of these supporting concepts is the Air-Sea Battle Concept, which is focused on overcoming the longer-range and advanced anti-access challenges. At the direction of the Secretary of Defense, the Departments of the Navy and Air Force developed this multiservice concept focused on gaining and maintaining freedom of action in the global commons, that is, the areas of air, sea, space, and cyberspace that belong to no one state. In April 2014, the Chairman of the Joint Chiefs of Staff issued the Joint Concept for Entry Operations, a supporting concept to the JOAC focused on how forces will enter onto foreign territory and immediately conduct operations in the face of adversaries with increasingly effective area-denial strategies and capabilities. There are a number of other existing concepts, as well as concepts that are being developed, that support the JOAC (see fig. 3). The Army and Marine Corps are undertaking multiple efforts to address operational access challenges, which impact a broad range of their existing missions. In light of the rapidly changing operational environment, the Army and Marine Corps are reviewing how they will need to carry out their roles and functions in part by revising their service concepts and by conducting wargames that incorporate such challenges. Further, the Army and Marine Corps have identified several areas where they have important roles in overcoming access challenges, including engagement activities and entry operations, as well as logistics and missile defense for the Army. The services are beginning to take steps to change how they carry out these roles. The Army and the Marine Corps have begun examining the impact of operational access challenges on existing missions by revising their concepts and incorporating such challenges into their wargames. For example, the Army is revising the Army Operating Concept, which generally describes how an Army commander will operate in future environments that include A2/AD challenges, and identifies required capabilities in land operations. Given future operational challenges, the draft concept states that Army forces need to be agile, responsive, adaptive, and regionally engaged across the globe, and be able to conduct distributed operations. These distributed operations would involve Army elements arriving from numerous directions and domains to distributed locations in a joint operations area. According to the draft concept, this operational approach, also discussed in the JOAC, could help to overcome A2/AD challenges because the Army forces would be more spread out and thus more difficult to target and defend against. Once completed, the Army Operating Concept is to provide guidance for the Army’s development of supporting functional concepts, which eventually inform Army assessments of capability needs, gaps, and solutions. The Marine Corps has also incorporated consideration of A2/AD challenges into Expeditionary Force 21, its capstone concept, which provides guidance for how the Marine Corps will be organized, trained, and equipped to fulfill its assigned responsibilities over the next 10 years. 
Published in March 2014, the concept identifies the JOAC as an input and is consistent with many of its themes, including the importance of distributed operations. Expeditionary Force 21 identifies a number of challenges to Marine Corps operations caused by A2/AD threats and proposes a number of potential solutions for how the service will overcome them, including operating from amphibious ships farther from shore and using dispersed formations. According to Marine Corps officials, the service is also developing a number of supporting concepts, including some with the Navy that will further explore proposed approaches for overcoming A2/AD challenges. These officials stated that eventually this will inform Marine Corps assessments of capability needs, gaps, and solutions. The officials added that while the capstone concept has been issued and the associated analysis and innovation is under way, developing the full range of capabilities envisioned will be a long-term endeavor. In addition, the Army and Marine Corps are incorporating operational access challenges into their wargames. Services conduct wargames for multiple reasons, including mission rehearsal, concept analysis, and doctrine validation. The Army’s Unified Quest wargames explore a broad range of future conflicts and have included A2/AD scenarios. For example, the scenario for Unified Quest 2013 was set in the 2030-2040 time frame with fictional adversaries adopting hybrid warfighting approaches that used a mix of A2/AD capabilities, including integrated air defenses, cyber warfare, and anti-ship cruise missiles. The wargame explored new operating concepts, including how to effectively fight with dispersed forces. The Marine Corps’ Expeditionary Warrior wargames have also included A2/AD challenges. For example, Expeditionary Warrior 2012 was set in 2024 in a fictional country where state and nonstate adversaries were armed with A2/AD capabilities, including cyber warfare, ballistic missiles, anti-ship cruise missiles, integrated air defense systems, mines, and submarines. The Marine Corps used this wargame, in part, to explore integration with special operations, cyber, and other joint forces. Although they have functions important to overcoming the range of A2/AD challenges, the Army and Marine Corps have focused their wargames on A2/AD challenges from states and failed or failing states with less-advanced A2/AD capabilities. A primary reason for this approach, according to Army and Marine Corps officials, is that ground forces are likely to have a larger role in failed and failing state scenarios as compared with their roles in scenarios involving a peer or near-peer competitor. Further, such conflicts are more likely than a conflict with a peer competitor (see fig. 4). The officials added that the Army and Marine Corps participate in Navy and Air Force wargames that examine the A2/AD challenges posed by peer competitors. The Army and the Marine Corps have identified several areas where they have important roles in overcoming operational access challenges. According to Army and Marine Corps officials, A2/AD challenges impact a broad range of their existing missions but do not create new ones. While A2/AD challenges impact many missions, primary missions include the engagement activities and entry operations of both services, as well as logistics and missile defense for the Army. The services are beginning to take steps to change how they carry out these missions. 
Some of these efforts are expected to stretch well into the next decade and beyond. The Army and the Marine Corps play a primary role in establishing access through their engagement activities and are using these opportunities to help address A2/AD challenges, according to DOD officials. The JOAC emphasizes that success in overcoming A2/AD challenges in combat often depends on activities prior to conflict that help gain and maintain access and identifies three required capabilities for such activities. According to the JOAC, such activities include multinational exercises, basing and support agreements, improving overseas facilities, prepositioning supplies, and forward-deploying forces. These types of activities help shape favorable access conditions. For example, engagement activities such as combined training or exercises, or improving a host-nation’s infrastructure, help maintain and develop good relationships with and improve the capabilities of allies and partners that then may be called upon in the event of a crisis. Also, officials from the U.S. Pacific Command (PACOM) and the U.S. Central Command (CENTCOM) emphasized the importance of engagement activities in gaining and maintaining access and stated that continued forward presence of U.S. forces in their regions may help deter potential adversaries and reassure allies and partners by signaling U.S. commitment to that region. Moreover, DOD officials stated that having Army and Marine Corps forces forward deployed conducting engagement activities helps with access challenges because these forces are already in theater and can respond more quickly if a crisis occurs than they could if they had to deploy from the United States. Both the Army and Marine Corps are developing new approaches to their engagement activities to help shape favorable access conditions. For example, the Army is testing a new operational approach in 2014, called Pacific Pathways, that changes the way the Army supplies forces for engagement activities. Rather than sending a number of small units that each conduct a single activity for a short period of time, under Pacific Pathways the Army will send a fully equipped, combat-trained, 700-soldier battalion-sized force to participate in two or three regional exercises over the course of 90 days. Soldiers and their equipment would travel by air and sea between engagements. Similarly, the Marine Corps is also taking steps to enhance engagement activities and provide forward presence. The Marine Corps plans to have one-third of its forces forward deployed. As part of this effort, the Marine Corps is returning to the practice of rotational deployments, where units based in the United States deploy to Japan or Australia for 6 months to train, engage allies and partners in the region, and provide forward presence. According to DOD officials, these approaches allow the forces to better fulfill their respective missions while providing the combatant commanders with more options for their employment. In addition, officials from CENTCOM, PACOM, and U.S. Special Operations Command told us they are increasingly incorporating engagement activities into their planning efforts. Moreover, the JOAC states that combatant commanders will need to coordinate these efforts with other U.S. agencies that are also conducting engagement activities. 
In February 2013, we testified that as DOD continues to emphasize engagement activities, to include building partner capacity, the need for efficient and effective coordination with foreign partners and within the U.S. government has become more important, in part because of fiscal challenges, which can be exacerbated by overlapping or ineffective efforts. The Army and the Marine Corps both play a primary role in conducting entry operations in an A2/AD environment, according to DOD. Entry operations are the projection and immediate employment of military forces from the sea or through the air onto foreign territory to accomplish assigned missions. The JOAC states that maintaining or expanding operational access may require entry of Army or Marine Corps forces into hostile territory to accomplish missions, such as eliminating land-based threats or initiating sustained land operations, and identifies the ability to conduct forcible entry operations as a required capability. The Army has conducted several studies, exercises, and wargames that examine entry operations in an A2/AD environment and concluded, among other things, that it must be able to deploy decisive force much more rapidly. The Army identified a number of areas requiring improvement, including enhancing engagement with friends and allies, increasing the ability to deploy small units, reducing logistics demands, and greatly advancing technologies such as vertical lift, lighter yet survivable vehicles, missile defenses, and command and control. Moreover, for Army airborne units, the Army has identified the need for capabilities such as weapon systems and vehicles that can be air-dropped in a location and provide forces with long-range, precision firepower; mobility across a range of terrain; and protection, among other things. It has further outlined an approach intended to achieve some improvements by 2025 and to have significantly improved forces in the 2040 time frame. The Marine Corps is also examining how to conduct entry operations in an A2/AD environment. According to the Marine Corps, the joint force has become brittle and risk averse because of its reliance on a small number of very advanced and expensive weapons systems that are increasingly vulnerable to A2/AD capabilities. A key force priority for overcoming A2/AD challenges is resilience, according to PACOM officials. To increase resilience, the Marine Corps is developing the idea of using a greater number of highly mobile capabilities on expeditionary advanced bases—small, temporary, austere, and distributed bases that can be established for a variety of purposes. For example, the Marine Corps could use land-based anti-ship missiles on small mobile platforms to control sea-lanes. However, according to the Marine Corps, pursuing this idea would require it to obtain new missile capabilities as well as more flexible supply and command and control systems than are currently in place. Additionally, the Marine Corps is examining operating short-takeoff/vertical-landing-capable joint strike fighters from small distributed bases; however, according to the Marine Corps, it has not yet determined the supportability requirements for this aircraft in austere environments. The Marine Corps is aware of such challenges and is in the early stages of addressing them. It has not yet completed the concepts and follow-on analyses needed to support the implementation of these ideas, according to Marine Corps officials. 
The Army has a fundamental role in providing logistics support in an A2/AD environment, according to DOD, and the JOAC states that increased threats and operational demands of future operations in such environments may present challenges for logistics. Specifically, the JOAC states that logistics hubs and networks may be increasingly vulnerable to attack by adversaries with A2/AD capabilities, such as cyber, counterspace, and ballistic missiles. Further, one of DOD’s and the Army’s approaches to conducting operations in an A2/AD environment is to use multiple smaller units operating independently, but supporting such units is more logistically demanding. The JOAC identifies three required capabilities for logistics, but also notes that new logistics concepts are needed to explore the challenges to logistics in an A2/AD environment and to help define required capabilities. Also, a study examining the impacts of the JOAC on joint logistics echoed this need. According to officials from the Joint Staff and the Army, they have begun revising the Joint Concept for Logistics, in part, to include A2/AD challenges. In addition, the Army is examining how it might address A2/AD challenges related to logistics. One way that the Army is proposing to mitigate the problem of increased demands on logistics is to focus efforts on decreasing the Army’s and the joint force’s demand for items such as fuel, water, and ammunition. For example, the Army’s Functional Concept for Sustainment, issued in October 2010, states that during operations in Iraq, 22 percent of all convoys into the theater per year were for fuel. The concept states that technological advances are needed to reduce the fuel demand for vehicles and energy production, among other things. In addition, the Army is exploring unmanned distribution of supplies in theater to help provide timely sustainment and reduce the exposure of soldiers to potential threats. A 2013 Army Unified Quest wargame report stated that while this technology could provide benefits, additional study is needed to understand how and when automated systems should be used, as well as the costs, such as those for maintenance, that would be involved. Another primary Army contribution to overcoming A2/AD challenges is providing active missile defenses, according to DOD. The JOAC notes that the increasing accuracy, lethality, and proliferation of ballistic and cruise missiles are a key A2/AD challenge. Further, such capabilities are attractive to potential adversaries because they are cost-imposing: that is, defenses against ballistic and cruise missiles tend to be more costly than the missiles themselves. According to DOD, adversaries will use ballistic and cruise missiles to counter U.S. power projection capabilities by attacking forward bases, naval forces, and logistics support and command and control capabilities. The JOAC therefore identifies expeditionary missile defense as a required capability for overcoming access challenges. Land-based missile defense is a core Army function and a main element of DOD’s force structure, according to DOD. Although the JOAC does not provide a clear definition of what constitutes expeditionary missile defense, several characteristics of the Army’s missile defense force structure indicate that it does not meet this required capability, including the following: Mobility/supportability—The JOAC emphasizes the need for smaller and highly mobile systems requiring little support. 
Current Army missile defenses are transportable but lack strategic and tactical mobility, according to the Army. They also have large logistical requirements. Capacity—According to DOD, demand for missile defenses, including those provided by the Army, exceeds capacity. Missiles are the core of adversary A2/AD capabilities, and growing adversary missile inventories and improving capabilities will exacerbate capacity issues. Cost—According to DOD, current missile defenses are very expensive. By pursuing increasingly advanced missiles, adversaries are able to impose costs on the United States. Army and Army-sponsored reviews recognize some of these difficulties and have recommended that more attention be paid to other, less costly technologies that can protect against large numbers of missiles, such as directed energy weapons and railguns. DOD’s Strategic Capabilities Office is working with the Navy and others to develop a railgun that can provide cost-effective land-based ballistic and cruise missile defense capability. The office is also exploring the use of projectiles with sensors and existing guns, including Army artillery, to shoot down cruise missiles. These alternatives could provide high-capacity, cost-effective missile defense capabilities, but they have not yet matured into programs, according to the Strategic Capabilities Office. According to the Army, power generation, storage, and mobility issues associated with directed energy weapons and railguns will be resolved in the 2040 time frame. DOD is developing an implementation plan for the JOAC in order to bring coherence to the department’s many simultaneous efforts to overcome A2/AD challenges but has not fully established measures and milestones to gauge progress. The Joint Staff is leading a multiyear DOD-wide effort to coordinate, oversee, and assess the department’s implementation of the JOAC. DOD is planning to issue the first iteration of the plan in 2014 and intends to assess and update the plan annually. However, the draft 2014 JOAC Implementation Plan is limited in scope and does not fully establish the specific measures and milestones DOD needs to allow decision makers to assess the progress the department is making, including the contributions of the Army and the Marine Corps. The Joint Staff is leading a multiyear DOD-wide effort, initiated in June 2013, to coordinate, oversee, and assess the department’s implementation of the JOAC. In order for DOD to fulfill its mission to project power despite A2/AD challenges, the 2012 Defense Strategic Guidance requires DOD to implement the JOAC. In addition, DOD guidance on concept development requires DOD to develop and execute implementation plans for joint concepts and to assess their implementation. The guidance was issued in November 2013, and the JOAC is the first joint concept to be implemented under the new guidance, according to DOD officials. They further stated that the emphasis on implementation is a significant and positive change to the guidance but will be challenging to execute. In accordance with this guidance, DOD is planning to issue the first iteration of the JOAC Implementation Plan in August 2014 and intends to assess and update the plan annually. Prior to this effort, DOD did not have a single place where it was tracking and coordinating its efforts to address A2/AD challenges, including those of the Army and Marine Corps, even though the JOAC notes that addressing A2/AD challenges requires closer integration between services than ever before. 
The draft 2014 JOAC Implementation Plan states that it is intended to provide coherence by integrating, overseeing, communicating, and assessing the various efforts being taken across DOD to create the capabilities required to overcome A2/AD challenges. The first iteration of the implementation plan—the 2014 plan—remains in draft as of July 2014. The plan is being developed by a working group led by the Joint Staff that includes members from the Army, the Marine Corps, and other DOD organizations, according to DOD officials. These officials stated that the intent was to leverage existing force development processes to gather information about current and planned activities that contributed to the implementation of the JOAC. They further noted that the JOAC implementation process may eventually address not only capability issues but also capacity issues, which officials from the Army, Marine Corps, and the combatant commands we spoke with noted were critical in terms of overcoming A2/AD challenges. Because of the large scope of the JOAC and to help familiarize stakeholders with a new process, Joint Staff officials stated that the working group decided to focus the first iteration of the plan on 10 required capabilities that it determined to be the highest priority rather than including all 30 JOAC-required capabilities. Once those capabilities were identified, officials said that working group members, including those from the Army and Marine Corps, reviewed ongoing and planned activities from their respective organizations that they believed would align with the implementation of 1 or more of the 10 prioritized capabilities. The JOAC identifies 30 required capabilities as essential to the implementation of the concept (see app. I). While the 30 capabilities are unclassified, when they are ordered in terms of priority, they become classified. Thus, the 10 capabilities that were considered the highest priority for the department are classified. The working group identified the 10 priorities by comparing DOD’s current list of prioritized gaps in the Chairman’s Capability Gap Assessment with the list of JOAC capabilities. The working group also included a special topic in the annual Chairman’s Joint Assessment that asked the services, combatant commanders, and other DOD organizations to identify the highest-priority JOAC-required capabilities. The working group compiled the activities it identified into an execution matrix of 165 discrete implementation actions, each with a timeline for completion determined by the organization responsible for the action that could span several years. Thus, for each capability, multiple organizations are simultaneously undertaking implementation actions with various timelines for completion. Joint Staff officials stated that the execution matrix revealed that DOD was already taking many actions addressing the 10 prioritized capabilities. Officials noted that the 165 implementation actions do not constitute the full effort required to complete implementation of these 10 required capabilities, and future iterations of the execution matrix will be updated as required based on analyses to identify additional discrete implementation actions. In addition, future iterations of the JOAC Implementation Plan will also include the other JOAC-required capabilities as well as required capabilities from other joint concepts that support the JOAC, according to Joint Staff officials. The draft 2014 JOAC Implementation Plan does not fully establish the specific measures and milestones DOD needs to allow decision makers to assess the progress the department is making, including the contributions of the Army and the Marine Corps. 
DOD guidance requires that all joint concepts have an implementation plan that includes measures and milestones that allow decision makers to gauge implementation progress. Further, a stated purpose of the plan is to measure progress toward the development of a joint force able to project power despite A2/AD challenges. Internal control standards in the federal government also call for agencies to provide reasonable assurance to decision makers that their objectives are being achieved and that decision makers have reliable data to determine whether they are meeting goals and using resources effectively and efficiently. Moreover, GAO’s Schedule Assessment Guide states that milestones and measures are essential for tracking an organization’s progress toward achieving intermediate and long-term goals, and helping to identify critical phases of the project and the essential activities needed to be completed within given time frames. The draft JOAC Implementation Plan identifies four stages at which the working group is to assess implementation. Implementation Actions. The working group is to assess the progress made in implementing the discrete materiel and nonmateriel actions in the execution matrix. Required Capabilities. The working group is to assess progress in implementing each JOAC-required capability based on the progress made on completing the implementation actions relevant to that capability. Operational Objectives. The Implementation Plan organizes the required capabilities into four operational objectives—the broad goals a commander must achieve in order to project power despite A2/AD challenges. The working group is to assess progress in implementing each operational objective based on the progress of the required capabilities aligned under each objective. End State. The working group is to assess progress in reaching the JOAC end state based on the implementation progress of the four operational objectives. The draft 2014 JOAC Implementation Plan includes measures and milestones for the 165 identified implementation actions but not for the other three implementation stages. Specifically, the 165 actions will be assessed as being either complete or not yet complete, according to Joint Staff officials. However, Joint Staff officials stated the working group has not yet developed the necessary measures to gauge the extent to which required capabilities, operational objectives, or the end state have been implemented. For example, the working group has not yet developed measures for how the completion of an implementation action affects the completion of the required capability to which it is tied. In other words, the aggregate of the implementation actions will show how much work has been completed—i.e., the number of actions—but it will not show how much work remains to be completed to fully implement the required capability. Thus, even if DOD completed all 165 implementation actions identified in the first plan, it currently would not be able to determine the progress in implementing the 10 required capabilities. Figure 5 shows the stages at which the draft 2014 JOAC Implementation Plan has measures and milestones. Similarly, the draft 2014 JOAC Implementation Plan does not fully identify milestones for all four implementation stages. Specifically, the plan identifies milestones for the 165 implementation actions, but not for required capabilities, operational objectives, and the end state. Moreover, the 2014 plan does not indicate if or when milestones will be established. 
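The aggregation gap described above, in which completed actions are counted without knowing how much of a required capability they represent, can be made concrete with a minimal sketch. This is purely illustrative: the draft plan does not prescribe any roll-up method, and the action names, weights, and milestone dates below are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration only: the draft JOAC Implementation Plan does not
# prescribe this (or any) roll-up method, and the action names, weights, and
# milestone dates below are invented for the example.

@dataclass
class Action:
    name: str
    weight: float    # share of the required capability this action represents
    complete: bool
    milestone: date  # due date for this individual action

def capability_progress(actions):
    """Return (actions complete, total actions, weighted share complete).

    The simple count is what the draft 2014 plan can report today; the
    weighted share is the kind of capability-level measure it lacks.
    """
    total_weight = sum(a.weight for a in actions)
    done_weight = sum(a.weight for a in actions if a.complete)
    done_count = sum(1 for a in actions if a.complete)
    return done_count, len(actions), done_weight / total_weight

# Four notional actions under one (hypothetical) required capability.
actions = [
    Action("Revise supporting concept",  0.10, True,  date(2015, 3, 31)),
    Action("Field interim materiel fix", 0.15, True,  date(2016, 9, 30)),
    Action("Update joint doctrine",      0.15, True,  date(2016, 3, 31)),
    Action("Develop objective system",   0.60, False, date(2024, 9, 30)),
]

done, total, share = capability_progress(actions)
print(f"{done} of {total} actions complete, "
      f"but only {share:.0%} of the capability is in place")
# 3 of 4 actions complete, but only 40% of the capability is in place
```

A weighted roll-up of this kind, repeated for each required capability and then across the operational objectives, is one way, though certainly not the only way, that the capability-level measures and milestones the draft plan currently lacks could be expressed.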
As an example of the missing milestones, the implementation plan does not identify when the required capability for expeditionary missile defense should be completed, and Army officials told us that plans for developing this high-priority capability may take decades. Additionally, the plan does not identify milestones for implementing the operational objective related to engagement activities, which, as noted previously, is an area in which the Army and Marine Corps play primary roles. Joint Staff officials emphasized that the 2014 JOAC Implementation Plan is the first of many iterations and was intended only to provide visibility of ongoing activities relevant to the top 10 JOAC-required capabilities. Joint Staff officials stated that they intend to include ways to assess overall implementation progress in future iterations of the plan. Specifically, the draft 2014 Implementation Plan states that the working group will establish a process to aggregate implementation actions in such a way as to allow it to gauge progress at the required capability, operational objective, and end state stages. However, the draft plan provides no detail about how or when this will be accomplished. While DOD has stated its intent to assess progress in the future, its current planning lacks specifics about the measures it will employ and how it will set milestones to gauge that progress. Consequently, the draft 2014 plan is not fully consistent with DOD guidance, federal internal control standards, and GAO’s Schedule Assessment Guide, all of which emphasize the importance of tracking an organization’s progress toward achieving its goals. Without establishing specific measures and milestones in future iterations of the JOAC Implementation Plan, DOD will not be able to gauge JOAC implementation progress and assess whether efforts by the joint force, to include the Army and the Marine Corps, will achieve DOD’s goals in desired time frames in the near and long terms. Specifically, if DOD does not have a means to assess implementation progress, it may lack assurance that Army and Marine Corps efforts to address areas such as engagement activities, entry operations, logistics support, and expeditionary missile defense fully align with the JOAC. Moreover, without an effective implementation plan that allows decision makers to track progress over time, DOD will not have the assurance that it will be able to provide commanders with the forces they need to overcome the A2/AD challenges envisioned to be faced by the joint force of 2020. The proliferation of relatively low-cost advanced technologies and the emergence of space and cyberspace as contested domains, along with the change in U.S. overseas defense posture, present DOD with a future operational environment that no longer includes the unimpeded operational access DOD has enjoyed for decades. As potential adversaries develop strategies aimed at preventing the U.S. military from arriving at the fight and complicating its freedom of action once there, DOD’s planning has shifted to focus on how to maintain its ability to project power into operational areas. While DOD may have initially emphasized the role of the Air Force and Navy in overcoming A2/AD challenges, the Army and the Marine Corps also have primary roles to play and are beginning to address these challenges. DOD’s effort to develop an implementation plan is a significant step and provides the foundation for a roadmap to move the JOAC from concept to implementation. 
However, since it does not yet include specific measures and milestones that would allow DOD to gauge JOAC implementation progress, it is not yet clear the extent to which efforts across the department to address A2/AD challenges, including those of the Army and Marine Corps, support JOAC implementation, or whether current efforts align with JOAC implementation time frames. Given that some of the department’s efforts to address JOAC-required capabilities, such as the Army’s work on missile defense, may take many years, a means to assess progress is essential. Specifically, fully establishing measures and milestones would clarify what additional steps the Army and Marine Corps may need to take to align their current efforts to address A2/AD challenges—including with respect to their key roles in engagement activities, entry operations, logistics support, and missile defense—with the required capabilities in the JOAC. Until future iterations of the JOAC Implementation Plan contain specific measures and milestones to gauge progress, DOD may find it difficult to judge whether it is on target to meet its overall goal of ensuring the joint force of 2020 can operate effectively in an A2/AD environment. To improve DOD’s ability to assess Joint Operational Access Concept implementation, including the contribution of the Army and the Marine Corps, we recommend that the Secretary of Defense direct the Joint Staff, in coordination with the Army, the Marine Corps, and other members of the working group, to establish specific measures and milestones in future iterations of the JOAC Implementation Plan to gauge how individual implementation actions contribute in the near and long terms to achieving the required capabilities, operational objectives, and end state envisioned by the department. We provided a draft of this report to DOD for review and comment. DOD provided written comments, which are summarized below and reprinted in appendix II. In its written comments, DOD partially concurred with the report’s recommendation to establish specific measures and milestones in future iterations of the JOAC Implementation Plan to gauge how individual implementation actions contribute in the near and long term to achieving the required capabilities, operational objectives, and end state envisioned by the department. In its comments, the department stated that it had previously recognized the need to assess JOAC implementation progress and that it had already begun to develop specific measures and milestones and would incorporate them into annual updates of the JOAC Implementation Plan. We noted in the report that DOD intended to include ways to assess overall implementation progress in future iterations of the implementation plan but that the draft 2014 plan did not fully establish specific measures and milestones to assess progress or provide detail for how progress would be assessed or when this would be accomplished. As also noted in the report, it is important that specific measures and milestones move beyond being able to assess progress of individual implementation actions and expand to allow the department to gauge JOAC implementation progress and assess whether efforts by the joint force, to include the Army and the Marine Corps, will achieve DOD’s goals in desired time frames in the near and long terms. In doing so, DOD will be better positioned to judge whether it is on target to meet its overall goal of ensuring the joint force of 2020 can operate effectively in an A2/AD environment. 
DOD also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, the Secretary of the Army, and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The Joint Operational Access Concept (JOAC) identifies 30 capabilities considered essential to the implementation of the concept and what the future joint force will need to gain operational access in an opposed environment. According to the JOAC, the list of required capabilities is neither complete nor prioritized but provides a baseline for further analysis and concept development. The JOAC organizes the required capabilities in eight categories as described below. 1. The ability to maintain reliable connectivity and interoperability among major warfighting headquarters and supported/supporting forces while en route. 2. The ability to perform effective command and control in a degraded and/or austere communications environment. 3. The ability to create sharable, user-defined operating pictures from a common database to provide situational awareness (including friendly, enemy, and neutral situations) across the domains. 4. The ability to integrate cross-domain operations, to include at lower echelons, with the full integration of space and cyberspace operations. 5. The ability to employ mission command to enable subordinate commanders to act independently in consonance with the higher commander’s intent and effect the necessary cross-domain integration laterally at the required echelon. 6. The ability of operational forces to detect and respond to hostile computer network attack in an opposed access situation. 7. The ability to conduct timely and accurate cross-domain all-source intelligence fusion in an opposed access situation. 8. The ability to develop all categories of intelligence in any necessary domain in the context of opposed access. 9. The ability to locate, target, and suppress or neutralize hostile anti- access and area denial capabilities in complex terrain with the necessary range, precision, responsiveness, and reversible and permanent effects while limiting collateral damage. 10. The ability to leverage cross-domain cueing to detect and engage in- depth to delay, disrupt, or destroy enemy systems. 11. The ability to conduct electronic attack and computer network attack against hostile anti-access/area denial capabilities. 12. The ability to interdict enemy forces and materiel deploying to an operational area. 13. The ability to conduct and support operational maneuver over strategic distances along multiple axes of advance by air and sea. 14. The ability to “maneuver” in cyberspace to gain entry into hostile digital networks. 15. The ability to conduct en route command and control, mission planning and rehearsal, and assembly of deploying forces, to include linking up of personnel and prepositioned equipment. 16. The ability to conduct forcible entry operations, from raids and other limited-objective operations to the initiation of sustained land operations. 17. 
The ability to mask the approach of joint maneuver elements to enable those forces to penetrate sophisticated anti-access systems and close within striking range with acceptable risk. 18. The ability to defeat enemy targeting systems, including their precision firing capabilities. 19. The ability to provide expeditionary missile defense to counter the increased precision, lethality, and range of enemy anti-access/area denial systems. 20. The ability to protect and, if necessary, reconstitute bases and other infrastructure required to project military force, to include points of origin, ports of embarkation and debarkation, and intermediate staging bases. 21. The ability to protect forces and supplies deploying by sea and air. 22. The ability to protect friendly space forces while disrupting enemy space operations. 23. The ability to conduct cyber defense in the context of opposed access. 24. The ability to deploy, employ, and sustain forces via a global network of fixed and mobile bases, to include seabasing. 25. The ability to quickly and flexibly establish nonstandard support mechanisms, such as the use of commercial providers and facilities. 26. The ability to plan, manage, and integrate contractor support in the context of operations to gain operational access in the face of armed resistance. 27. The ability to inform and influence selected audiences to facilitate operational access before, during, and after hostilities. 28. The ability to develop relationships and partnership goals and to share capabilities and capacities to ensure access and advance long-term regional stability. 29. The ability to secure basing, navigation, and overflight rights and support agreements from regional partners. 30. The ability to provide training, supplies, equipment, and other assistance to regional partners to improve their access capabilities. In addition to the contact named above, Patricia Lentini, Assistant Director; Margaret Morgan, Assistant Director; Carolynn Cavanaugh; Colin Chambers; Nicolaas Cornelisse; Amie Steele; and Erik Wilkins- McKee made key contributions to this report.
According to DOD, its ability to deploy military forces from the United States to a conflict area is being increasingly challenged as potential adversaries pursue capabilities designed to deny access. Access can be denied by either preventing an opposing force from entering an operational area or limiting an opposing force's freedom of action within an operational area. DOD has a joint concept that broadly describes how DOD will operate effectively in such access-denied environments. DOD's initial efforts have emphasized the roles of the Air Force and Navy. GAO was mandated to review the role of the Army and Marine Corps in access-denied areas. This report (1) describes Army and Marine Corps efforts to address operational access challenges and (2) analyzes the extent to which DOD is able to gauge how its efforts support implementation of its concept for future operations in access-denied environments. GAO analyzed DOD, Army, and Marine Corps concepts; reports on service-level exercises; DOD policy and guidance on concept implementation; and documents specifically related to the joint concept. GAO also interviewed cognizant DOD officials. The Army and Marine Corps are undertaking multiple efforts to address operational access challenges—challenges that impede a military force's ability to enter and conduct operations in an area—that impact a broad range of their existing missions. For example, they are incorporating operational access challenges into their wargames and revising their service concepts, which inform their assessments of capability needs, gaps, and solutions. In addition, the Army and the Marine Corps have identified important roles they play in overcoming operational access challenges and are examining ways to carry them out in access-denied environments, including engagement activities—improving access conditions through such activities as multinational exercises, prepositioning supplies, and forward presence, and entry operations—deploying forces onto foreign territory to conduct missions such as eliminating land-based threats to access. In addition, the Army has identified areas specific to its role, including logistics—sustaining forces despite increased vulnerabilities from access threats and challenges associated with new operational approaches, and missile defense—providing defense against increasingly accurate, lethal, and available ballistic and cruise missiles. The Department of Defense (DOD) is unable to gauge the extent to which its efforts to overcome operational access challenges support the implementation of the 2012 Joint Operational Access Concept (JOAC). The JOAC describes how the department will operate effectively in future operating environments with access challenges and is intended to guide the development of capabilities for the joint force of 2020. The Joint Staff is leading a multiyear DOD-wide effort, initiated in June 2013, to coordinate, oversee, and assess the department's implementation of the JOAC. DOD plans to issue the first iteration of the JOAC Implementation Plan in 2014 and to assess and update the plan annually. The draft plan focuses on the highest-priority JOAC-required capabilities and identifies related actions, but does not fully establish specific measures and milestones to gauge progress. While DOD has stated its intent to assess progress in the future, its current planning lacks specific details about the measures it will employ and the milestones it will use to gauge that progress. 
Until DOD establishes specific measures and milestones in future iterations of its implementation plan, the department will not be able to gauge implementation progress and assess whether efforts by the joint force, to include the Army and the Marine Corps, will achieve DOD's goals in desired time frames. As a result, DOD may lack assurance that efforts, including those currently being undertaken by the Army and the Marine Corps to address areas such as engagement activities, entry operations, logistics, and expeditionary missile defense, will fully align with the JOAC. GAO recommends that DOD establish specific measures and milestones in future iterations of the JOAC Implementation Plan to improve DOD's ability to gauge implementation progress. DOD agreed with the importance of assessing the plan and said it is developing measures and milestones and will continue to refine these tracking tools in the future.
PP&E consists of tangible assets, including land, that: (1) have an estimated useful life of 2 or more years, (2) are not intended for sale in the ordinary course of operations, and (3) have been acquired to be used or available for use by the entity. The amount of PP&E reported by agencies can aid in identifying agencies that may have deferred maintenance. These amounts only include PP&E owned by the federal government—not assets financed by the federal government but owned by other entities such as state and local governments. Table 1 presents the amount of PP&E reported for fiscal year 1996 by the 11 agencies that account for almost 99 percent of total reported PP&E. DOD is the largest single holder of PP&E in the federal government, controlling about 80 percent of the reported total, while the next largest holders—TVA, NASA and DOT—hold about 3 percent each. The new accounting requirements for deferred maintenance contained in SFFAS No. 6 have the potential to improve information on maintenance needs. SFFAS No. 6 requires that a line item for “deferred maintenance amounts” be presented on the statement of net cost. The statement of net cost is one of several financial statements. It is designed to report the gross and net costs of providing goods, services and benefits. Although no dollar amounts for deferred maintenance are to be reported on the statement of net cost itself and thus are not included in the net costs of activities, the explanatory notes to the financial statements must include dollar estimates of deferred maintenance. When agencies begin to disclose deferred maintenance in their fiscal year 1998 financial statements in compliance with the standard, the annual audits of agency financial statements will help ensure that whatever is reported is subject to independent scrutiny. As the objective of the financial statement audit is to obtain reasonable assurance about the financial statements as a whole, individually reported deferred maintenance amounts will receive varying levels of audit coverage depending on their materiality to the financial statements. Because of the nature of these estimates, the auditor’s assessment will depend in part, on management’s judgement of the asset condition, maintenance needs, and the methodology chosen to estimate deferred maintenance. Deferred maintenance is defined in SFFAS No. 6 as “maintenance that was not performed when it should have been or was scheduled to be and which, therefore, is put off or delayed for a future period.” Maintenance—described as the act of keeping fixed assets in acceptable condition—includes preventive maintenance and normal repairs, including the replacement of parts and structural components and other activities needed to preserve the asset so that it continues to provide acceptable service and achieve its expected life. Modifications or upgrades that are intended to expand the capacity of an asset are specifically excluded from the definition. SFFAS No. 6 recognizes that determining maintenance needs is a management function and accordingly allows management flexibility and judgment within broadly defined requirements. For example, the standard acknowledges that determining the asset condition—condition rating—is a management function because what constitutes acceptable condition may differ both across entities and for different items of PP&E held by the same entity. 
Under the standard, it is management’s responsibility to (1) determine the level of service and condition of the asset that are acceptable, (2) disclose deferred maintenance by major classes of assets, and (3) establish methods to estimate and report any material amounts of deferred maintenance. In addition, the standard has an optional disclosure for stratification between critical and noncritical amounts of maintenance. Management must decide whether to distinguish between critical and noncritical deferred maintenance amounts and, if it chooses to do so, what constitutes critical. Of the 11 agencies included in our review, nine agencies are required specifically to implement the standard for fiscal year 1998. TVA and USPS follow private sector practices in their financial statement reporting. However, TVA and USPS are included in the governmentwide financial statements and will be subject to reporting deferred maintenance under SFFAS No. 6 if their amounts prove material to the governmentwide statements. Treasury officials are addressing whether there are any significant issues regarding how to include entities in the consolidated statements that are not required to follow federal accounting standards, such as TVA and USPS. The objectives of our work were to (1) look at the plans and progress of the 11 agencies to implement the deferred maintenance requirements of SFFAS No. 6 and (2) obtain the official position of agency CFOs and IGs with respect to its implementation. To achieve these objectives, we first reviewed SFFAS No. 6, including the significant considerations made by the board in developing the standard. We then developed an interview guide covering (1) previous agency experience with maintenance reporting, (2) agency management plans for and commitment to implementing deferred maintenance reporting in compliance with SFFAS No. 6, and (3) the status of agency policies and procedures for implementing such reporting. Interviews using this guide were held with 11 agency CFOs and their related staff. In addition, because of their experience with agency financial reporting, we developed an interview guide that was used to obtain agency IGs’ views about their agency’s readiness and progress toward implementing the deferred maintenance requirements and about any previous relevant audit reports. Interviews were conducted only with the IGs at the nine agencies specifically required to implement the deferred maintenance requirements of SFFAS No. 6. Agency responses were confirmed with each agency’s CFO and IG to ensure that they accurately reflected the agency’s official position. Our work focused on departmental-level implementation efforts rather than the work of individual bureaus within an agency. We also reviewed agency financial statements, relevant policy documentation, and prior GAO and IG reports on deferred maintenance. We requested written comments on a draft of this report from agency officials. Several agencies provided comments of a technical nature, which were incorporated into this report. The Deputy CFO for DOT and the Under Secretary of Defense provided us with formal written comments, which are reprinted in appendixes XII and XIII, respectively. We conducted our work from September through November 1997 in accordance with generally accepted auditing standards. Throughout the rest of the report, unless otherwise noted, agencies refers to the nine agencies specifically required to implement the deferred maintenance requirements of SFFAS No. 6 for fiscal year 1998. 
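To make the reporting flexibility that SFFAS No. 6 allows more concrete, the following minimal sketch shows one way asset-level estimates, however derived (by condition assessment or life-cycle costing), could be rolled up into the note disclosure by major asset class, with the optional critical/noncritical stratification. The asset classes, dollar amounts, and definition of critical are hypothetical; the standard leaves all of these choices to management.

```python
from collections import defaultdict

# Hypothetical illustration only. SFFAS No. 6 leaves the estimation method
# (condition assessment or life-cycle costing), the asset classes, and the
# definition of "critical" to management; the records below are invented.
# Each record is one asset's estimated deferred maintenance, in dollars.
estimates = [
    {"asset_class": "Buildings",  "amount": 1_200_000, "critical": True},
    {"asset_class": "Buildings",  "amount":   300_000, "critical": False},
    {"asset_class": "Equipment",  "amount":   450_000, "critical": False},
    {"asset_class": "Structures", "amount":   800_000, "critical": True},
]

def disclosure_by_class(estimates):
    """Roll asset-level estimates up to the note disclosure: totals by major
    asset class, with the optional critical/noncritical stratification."""
    note = defaultdict(lambda: {"critical": 0, "noncritical": 0})
    for record in estimates:
        bucket = "critical" if record["critical"] else "noncritical"
        note[record["asset_class"]][bucket] += record["amount"]
    return dict(note)

for asset_class, amounts in disclosure_by_class(estimates).items():
    total = amounts["critical"] + amounts["noncritical"]
    print(f"{asset_class}: total ${total:,} "
          f"(critical ${amounts['critical']:,}, "
          f"noncritical ${amounts['noncritical']:,})")
# Buildings: total $1,500,000 (critical $1,200,000, noncritical $300,000) ...
```

The roll-up itself is simple; as the sections that follow describe, the harder questions are how the underlying estimates are produced and how consistently agencies apply the flexibility the standard provides.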
Historically, deferred maintenance reporting was not required, thus agencies have limited experience in developing agencywide estimates of deferred maintenance or maintenance backlogs. Although all agencies said that they have estimated maintenance needs for ad hoc and budgetary purposes, only two agencies—DOI and NASA—indicated that they have made agencywide deferred maintenance estimates. These estimates have not been audited to ensure their reliability or conformance with the new requirements included in SFFAS No. 6. Four other agencies—USDA, State, DOT, and DOD—have previously made at least partial estimates of deferred maintenance for other than financial statement reporting purposes. USDA noted that its deferred maintenance estimate included activities to expand and upgrade PP&E items—which are not considered deferred maintenance under SFFAS No. 6. State’s estimate of deferred maintenance is based upon an inventory of known facility maintenance requirements. However the CFO for State cautioned that not all of these known requirements may be deferred maintenance as defined by SFFAS No. 6. DOT noted that its estimates of deferred maintenance included the Maritime Administration and the Federal Aviation Administration, but did not include the Coast Guard. Similarly, DOD cited Air Force estimates for deferred maintenance for depot and real property but had no agencywide estimate. Three agencies—DOE, GSA, and VA—did not have deferred maintenance estimates. Although DOE was able to provide policies requiring field offices/sites to manage their maintenance backlogs, the Acting CFO told us the department has no requirement for reporting to headquarters. None of the deferred maintenance estimates, including the agencywide estimates, had been subject to an independent audit. GAO and IG reports have questioned the validity of agency estimates of deferred maintenance and maintenance backlogs. For example, GAO reports on the DOI’s National Park Service confirmed deteriorating conditions at the National Parks but questioned whether the Park Service had adequate financial and program data or controls to know the nature or extent of resource problems or the effectiveness of measures taken to address the problems. Similarly, a 1993 Department of State IG report found that while the department had progressed in identifying its maintenance and repair deficiencies, information on the maintenance backlog had not been summarized, quantified or monitored. Further, the size and scope of DOD PP&E creates special problems in its reporting of deferred maintenance, many of which have been previously reported by GAO. As noted in our May 1997 report, DOD’s changes in its definition of backlogs have led to large decreases in its “unfunded requirements” for maintenance. We also noted that the military services have expressed concern about the adequacy of funding to maintain and repair all of their facilities and have reported growing maintenance and repair backlogs. However, the services also have many excess buildings which could be demolished to avoid millions of dollars of recurring maintenance costs. Also, we recently reported that, while military service installation officials cited increases in backlogs of deferred maintenance and repair projects in recent years, reliable composite information was not available due to differences in how services develop and maintain these data. 
Further, recent efforts by the Office of the Secretary of Defense to develop a comprehensive system for performing facilities condition assessment have not been successful and systems maintained by the individual services vary in terms of their capabilities to identify funding requirements. Most recently, we reported on DOD’s plans to implement the deferred maintenance requirements for national defense assets, noting that DOD needs to expedite plans to implement this new disclosure. In particular, we recommended that DOD (1) ensure that DOD-wide policy is in place as soon as possible so that DOD can comply with the effective date of the deferred maintenance requirements, (2) establish milestones for key actions in the policy development process to ensure issuance of the policy no later than March 1998, and (3) modify the ongoing study of existing DOD methods for determining deferred maintenance to complete the study by the end of March. Although some initial steps have been taken, significant work remains to be done for all agencies to effectively implement the deferred maintenance requirements for fiscal year 1998 reporting. CFOs at the nine agencies specifically required to implement the standard for fiscal year 1998 expressed the intention to implement the deferred maintenance requirements on time. Each had designated an individual or individuals to lead this effort. None of the nine agencies had fully addressed other implementation issues. The standard specifies that management needs to (1) determine the level of service and condition of the asset that are acceptable, (2) disclose deferred maintenance by major classes of assets, and (3) establish what method—condition assessment or life-cycle—to use to estimate and report any material amounts of deferred maintenance. Thus, the development of departmental guidance to ensure consistent reporting within an agency may be particularly important given that the standard allows flexibility within broadly defined requirements. Seven agencies had not drafted departmental guidance addressing these issues as a means of ensuring consistency in reporting and facilitating the preparation of agencywide financial statements. Further, neither of the two agencies (VA and USDA) that had developed departmental guidance specifically addressing the deferred maintenance requirements provided detailed guidance on deferred maintenance beyond that included in the standard. While all agencies could articulate their approach to implementing the deferred maintenance requirements of SFFAS No. 6, only one, GSA, had a written plan outlining preparation steps and recommended completion dates for activities important to the deferred maintenance disclosure requirements. IG views on whether agencies would be ready to implement the deferred maintenance requirements were divided. Four of the nine IGs expressed confidence that their agency would implement SFFAS No. 6 promptly. IGs from two agencies—DOD, with 80 percent of reported PP&E, and DOT—stated that their agencies would not be prepared to implement the deferred maintenance requirements and the remaining three IGs were unwilling to assess agency readiness. The DOD IG indicated that DOD’s time frame for implementation would not allow sufficient time for preparation of the fiscal year 1998 financial statements. 
The IG for DOT stated that the agency has not established a formal system to centrally identify or track deferred maintenance estimates and the operating administrations of DOT do not have an accurate accountability of all assets. Table 2 provides an overview of each agency IG’s assessment of whether the agency will be prepared to implement the deferred maintenance requirements on time. At the time of our review, agencies were still in the preliminary stages of preparing to implement the deferred maintenance requirements and were taking different approaches. Given these different approaches and the flexibility provided in the standard, no single indicator provides a complete picture of agency progress towards implementing the deferred maintenance requirements. For example, an agency that has issued written but general departmental guidance could not be assumed to have made a greater level of progress than an agency that has not issued departmental guidance, but has previous experience in estimating deferred maintenance or has established working groups identifying key implementation issues. Approaches used by agencies in preparing to implement the deferred maintenance requirements fall into three general categories—revision of existing policies and procedures, issuance of minimal departmental guidance, and study of implementation issues prior to the issuance of departmental guidance. The agencies using these general approaches are discussed below. Additional detail on each agency is included in appendixes I to XI. Two agencies—DOE and NASA—plan to revise established policies on estimating and reporting deferred maintenance or maintenance backlogs. Both indicated that these policies would provide the foundation for implementing the deferred maintenance requirements under SFFAS No. 6. DOE has existing policies that require field offices to estimate and document deferred maintenance amounts, but has no requirement for reporting this information to headquarters. According to DOE’s Acting CFO, her office has reviewed DOE’s existing policies and determined that for the most part the department was complying with the requirements of SFFAS No. 6. For areas not in full compliance with the standard, the department anticipates issuing new or clarifying guidance. The Acting CFO stated that her office is working to develop a cost-effective approach for accumulating data from DOE field offices and reporting this information to the department’s headquarters. DOE’s Deputy IG stated that although it was too early to make a definitive judgment about the readiness and capability of the department to implement the deferred maintenance requirements, based on the department’s representations regarding its implementation plans, it appeared that the deferred maintenance disclosure will be auditable. Appendix IV presents additional information on DOE’s implementation efforts. NASA also anticipated that with some minor adjustments the agency’s current deferred maintenance estimating and reporting process would allow the agency to meet the deferred maintenance requirements. According to NASA’s CFO, the agency expects to have any policy revisions completed by June 30, 1998, and to meet the deferred maintenance requirements without difficulty. The CFO reported that NASA policy requires that Centers continuously assess facility conditions in a manner which results in an appropriate identification and quantification (in terms of dollars) of the backlog of maintenance and repair. 
Once deficiencies are identified, industry standard estimating guides are used to arrive at estimated repair costs. NASA’s IG expressed the view that, based on audit experience, NASA will be able to support the deferred maintenance amounts. Appendix II presents additional information on NASA’s implementation efforts. Two agencies—VA and USDA—have developed written policies specifically addressing the deferred maintenance requirements included in SFFAS No. 6. VA’s draft policy reiterates the definitional and reporting requirements for deferred maintenance but does not provide guidance on which measurement method—life cycle or condition assessment—should be used. VA’s CFO indicated that the department is leaning towards using condition assessment and that additional guidance and specific procedures would likely be established as the policy is implemented through the department. However, if no additional guidance is provided, operating units will have to determine which method to use. The CFO also noted that the general approach will be to provide guidance to units and have the units report an estimate of deferred maintenance. The CFO office plans to use a statistical account in its general ledger to compile this information to provide the basis for the disclosure. VA’s Acting IG—citing the agency’s progress in financial reporting over the last few years—stated that the department will likely be prepared to implement the deferred maintenance requirements. The VA’s Acting IG indicated that the ability to audit any deferred maintenance disclosure will depend on the department providing an audit trail and a good system of information. He also stated that a challenge will be whether the VA issues ground rules to facilities so that consistency will occur among the 173 Medical Centers and other VA units. Appendix VII presents additional information on VA’s implementation efforts. USDA’s policy calling for the implementation of the deferred maintenance requirements is outlined in the USDA Financial and Accounting Standards Manual. The policy covers the accounting standards for PP&E and deferred maintenance. According to the Acting CFO and IG, the guidance provided in this policy conforms to SFFAS No. 6. The policy provides additional guidance on asset classification beyond that included in SFFAS No. 6 but does not provide significant additional guidance with respect to estimating deferred maintenance. USDA provides its operating administrations with most of the flexibility provided to management by SFFAS No. 6—and this is reflected in the deferred maintenance section of its policy. The Acting CFO noted that operating administration managers below the departmental level are in the best position to make determinations on what is most appropriate for a particular agency within USDA. Hence, each operating administration has the option of choosing whichever of the allowable methodologies under SFFAS No. 6 it deems most appropriate. One exception to the extension of the standard’s flexibility downward is USDA’s requirement that mission area or agency management distinguish between critical and noncritical deferred maintenance amounts and disclose the basis for that determination. USDA’s IG said that since the department has disseminated the policy for implementing the standard, the individual USDA operating administrations need to develop operating procedures for estimating and reporting deferred maintenance. 
Assuming that USDA operating administrations continue to emphasize financial management and the Forest Service completes its inventory of PP&E, the USDA IG expects that USDA should be able to implement the standard on time, and the disclosure should be auditable. Appendix VIII presents additional information on USDA's implementation efforts. At the time of our review, five of the nine agencies—DOD, GSA, State, DOI, and DOT—had not yet determined the extent of detailed departmental guidance to provide with respect to implementing the deferred maintenance requirements. Most of these agencies were conducting or were planning to conduct studies to provide additional information on key implementation issues. Findings from these studies would be used to help determine the extent and content of departmental guidance. DOD contracted with the Logistics Management Institute (LMI) to assess existing DOD methods of determining, measuring, and recording deferred maintenance data for mission assets. The LMI study, which will address only DOD's mission or defense assets and will not cover general PP&E, is expected to be completed in March 1998. DOD then plans to review the results and provide financial and logistic policy for deferred maintenance. Recent GAO reports have stated that this timetable will not allow sufficient time to ensure consistent and timely deferred maintenance disclosures because the military services may not have the DOD-wide guidance in time to develop service-specific policies and procedures for fiscal year 1998 financial statements. In addition, DOD's Acting Comptroller stated that for general PP&E—other than real property—DOD has not yet determined whether amounts are material and therefore warrant reporting. DOD's IG agreed with our recommendations that completion of the LMI study be accelerated, that milestones be established, and that DOD-wide policy be in place as soon as possible so that DOD can comply with the effective date of the standard. The IG expressed the view that DOD would not be prepared to implement the deferred maintenance requirements. Appendix I presents additional information on DOD's implementation efforts. GSA worked with an independent accounting firm to develop an implementation report with recommended completion dates for several of the new federal accounting standards, including SFFAS No. 6. This report recommended that GSA develop and implement a methodology for estimating and compiling deferred maintenance costs by the first quarter of 1998. At the time of our review, GSA had not developed an estimate of deferred maintenance and had not yet developed departmental guidance. GSA's CFO stated that the agency had not yet determined whether condition assessment and/or life-cycle cost methodologies would be used, nor had it decided whether to distinguish between critical and noncritical assets. The IG, citing a lack of information, declined to express a view on whether GSA would be prepared to implement the deferred maintenance requirements. Appendix VI presents additional information on GSA's implementation efforts. Similarly, State had contracted with a firm to provide recommendations on implementing the new federal accounting standards, including SFFAS No. 6. At the time of our review, State had not developed departmental guidance on implementing the deferred maintenance requirements. According to the CFO, the department expects to develop a policy on deferred maintenance by April 1998. 
The IG believes that the department will be able to implement the deferred maintenance requirements for fiscal year 1998 but cautioned that until her office reviews the amounts, it cannot attest to their reliability. Appendix IX presents additional information on State’s implementation efforts. DOI plans to rely heavily on the findings of an internal working group in developing departmental guidance on the implementation of the deferred maintenance requirements of SFFAS No. 6. Since March 1997, a multibureau team at DOI has been studying issues surrounding the implementation of the deferred maintenance requirements. The Acting CFO reported that this team is expected to provide the agency with data on current and deferred maintenance as well as guidance on standard definitions and methodologies for improving the accumulation of necessary information. The Acting CFO believes that recommendations coming out of this team will call for uniform information and condition assessments which are supportive of the new standards. DOI is also working to standardize definitions and procedures throughout the agency. The Acting CFO stated that DOI intends to include deferred maintenance disclosures in its fiscal year 1997 Annual Report in advance of the fiscal year 1998 reporting requirements. DOI’s IG declined to express a view on whether the department would be prepared to implement the deferred maintenance requirements in fiscal year 1998. According to the IG, his office is planning to assess the deferred maintenance information provided in the department’s fiscal year 1997 financial statements. Thus, DOI’s early implementation approach should provide the agency with some indication of readiness to implement the deferred maintenance requirements for fiscal year 1998. Appendix V presents additional information on DOI’s implementation efforts. DOT is taking a decentralized approach to implementing the deferred maintenance requirements. The Deputy CFO reported that, where it is useful, DOT applies financial policies issued centrally within the Executive Branch with any necessary interpretation. DOT distributed SFFAS No. 6 to its operating administrations without additional guidance. Each operating administration will be responsible for determining how the deferred maintenance requirements will be implemented. The Deputy CFO indicated that his office will provide more detailed guidance on departmental reporting of deferred maintenance by issuing guidance for preparation of the fiscal year 1998 financial statements. The IG expressed concerns about the department’s approach and indicated that, in his view, DOT would not be prepared to implement the deferred maintenance requirements. According to the IG, although the operating administrations have the basic elements in place to implement the requirements, the department has not established a formal system to centrally identify or track deferred maintenance estimates. Further, the IG pointed out that the relevant operating administrations do not have an accurate accounting of all assets. Appendix III presents additional information on DOT’s implementation efforts. TVA and USPS follow private sector accounting standards in their financial statement reporting. However, since they are included in the consolidated financial statements of the U.S. government, TVA and USPS will be subject to reporting deferred maintenance under SFFAS No. 6 if their amounts prove material to the governmentwide financial statements. 
As of November 1997, neither USPS nor TVA reported that it had been contacted by the Treasury with regard to the deferred maintenance reporting requirements for fiscal year 1998. A Treasury official confirmed that TVA and USPS have not been contacted regarding the fiscal year 1998 implementation of the deferred maintenance reporting requirements contained in SFFAS No. 6. Treasury officials are starting to address whether there are any significant reporting issues regarding how to include, in the consolidated statements, entities that are not required to follow federal accounting standards. The TVA CFO stated that his office was unaware of the deferred maintenance requirements. Although TVA has certain estimates of deferred maintenance, the CFO noted that TVA's definition of deferred maintenance differs from that of SFFAS No. 6 and varies by category of asset. For example, he stated that for fossil fuels and hydro power, deferred maintenance is defined as repair work that is not performed on equipment if the problem has minor effects on the performance of that equipment. For building facilities, he stated that TVA defines deferred maintenance as maintenance that can be delayed indefinitely based on factors such as the change in the PP&E's function, an increase/decrease in PP&E life expectancy, and the relationship between repair, replacement, or abandonment costs. However, should TVA be required to report deferred maintenance for the consolidated U.S. government financial statements, the CFO reported that TVA would comply and would not require significant preparation time. Appendix X presents additional information on TVA's implementation efforts. The USPS CFO stated that the agency does not defer maintenance because of the potential effect such actions might have on employee safety and on-time mail delivery. The majority of USPS assets are buildings to house postal facilities, mail processing and computer equipment, and vehicles to move and deliver the mail. USPS standard maintenance plans are provided in its handbooks and policies, and funding needs for maintenance are routinely addressed in its base budget. USPS has a schedule of useful lives for equipment, and local operating management has the authority to replace items when it is not cost-effective to repair them. The USPS CFO also stated that the agency does not own any airplanes, railroad cars, or ships to move the mail; this is all done via contracts with commercial entities that also perform maintenance on this equipment. Appendix XI presents additional information on USPS's implementation efforts. Agencies will be facing a number of challenges as they continue their implementation efforts. These challenges stem from the relative newness of the deferred maintenance requirements, the inherently difficult definitional issues associated with determining maintenance spending, the need for adequate systems to collect and track data, and, perhaps most importantly, the need for complete and reliable inventories of assets on which to develop estimates. As noted earlier, agencies are taking a variety of approaches to these issues. Improving PP&E reporting is a critical step to implementing the deferred maintenance requirements because deferred maintenance estimates are contingent upon a complete and reliable inventory of PP&E. Three agencies—DOD, USDA, and DOT—comprising 84 percent of the government's total reported PP&E, received disclaimers of opinion in part because of difficulties reporting PP&E. 
A fourth agency, DOI, received a qualified opinion due to the inability to support the reported PP&E amounts at one of its bureaus, the Bureau of Indian Affairs. Even agencies with unqualified opinions may need to continue efforts to improve PP&E reporting. For example, the fiscal year 1996 financial statement audit reports for both DOE and VA described internal control weaknesses that could adversely affect the departments' future ability to accurately report PP&E. Thus, for many agencies, an appropriate step toward implementing the deferred maintenance requirements of SFFAS No. 6 is to improve their overall ability to identify and account for PP&E. DOD has received disclaimers of opinion on its fiscal year 1996 financial statements due in part to its inability to adequately account for its PP&E. DOD's IG reported that control procedures over assets were inadequate and caused inaccurate reporting of real property, capital leases, construction in progress, inventory, and preparation of footnotes. USDA also received a disclaimer of opinion on its fiscal year 1996 financial statements due in part to its inability to report PP&E at the Forest Service. In an effort to improve the accuracy of PP&E reporting, the Forest Service in USDA is undertaking a complete physical inventory of PP&E. At the same time, the Forest Service Acting CFO stated that the agency plans to estimate deferred maintenance needs. The USDA IG concurs that a critical step toward implementing the deferred maintenance requirements of SFFAS No. 6 for USDA is to develop complete inventories of its PP&E; without this inventory, the IG states that little reliance could be placed on estimates reported for deferred maintenance. Similarly, the DOT IG stated that DOT is focusing its efforts on correcting weaknesses identified in prior financial statement audits, particularly in the area of PP&E. Because deferred maintenance is linked to PP&E, the IG acknowledged that DOT needs to identify and validate PP&E before estimating amounts of deferred maintenance. At DOI, reporting issues are limited to the Bureau of Indian Affairs, which in fiscal year 1996 could not provide adequate documentation or reliable accounting information to support $170 million in PP&E. DOI intends to include deferred maintenance disclosures in its fiscal year 1997 Consolidated Annual Report even though they are not required until fiscal year 1998. Since the IG plans to assess DOI's fiscal year 1997 deferred maintenance disclosures, DOI's early implementation approach should provide the agency with some indication of its readiness to implement the deferred maintenance requirements for fiscal year 1998. Even for agencies where independent audits indicated no report modifications pertaining to PP&E, the deferred maintenance requirement of SFFAS No. 6 presents a significant challenge. While providing useful new information for decision-making, the deferred maintenance requirements raise a number of new implementation and definitional issues—such as determining the acceptable condition of assets and the estimation methods to be used. The necessary flexibility in the standard increases the need for some departmental policies and guidance that are designed to be compatible with agency mission and organizational structure. Such departmental guidance could help ensure consistent reporting across agency units and facilitate the preparation of agencywide financial statements. 
However, the development of departmental guidance is complicated by the number and diversity of missions and assets even within a single agency. In determining the extent of additional departmental guidance to provide units, agencies must balance the desirability of consistent reporting with the need for flexibility. In addition, adequate data collection and tracking systems will be necessary to gather and verify information on deferred maintenance costs. However, as we have previously reported and as was acknowledged in June 1997 by both OMB and the Chief Financial Officers Council, the condition of agency financial systems remains a serious concern. Our past audit experience has indicated that numerous agencies' financial management systems do not maintain or generate the original data needed to readily prepare financial statements. Although some recent improvements have been made, agencies are still struggling to comply with governmentwide standards and requirements. Overcoming the above challenges to help ensure reliable and meaningful reporting at the departmental level is critical to the effective implementation of the deferred maintenance requirements. If effectively implemented, the new deferred maintenance reporting required by SFFAS No. 6 will improve information for decision-making. However, the deferred maintenance requirements present agencies with a significant challenge for which they must adequately prepare. While agencies have taken some initial steps to implement the deferred maintenance requirements, significant work remains in order for all agencies to effectively implement them on time. Moreover, agencies need to continue to address their systems problems so that this and other reporting requirements can be effectively met. Since agencies are most responsive to issues in which there is demonstrated interest, continued congressional and executive branch oversight would increase the chances of the standard being implemented successfully and on time. Monitoring of agency progress toward implementation, including the development of appropriate departmental guidance compatible with agency mission and organizational structure, could help ensure effective and timely implementation. Agency officials generally concurred with our conclusion that significant work remains to effectively implement the deferred maintenance reporting requirements under SFFAS No. 6. Several agencies provided comments of a technical nature, which were incorporated into our report as appropriate. Two agencies, DOT and DOD, expressed reservations about certain sections of this report. The Deputy CFO for DOT indicated that a formal system for tracking deferred maintenance is not required by SFFAS No. 6. While this observation is technically correct, we anticipate that agencies will adequately document their deferred maintenance disclosures and estimates. Accurate tracking of PP&E and deferred maintenance can be a valuable management tool for assessing program status and supporting resource allocation decisions on an ongoing basis. The Deputy CFO for DOT also did not believe that it was necessary to fully validate PP&E prior to estimating deferred maintenance. While we believe that the lack of an accurate accounting of PP&E will certainly impede any efforts to implement the deferred maintenance requirements, we agree that implementation of SFFAS No. 6 can—and should—proceed, even as agencies are continuing to validate their PP&E reporting. 
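To illustrate the point that estimation can proceed in parallel with PP&E validation, the following minimal sketch (written in Python, with entirely hypothetical asset identifiers, field names, and dollar amounts that are not drawn from any agency's records) reconciles unit-level deferred maintenance estimates against a partially validated PP&E inventory, disclosing the supported portion and flagging the rest for follow-up as validation continues.

# Hypothetical PP&E inventory; True indicates the recorded amount has been validated.
pp_e_inventory = {
    "BLDG-001": True,
    "BLDG-002": False,
    "SHIP-014": True,
}

# Hypothetical unit-level deferred maintenance estimates.
deferred_maintenance_estimates = [
    {"asset_id": "BLDG-001", "estimate": 1_200_000},
    {"asset_id": "BLDG-002", "estimate": 450_000},
    {"asset_id": "SHIP-014", "estimate": 3_750_000},
    {"asset_id": "BLDG-999", "estimate": 80_000},   # not in the inventory at all
]

disclosable, needs_followup = 0, []
for item in deferred_maintenance_estimates:
    # Count an estimate toward the disclosure only if its asset is validated.
    if pp_e_inventory.get(item["asset_id"]):
        disclosable += item["estimate"]
    else:
        needs_followup.append(item["asset_id"])

print(f"Deferred maintenance supported by validated PP&E: ${disclosable:,}")
print("Estimates needing PP&E validation or research:", needs_followup)

Run on the hypothetical data above, the sketch would disclose $4,950,000 and flag BLDG-002 and BLDG-999, illustrating how the disclosed amount grows as the inventory is validated rather than waiting for validation to finish.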
The Under Secretary of Defense, while stating that DOD is striving to comply with the reporting requirements, questioned whether the benefits to be derived from reporting deferred maintenance will be proportionate to the effort required to obtain and report this information. He noted that deferred maintenance reporting represents a "snap-shot" at a specific time and stressed that, given the diversity of DOD's PP&E systems, implementation of the requirements represents a significant challenge that will be costly in terms of both funding and personnel. He expressed concern that the report does not assess the impact on agencies of implementing and complying with the standard. Federal financial accounting standards, including the deferred maintenance requirements, are developed by FASAB using a due process and consensus-building approach that considers the financial and budgetary information needs of the Congress, executive agencies, and other users of federal financial information, as well as comments from the public. DOD is a participant in these proceedings and has a member on FASAB. In its deliberations on deferred maintenance reporting, FASAB considered both the need to improve information on the condition of federal assets and the complexities of measuring and reporting this information. FASAB determined that deferred maintenance was a cost and that information on this cost was important to users. However, in recognition of measurement challenges and the limitations in the capacity of agency systems, FASAB developed the standard to provide entities flexibility in setting maintenance requirements and in establishing cost-beneficial methods to estimate deferred maintenance amounts. The standard allows management flexibility to define deferred maintenance at a level meaningful for the agency. For example, acceptable asset condition is a management determination—the level of detailed information obtained is dependent on management's determination of decisionmakers' needs. As discussed in the Statement of Federal Financial Accounting Concepts No. 1, Objectives of Federal Financial Reporting, federal financial reporting is intended to address four broad objectives—budgetary integrity, operating performance, stewardship, and systems and controls. Disclosure of deferred maintenance amounts is consistent with these objectives. The systematic financial reporting of deferred maintenance can improve information on operating performance and stewardship and thus assist in determining how well government assets are maintained. In contrast, the usefulness of DOD's current reporting of deferred maintenance through the budget process is more limited because estimates are developed on an ad hoc basis and reporting is inconsistent among the military services and weapons systems. In addition, much of the data used are based on anticipated budgetary resources and not subjected to independent audit. The disclosure of deferred maintenance is an important management issue. Management should have this information throughout the year to assess the status of management programs and to support resource allocations on an ongoing basis. In the case of DOD, deferred maintenance applicable to mission assets, if reliably quantified and reported, can be an important indicator of mission asset condition (a key readiness factor) as well as an indicator of the proper functioning of maintenance and supply lines. Disclosure of deferred maintenance can also aid in supporting budget and performance measurement information. 
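The sensitivity of the disclosed amount to management's definition of acceptable condition, noted above, can be seen in the short sketch below (Python; the assets, condition ratings, repair costs, and rating scale are hypothetical and are not taken from the standard or from any agency).

# Each hypothetical deficiency carries a condition rating (1 = worst, 5 = best)
# and an estimated repair cost.
deficiencies = [
    {"asset": "Hangar roof",      "condition": 2, "repair_cost": 900_000},
    {"asset": "Runway lighting",  "condition": 3, "repair_cost": 250_000},
    {"asset": "Admin HVAC",       "condition": 4, "repair_cost": 120_000},
]

def deferred_maintenance(items, acceptable_condition):
    """Sum repair costs for assets rated below the management-set threshold."""
    return sum(d["repair_cost"] for d in items if d["condition"] < acceptable_condition)

# The same deficiency data yields different disclosures under different thresholds.
print(deferred_maintenance(deficiencies, acceptable_condition=3))  # 900000
print(deferred_maintenance(deficiencies, acceptable_condition=4))  # 1150000

Under the looser threshold only the hangar roof counts as deferred maintenance; under the stricter threshold the runway lighting is added as well, which is why consistent, documented condition criteria matter for comparability across units.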
Because all financial reporting, including deferred maintenance, represents a "snap-shot" at a specific period in time, the value of financial reporting lies in the data collection and reporting systems developed to create and support the financial statements. Thus, the data systems that support financial statements can provide the capability for monitoring and managing day-to-day operations. We recognize that the deferred maintenance requirement presents DOD with a challenge in determining and disclosing reliable estimates of deferred maintenance—especially in addressing the broad range of financial management systems problems facing DOD. DOD's longstanding system problems are repeatedly cited as reasons for inadequate financial information. Development of reliable systems should enhance DOD's ability to meet financial reporting requirements, including the deferred maintenance disclosure requirement. The flexibility provided in the standard and the diversity of agency missions and assets increase the importance of department-level guidance to ensure consistent reporting. For example, we recently highlighted the key issues to be considered in developing guidance for disclosing deferred maintenance on aircraft. In particular, we noted that implementing guidance is needed so that all military services consistently apply the deferred maintenance standard. This means that DOD must address a number of issues, including (1) determining what constitutes acceptable condition, (2) determining whether—or how—to distinguish between critical and noncritical deferred maintenance, (3) determining when maintenance needed but not performed is considered deferred, and (4) determining whether deferred maintenance should be reported for assets that are not necessary for current requirements. In this and another report, DOD concurred with our statements citing the need for developing guidance promptly in order to ensure timely implementation of the deferred maintenance reporting requirements. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 15 days from its date. Then, we will send copies to the Ranking Minority Member of the Senate Appropriations Committee. Copies will also be made available to others upon request. Please contact me at (202) 512-9573 if you or your staff have any questions concerning this report. As of September 30, 1996, DOD reported PP&E of $772.9 billion. Of this amount, $586.5 billion is military equipment, $123.0 billion is structures, facilities, and leasehold improvements, and $63.5 billion is construction in progress and other types of general PP&E. DOD holds approximately 80.5 percent of the federal government's reported PP&E. Problems with PP&E reporting contributed to a disclaimer of opinion on DOD's fiscal year 1996 financial statements. In particular, DOD's IG stated that control procedures over assets were inadequate and caused inaccurate reporting of real property, capital leases, construction in progress, inventory, and preparation of footnotes. For fiscal year 1996, DOD received a disclaimer of opinion from the Office of the Inspector General based upon a scope limitation. The IG stated that although progress had been made, significant deficiencies in the accounting systems and the lack of a sound internal control structure prevented the preparation of accurate financial statements. The DOD Acting Comptroller stated that the agency intends to implement the new deferred maintenance requirements of SFFAS No. 
6 as required for fiscal year 1998. The DOD Acting Comptroller reported that the agency has contracted with the Logistics Management Institute (LMI) to perform a study to assess existing DOD methods of determining, measuring, and capturing deferred maintenance data on National Defense PP&E. DOD expects the LMI study to be completed in March 1998; until then, the Acting Comptroller was uncertain whether additional changes would be required to achieve full implementation of the standard. For other types of general PP&E not addressed by the LMI study, the DOD Acting Comptroller reported that the agency is actively reviewing existing methods for reporting and tracking maintenance and deferred maintenance within the budget process to determine whether modifications must be made or new reporting requirements developed to achieve full compliance with SFFAS No. 6. However, current maintenance and deferred maintenance estimates do not reflect SFFAS No. 6 PP&E categories. The Acting Comptroller also noted that the agency has identified an individual responsible for managing DOD's effort to determine what will be reported and how reporting will be accomplished for the deferred maintenance requirements of SFFAS No. 6. In September 1997, the Deputy CFO issued a memorandum to DOD components stating that all eight of the accounting standards would be incorporated into the DOD Financial Management Regulation. As of November 1997, the DOD Acting Comptroller reported that the agency has not determined its measurement methodology, its application to different classes of assets, or whether it will report both critical and noncritical amounts of deferred maintenance. In all cases, the Acting Comptroller reported that these decisions will be made after the completion of the deferred maintenance study being conducted by LMI. For general PP&E, the DOD Acting Comptroller stated that the agency has not determined whether the amounts of deferred maintenance, other than for real property, are material and warrant reporting. The Acting Comptroller reported that DOD currently uses the condition assessment survey method for real property. The DOD Acting Comptroller reported experience with deferred maintenance reporting through estimates of deferred maintenance in exhibits that support the DOD budget request. For example, deferred maintenance on weapons systems is reported through the agency's Depot Maintenance Program, while deferred maintenance of real property (buildings and facilities, and housing units) is reported on budget exhibits as the backlog of maintenance and repair. The DOD Acting Comptroller stated that the agency believes that this process captures the majority of deferred maintenance but does not necessarily capture all deferred maintenance. The DOD Acting Comptroller reported that the agency's deferred maintenance estimates are developed by lower-echelon organizations as they build their individual budget requests and report on their level of maintenance activity. Estimates must then be consolidated at the departmental or component levels of DOD. The DOD Acting Comptroller stated that, with perhaps some modification, the agency's deferred maintenance estimates would satisfy SFFAS No. 6 and departmental compliance requirements. 
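As an illustration of that consolidation step only, the following sketch (Python; the components, asset categories, criticality flags, and dollar figures are hypothetical, not actual DOD data) rolls unit-level estimates up to component and departmental totals while preserving the critical/noncritical distinction that a reporting entity may elect to disclose.

from collections import defaultdict

# Hypothetical estimates submitted by lower-echelon organizations.
unit_estimates = [
    {"component": "Army",      "category": "Real property",   "critical": True,  "amount": 2_100_000_000},
    {"component": "Army",      "category": "Real property",   "critical": False, "amount": 650_000_000},
    {"component": "Navy",      "category": "Weapons systems", "critical": True,  "amount": 1_400_000_000},
    {"component": "Air Force", "category": "Real property",   "critical": False, "amount": 300_000_000},
]

by_component = defaultdict(float)
by_category_criticality = defaultdict(float)
for e in unit_estimates:
    by_component[e["component"]] += e["amount"]
    by_category_criticality[(e["category"], e["critical"])] += e["amount"]

department_total = sum(by_component.values())
print("Component totals:", dict(by_component))
print("Category / criticality totals:", dict(by_category_criticality))
print(f"Departmental total: ${department_total:,.0f}")

The design point is simply that consistent categories and a consistent critical/noncritical definition must be applied at the unit level before consolidation; otherwise the departmental totals mix incomparable amounts.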
The DOD Acting Comptroller stated that the primary challenges for DOD are (1) addressing the magnitude and diversity of DOD PP&E, (2) implementing and applying new standards and policy across the Military Services and Defense Agencies, each of which operates differently, and (3) modifying or establishing reporting requirements through existing or new automated systems. The DOD Acting Comptroller also indicated that new barriers to implementation may be identified when LMI completes its study. The IG stated that DOD will not be prepared to implement the deferred maintenance requirements. The IG cited GAO's report, which recommends that the study be expedited, and asserted that it is virtually impossible for DOD to receive the study results from LMI, develop a policy and implementing guidance, and get all the information from the Military Services in time for preparation of the fiscal year 1998 financial statements. The IG stated that her office will be able to audit any deferred maintenance amounts only if DOD has issued a policy and implementing guidance. The IG indicated that her office would need to begin preliminary work no later than March 1998 in order to complete the audit on time. The IG stated that DOD's size and diversity present special problems on two key issues: timeliness and consistency. The IG also believes that DOD needs to issue specific guidance promptly to the Military Services to assure consistent application across the services. For fiscal year 1996, NASA reported general PP&E of $26.4 billion. Of this amount, $9.2 billion is space hardware; $5.9 billion is structures, facilities, and leasehold improvements; $5.0 billion is work in process; $5.0 billion is equipment; and the remaining $1.4 billion is land, special tooling and test equipment, and assets under capital lease. NASA holds approximately 2.7 percent of the federal government's reported PP&E. For fiscal year 1996, NASA received an unqualified opinion on its financial statements from the independent public accountant (IPA) contracted for and monitored by the NASA Office of Inspector General. The IPA determined that the financial statements present fairly, in all material respects, the financial position of NASA as of September 30, 1996. The CFO stated that NASA will implement the new deferred maintenance requirements of SFFAS No. 6 for fiscal year 1998 as required. The NASA CFO reported that, while his office is exploring whether there is reportable deferred maintenance on other types of assets, the agency expects the new reporting requirements to primarily affect facilities maintenance, for which it has a system that captures some deferred maintenance data. NASA does not anticipate reporting deferred maintenance for other types of PP&E. For example, program equipment or contractor-held property is considered mission-critical and is subject to very stringent safety and quality measures. As a result, maintenance would need to be performed immediately or the equipment would be replaced. The CFO believes that minor adjustments to NASA's facilities maintenance system will allow it to meet the new reporting standard. The NASA CFO has also designated an individual in charge of compliance issues related to the deferred maintenance requirements of SFFAS No. 6. The NASA CFO reported that agency policy will be developed and documented by June 30, 1998; however, NASA does not plan to change its overall approach to reporting deferred maintenance. 
NASA also uses and will continue to use the condition assessment method in determining levels of deferred maintenance for facilities. Although the CFO stated that the agency plans additional work to determine whether the same methodology will be used for all assets, the CFO is fairly confident there will be no deferred maintenance on these items. The CFO for NASA reported that the agency currently has an estimate of deferred maintenance on facilities as of the end of fiscal year 1996. The CFO reported that NASA policy requires that Centers continuously assess facility conditions in a manner which results in an appropriate identification and quantification (in terms of dollars) of the backlog of maintenance and repair. Once deficiencies are identified, industry standard estimating guides are used to arrive at estimated repair costs. Estimates of deferred maintenance have not been validated; the CFO stated that NASA is reviewing SFFAS No. 6 to determine if additional work is needed to comply with the deferred maintenance requirements. The NASA policy regarding facilities maintenance is a public document available on the Internet. The CFO stated that a key challenge for NASA lies in implementing the deferred maintenance requirements and other accounting standards at the same time that it is implementing a wholly new, integrated financial system and a full cost accounting, budgeting and management system with declining human resources. However, the CFO stated that the agency believes it has the appropriate expertise to make maintenance estimates. In cases where workloads necessitate additional resources, the CFO for NASA reported that the agency could use contractor assistance. Based upon NASA’s audit history, the IG believes that NASA will be able to implement the deferred maintenance requirements as required by fiscal year 1998. The IG reported that a key issue in auditing deferred maintenance reporting for NASA will be to determine whether the measurement method was properly and consistently applied across the different Centers. For fiscal year 1996, DOT reported general PP&E of $24.4 billion. Of this amount, $10.4 billion is structures, facilities, and leasehold improvements; $8.5 billion is equipment; $3.4 billion is construction in progress; $1.6 billion is aircraft; and the remaining $0.4 billion is in land, assets under capital lease, ADP software, and other PP&E. DOT holds approximately 2.5 percent of the federal government’s reported PP&E. Problems with PP&E reporting contributed to a disclaimer of opinion on DOT’s fiscal year 1996 consolidated financial statement. In particular, the IG’s report cited PP&E as a material weakness, stating that several DOT operating administrations did not (1) report all PP&E that should be reported, (2) maintain accurate subsidiary property records, (3) retain documentation to support the value of property and equipment, (4) reconcile subsidiary property records with general ledger property and equipment accounts, and (5) post property and equipment transactions to the proper general ledger asset accounts. For fiscal year 1996, DOT received a disclaimer of opinion on its Consolidated Statement of Financial Position from the IG. The IG noted that several operating administrations had not reconciled all general ledger balances to subsidiary records, affecting the Property and Equipment and Operating Materials and Supplies accounts. 
Also, a lack of records meant that the IG could not determine whether the balances reported for the corresponding material line items were fairly presented. In addition, the IG found that operating administrations were expensing amounts that should have been capitalized, resulting in an understatement of assets. The Deputy CFO for DOT stated that the agency intends to report deferred maintenance in its fiscal year 1998 financial statements. The Deputy CFO for DOT stated that implementation of the deferred maintenance requirements of SFFAS No. 6 will occur primarily in the Maritime Administration (MARAD), the Federal Aviation Administration (FAA), and the U.S. Coast Guard (USCG). Since the requirement was not effective until fiscal year 1998, the Deputy CFO reported that his office has been focusing its efforts on correcting material weaknesses identified in prior financial statement audits, particularly in the area of PP&E. However, the Deputy CFO stated that the inability to support all of the account balances with subsidiary detail, although it impairs implementation of the deferred maintenance requirements, does not preclude an agency from implementing the standard. He further noted that for the validated PP&E, implementation can proceed and estimates may be made where aggregate asset information is available. Also, the Deputy CFO stated that in cases where assets may not have been fully validated for financial statement purposes, he believes that deferred maintenance estimates can still be developed. He described these actions as an evolving process that will continue to improve with increased data accuracy and integrity. The Deputy CFO for DOT reported that a key unresolved question for the agency is how to place assets in proper categories for financial reporting on deferred maintenance. He also stated that he would be interested in guidance regarding acceptable reporting formats. The Deputy CFO has designated an individual in charge of compliance with this new reporting standard. The Deputy CFO for DOT reported that the agency has distributed SFFAS No. 6 to departmental CFO and accounting offices. He further noted that the CFO's office will provide more detailed guidance on departmental reporting at a later date and that any additional policy would be limited to supplementary information used to clarify areas not clearly defined in accounting standards or by the central agencies (e.g., Treasury or OMB). The Deputy CFO reported that DOT is likely to use both condition assessment and life-cycle cost, depending on the operating administration and the classes of assets. He stated that MARAD has documented and communicated the reporting requirements to staff and contractors who need to make the estimate, while other operating administrations have not completed this step. The Deputy CFO reported that three relevant operating administrations currently have standards that define "acceptable condition" for major assets. The Deputy CFO reported that each of the three operating administrations for which major deferred maintenance occurs has its own maintenance plans for different types of PP&E. MARAD ships and facilities undergo periodic inspections that typically result in the discovery of deficiencies. If funds are available, the appropriate repairs are made; otherwise, the requirements are made known and funds are requested during the normal budget process. In some cases, this process is automated and spending plans are developed from the database of information. 
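The inspect-and-fund cycle described for MARAD can be sketched in a few lines (Python; the deficiencies, priorities, repair costs, and funding level below are hypothetical, used only to show the mechanics): deficiencies found during periodic inspections are repaired to the extent funds are available, and the unfunded remainder is carried as a backlog and requested through the normal budget process.

# Hypothetical deficiencies identified during an inspection cycle.
deficiencies = [
    {"item": "Hull plating repair",   "cost": 400_000, "priority": 1},
    {"item": "Crane overhaul",        "cost": 250_000, "priority": 2},
    {"item": "Pier deck resurfacing", "cost": 300_000, "priority": 3},
]

available_funds = 500_000
funded, backlog = [], []

# Fund the highest-priority repairs first; anything unaffordable joins the backlog.
for d in sorted(deficiencies, key=lambda d: d["priority"]):
    if d["cost"] <= available_funds:
        available_funds -= d["cost"]
        funded.append(d["item"])
    else:
        backlog.append(d)

print("Funded this cycle:", funded)
print("Carried as backlog (budget request):", [d["item"] for d in backlog],
      "totaling", sum(d["cost"] for d in backlog))

With these assumed figures, only the hull repair is funded and the other two items, totaling $550,000, remain as the maintenance backlog to be requested in the budget, which is the kind of amount a deferred maintenance disclosure would draw on.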
The Deputy CFO reported that FAA relies on the General Maintenance Handbook for Airway Facilities, which addresses maintenance requirements, policies, and procedures for specific assets. FAA estimates of deferred maintenance address only certain areas, such as those covered by its capital investment plan for building and equipment modernization. Finally, the Deputy CFO reported that the USCG has standard maintenance plans/requirements for its aviation, naval, electronic, and shore assets. USCG does not currently have an estimate of deferred maintenance. According to the Deputy CFO, USCG uses these plans to (1) ensure that priority maintenance is accomplished, (2) estimate budget requirements, (3) plan and program resources to meet mission objectives, and (4) ensure that total expenditures stay within budgetary limits. The Deputy CFO cited three key challenges to implementing the deferred maintenance requirements of SFFAS No. 6, namely (1) identifying and validating an inventory of existing PP&E, (2) obtaining and allocating the resources necessary to estimate, document, and report deferred maintenance requirements, and (3) establishing a system that centralizes data for management and analysis to meet deferred maintenance reporting requirements. The DOT IG does not believe that DOT will be prepared to implement the deferred maintenance requirements. Based on his office's experiences in auditing property and equipment at DOT, the IG is concerned that amounts for deferred maintenance reporting may not be adequately supported and documented. The IG stated that in order to effectively implement the deferred maintenance requirements, the operating administrations must complete physical inventories of assets to determine what is actually owned and properly account for these assets. The IG stated that DOT must issue implementing guidance for SFFAS No. 6, which should identify the types of assets qualifying for deferred maintenance. Finally, the IG stated that DOT, in coordination with the operating administrations, must establish an internal control system that requires documenting the process of estimating amounts of deferred maintenance and accurately reporting these estimates. The IG stated that the key issues in auditing the deferred maintenance amounts are that (1) each operating administration obtain an accurate inventory of property and equipment, (2) DOT have formal written policies on deferred maintenance to ensure consistency among the operating administrations, and (3) DOT establish a formal system to effectively track and adequately document amounts reported for deferred maintenance costs. For fiscal year 1996, DOE reported general PP&E of $22.0 billion. Of this amount, $11.9 billion is structures and facilities, $5.9 billion is equipment, $3.7 billion is construction work in process, and the remaining $0.5 billion is land, ADP software, and natural resources. DOE holds approximately 2.3 percent of the federal government's reported PP&E. For fiscal year 1996, DOE received an unqualified opinion on its financial statements from the DOE IG. The IG noted that the financial statements present fairly, in all material respects, the financial position of the department as of September 30, 1996. The IG did note, however, in its report on Internal Control Structure accompanying the financial statements, that the department needed to strengthen its internal control system over PP&E. The Acting CFO stated that DOE plans to implement the deferred maintenance requirements of SFFAS No. 
6 in fiscal year 1998 as required. In preparation for implementing the standard, the Acting CFO reported that DOE has reviewed the deferred maintenance requirements and considers that, for the most part, the agency is complying with the requirements. For example, DOE's Life Cycle Asset Management Order (LCAM) currently requires estimates of deferred maintenance, specifically requiring the management of backlogs associated with maintenance, repair, and capital improvements. The Acting CFO stated that DOE is determining whether existing policy requirements are sufficient to meet SFFAS No. 6 and is developing a cost-effective approach for accumulating data from DOE field offices. The Acting CFO has also designated an individual point of contact for meeting the deferred maintenance disclosure requirement. DOE has policies in place that require field offices to estimate and document deferred maintenance amounts. However, the Acting CFO reported that there is no requirement for reporting this information to the department's headquarters. She further stated that DOE is in the process of determining the most cost-effective process for accumulating this information from the field offices and reporting it to headquarters. She stated that DOE uses condition assessment methods to determine its deferred maintenance estimates. While DOE did not provide a departmental estimate of deferred maintenance, it does have experience tracking and estimating maintenance backlogs. DOE's LCAM order establishes minimum requirements for asset management. One requirement of this order calls for the management of backlogs associated with maintenance, repair, and capital improvements, which DOE considers to be synonymous with deferred maintenance. The Acting CFO for DOE reported that approximately 50 percent of agency assets are managed using a Condition Assessment Survey (CAS) program, which follows industry standards and inspection methods. Sites using CAS can determine their deferred maintenance or maintenance backlogs from the inspection and cost estimating features of the program. Unautomated sites would base their deferred maintenance estimates on their site-specific facility inspection programs. Sites annually report estimates of deferred maintenance to individual Operations Offices as part of their performance indicators. These estimates are periodically sampled by on-site individual evaluators. The Acting CFO stated that DOE's key challenges are to (1) confirm that field sites are complying with deferred maintenance policies, (2) ensure that proper databases, inspection procedures, and cost estimating programs are being used to calculate deferred maintenance, and (3) determine the appropriate level (i.e., materiality) at which personal property should be included. Based upon DOE's representations regarding its plans, the Deputy IG stated that it appears that the disclosure will be auditable. However, the Deputy IG believes it is too early to make a definitive judgment on the readiness and capability of DOE to implement the deferred maintenance requirements. The Deputy IG stated that during the fiscal year 1996 audit, his office determined that a number of locations within DOE lacked the ability to specifically identify amounts spent on repair and maintenance. The IG's office will have the capacity to audit the deferred maintenance amounts, but stresses that doing so will further diminish its capacity to provide audit coverage for high risk areas designated by GAO and OMB. 
He also noted that coverage of high risk areas continues to decline because of diminishing personnel resources and the continued increase in statutory audit work. The Deputy IG stated that audit work related to the deferred maintenance requirement will generally address three key questions: (1) Has DOE developed cost-effective policies and procedures for developing the estimate required to support the disclosure? (2) Is the appropriate level of expertise applied to developing the estimate? and (3) Is the overall estimate reasonable, properly documented, and readily verifiable? For fiscal year 1996, DOI reported general PP&E of $16.6 billion. Of this amount, $15.9 billion is land, buildings, dams, structures and other facilities and the remaining $0.7 billion is vehicles, equipment, aircraft and other property, plant, and equipment. DOI holds approximately 1.7 percent of the federal government’s reported PP&E. Documentation and internal control deficiencies with PP&E contributed to a qualified opinion on DOI’s fiscal year 1996 financial statements. The audit opinion of the IG also stated that internal controls for PP&E at the Bureau of Indian Affairs and the National Park Service did not ensure that transactions were properly recorded and accounted for to permit reliable and prompt financial reporting. DOI received a qualified opinion on its fiscal year 1996 financial statement from its IG because the Bureau of Indian Affairs could not provide adequate documentation or reliable accounting information to support $170 million for other structures and facilities, $17 million in accounts receivable, $136 million of revenue, and $19 million of bad debt expense. The Acting CFO for DOI stated that the agency will implement the accounting requirements for deferred maintenance in fiscal year 1998. The agency also noted that it intends to include deferred maintenance disclosures in its fiscal year 1997 financial statements. To implement the deferred maintenance requirements, the Acting CFO stated that he is relying heavily on the work of the Facilities Maintenance Study Team and other agency officials charged with coordinating implementation. In March 1997, this multibureau team was tasked with seeking better methods of determining, validating, and correcting maintenance and repair needs. The Acting CFO reported that he expects the Facilities Maintenance Study Team’s report to provide the agency with current and deferred maintenance information as well as guidance on standard definitions and methodologies for improving the ongoing accumulation of current and deferred maintenance information. The Acting CFO for DOI stated that his office has not yet issued guidance for implementing the deferred maintenance reporting requirements of SFFAS No. 6. However, he noted that the work of the Facilities Maintenance Study Team is currently addressing how to standardize definitions and procedures throughout the department. The Acting CFO stated that his office has determined that the condition assessment method will be used to estimate deferred maintenance for all types of PP&E. Because information on deferred maintenance will come from individual bureaus within DOI, the Acting CFO reported that he plans to establish an appropriate working group to define condition assessment criteria and procedures for different facility types to further improve the comparability of the information generated. 
While the Acting CFO does not plan to distinguish between critical and noncritical assets in DOI’s consolidated statements, he noted that the Bureau of Reclamation does make a distinction between critical and noncritical deferred maintenance. The Acting CFO stated that DOI has experience reporting and tracking maintenance spending and deferred maintenance through the budgetary process. Each bureau has established standards and a methodology for determining which PP&E is not in acceptable condition due to deferred maintenance. However, the Acting CFO reported that DOI’s past maintenance funding requests have been tempered by available budgetary resources. Given the varying missions of the agency, “acceptable condition” has had (and will continue to have) different meanings for each bureau. The Acting CFO believes that the key challenges to implementing the deferred maintenance reporting requirements are (1) developing consistent terminology and data among the bureaus, (2) integrating the data requirements into maintenance and accounting systems, and (3) validating estimates for accuracy, while working within limited human and financial resources. The IG would not express an opinion on whether DOI would be prepared to implement the deferred maintenance requirements of SFFAS No. 6 in fiscal year 1998. The IG noted, however, that past IG work has addressed the need for several bureaus within DOI to gather maintenance information so that maintenance programs could be better managed. The IG stated that his office is planning to perform a preliminary assessment of the deferred maintenance information provided by DOI in its fiscal year 1997 financial statements. Further, deferred maintenance will be included in the IG audit of the fiscal year 1998 financial statements. Because audit requirements for deferred maintenance have not been issued, the IG stated that his office cannot make a determination regarding the skills or abilities required for its audit of information reported in accordance with this standard. However, the IG stated that it will require additional resources to audit the added financial statement requirements and cost accounting standards. The key issue in auditing deferred maintenance will be the adequacy of systems established by DOI to gather complete and accurate data. As of September 30, 1996, GSA reported general PP&E of $12.1 billion. Of this amount, $6.9 billion is buildings and leasehold improvements, $2.4 billion is construction in process, $1.7 billion is motor vehicles and other equipment, $1.0 billion is land, and the remaining $0.1 billion is telecommunications and ADP equipment. GSA holds approximately 1.3 percent of the federal government’s reported PP&E. For fiscal year 1996, GSA received an unqualified opinion on its financial statements from an IPA contracted for and monitored by the GSA Office of Inspector General. The IPA stated that the financial statements present fairly, in all material respects, the financial position of GSA. The GSA CFO stated that the agency intends to implement the accounting requirements for deferred maintenance in fiscal year 1998. He believes that agency reporting would be more meaningful and consistent if specific guidance relating to certain asset types would be provided. In this regard, he thinks that policy and coordination could be provided by the central agencies such as OMB and the Treasury, with additional guidance from GSA’s Office of Policy, Planning, and Evaluation. 
However, the GSA CFO noted that he would expect the individual services within GSA to define deferred maintenance and determine a reporting methodology, whether or not additional guidance is provided. GSA has also designated an individual responsible for leading GSA's efforts to implement this standard. GSA reported that policies for implementing the new accounting standard were inherent in its Agency Accounting Manual and in financial statement preparation guidance. GSA also provided an implementation guidance package prepared by an independent accounting firm, which includes actions to be taken and recommended completion dates in order to achieve timely implementation of the standard. The CFO stated that GSA has not determined whether it will use condition assessment or life-cycle cost to estimate deferred maintenance, nor has it determined whether it will distinguish between critical and noncritical assets. While the CFO reported that GSA does have a universe of maintenance needs, he also stated that it does not differentiate between deferred and nondeferred maintenance, nor does it have an agencywide standard maintenance plan. Instead, decisions regarding the level and frequency of PP&E maintenance are established by each GSA service. According to the CFO, each service performs maintenance on both a scheduled and an as-needed basis. For example, the CFO noted that the Public Buildings Service has maintenance plans for specific buildings in its inventory and that maintenance spending is tracked against available resources; funds for maintenance needs are included as part of GSA's annual budget request. The CFO also reported that each service determines whether PP&E is in acceptable condition, as well as when each type of PP&E ceases to be functional. The CFO considers the key challenges to implementing the deferred maintenance requirements of SFFAS No. 6 to be (1) developing consistent terminology and data among the services, (2) integrating the data requirements into maintenance and accounting systems, and (3) working with limited human and financial resources. The IG stated that at the present time, there is not enough information to form an opinion as to whether GSA will meet the deferred maintenance requirement. However, the IG stated that he is encouraged by the steps the agency has taken to date and its overall commitment to financial statement reporting. The IG noted that his office contracts with an IPA to perform the financial statement audit of GSA; hence, the IG expects to have the resources, skills, and abilities needed to audit the deferred maintenance amounts. The IG stated that the key issues in auditing deferred maintenance are those of definition and completeness. He believes that it will be necessary—but may be difficult—for the agency to obtain agreement within GSA as to when an asset is considered to be in acceptable condition. Further, he noted that GSA's internal control structure will need review to determine whether GSA has properly included all classes of assets for purposes of calculating deferred maintenance amounts. For fiscal year 1996, VA reported general PP&E of $11.1 billion. Of this amount, $7.1 billion is buildings, $1.9 billion is equipment, $1.2 billion is construction in progress, and the remaining $0.8 billion is land and other PP&E. VA holds approximately 1.2 percent of the federal government's reported PP&E. 
For fiscal year 1996, the VA Acting IG rendered an unqualified opinion on the VA's Statement of Financial Position and a qualified opinion on the Statement of Operations because his office was unable to satisfy itself as to the opening balances recorded for net PP&E and net receivables. The Acting IG stated that the Statement of Financial Position presented fairly, in all material respects, the financial position of VA. However, the Acting IG cited six reportable conditions that could adversely affect VA's ability to record, process, summarize, and report financial data. One condition cited was the need for VA to continue its efforts to refine property, plant, and equipment records. In particular, the Acting IG noted problems with correctly recording depreciation and capitalizing assets, as well as data entry errors. The VA CFO stated that the agency will implement SFFAS No. 6 in fiscal year 1998 and is in the process of developing a formal plan for implementing the deferred maintenance requirements. He also noted that his office is developing guidance to communicate deferred maintenance reporting requirements for fiscal year 1998. The CFO stated that VA does not defer maintenance on medical devices or critical hospital systems but that deferred maintenance exists on noncritical building systems such as parking lots, roads, grounds, roofs, and windows. The CFO has designated an individual in charge of implementing the reporting requirements for deferred maintenance. The CFO provided a copy of a draft policy intended to establish financial accounting guidance for PP&E, noting that this policy is currently being circulated throughout the department for concurrence. VA's draft policy defines deferred maintenance as in SFFAS No. 6 and reiterates the reporting requirements for a footnote disclosure. The CFO also stated his intent to provide more detailed guidance to individual units within the agency. The CFO plans to set up a statistical account in VA's general ledger to record deferred maintenance information as policy is implemented throughout the department. The CFO is leaning towards establishing condition assessment as the method for determining deferred maintenance and plans to use the same methodology for all types of assets. The CFO also intends to require each individual unit within VA to review and classify its deferred maintenance into the various categories of PP&E and then record material amounts in various representative statistical accounts. The CFO plans to roll these categories and totals up to the departmental level, thus providing the basis for meeting the disclosure requirements of SFFAS No. 6. The CFO noted that VA's experience with deferred maintenance reporting is largely decentralized. While the VA does not have an agencywide maintenance plan, it does have a standard policy requiring maintenance of PP&E. For example, VA Medical Centers follow maintenance schedules developed from equipment manufacturers' information. Each VA unit estimates its operational funding, which includes maintenance. Individual VA administrations determine the acceptable condition of each type of PP&E based on its mission. In the case of critical hospital equipment, the VA CFO asserted that maintenance is not deferred since a hospital environment requires that equipment be operational and that items be kept in working condition for safety and reliability reasons. 
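The roll-up the CFO describes can be pictured with a minimal sketch (Python; the unit names, PP&E categories, dollar amounts, and the materiality cutoff are hypothetical, introduced only for illustration): each unit classifies its deferred maintenance by PP&E category, material amounts are posted to statistical accounts, and the department sums those accounts for the footnote disclosure.

from collections import defaultdict

MATERIALITY_THRESHOLD = 50_000  # hypothetical cutoff for posting to a statistical account

# Hypothetical unit-level deferred maintenance reports.
unit_reports = [
    {"unit": "Medical Center A",  "category": "Buildings",              "amount": 750_000},
    {"unit": "Medical Center A",  "category": "Parking lots and roads", "amount": 40_000},
    {"unit": "Medical Center B",  "category": "Buildings",              "amount": 525_000},
    {"unit": "Regional Office C", "category": "Roofs and windows",      "amount": 90_000},
]

statistical_accounts = defaultdict(float)
for report in unit_reports:
    if report["amount"] >= MATERIALITY_THRESHOLD:  # immaterial amounts are not posted
        statistical_accounts[report["category"]] += report["amount"]

print("Departmental disclosure by PP&E category:", dict(statistical_accounts))
print("Total deferred maintenance disclosed:", sum(statistical_accounts.values()))

The value of keying the statistical accounts to PP&E categories, as the draft policy contemplates, is that several hundred units can post amounts independently while the department still produces a single, category-level disclosure.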
The CFO reported that VA’s biggest challenges in implementing the deferred maintenance requirements will be to (1) get VA’s several hundred units to provide comparable deferred maintenance data and (2) determine the most efficient and effective method to estimate and report required data for deferred maintenance. The Acting IG expressed confidence that VA will implement the deferred maintenance requirement. The Acting IG stated that this confidence is based on the CFO’s progress over the past 4-5 years addressing issues raised by the IG in audits. The Acting IG cited VA improvements in accounts receivable and PP&E as examples. The Acting IG stated that his office will audit a deferred maintenance amount if it is a material item. The Acting IG believes that his office has the necessary resources or would be able to get an independent engineer or consultant to assist in evaluating any deferred maintenance issues beyond the expertise of his office. The Acting IG believes that the key audit requirements are an audit trail, a good system of information, and the ability of his office to test the records. The Acting IG stated that one challenge will be whether the VA issues ground rules to facilities so that consistency will occur among the 173 Medical Centers and other units. The Acting IG also noted that it is difficult to draw a line between critical and noncritical assets and that this distinction may vary among different administrations of VA. Thus, the Acting IG stated that his office would need to review each administration’s definition of “critical.” For fiscal year 1996, USDA reported general PP&E of $8.6 billion. Of this amount, about $5.9 billion is land; $1.8 billion is structures, facilities, and leasehold improvements; $0.8 billion is equipment; and the remaining $0.1 billion is ADP software and other PP&E. USDA holds approximately 0.9 percent of the federal government’s reported PP&E. Problems with PP&E reporting contributed to a disclaimer of opinion on USDA’s fiscal year 1996 consolidated financial statement. In particular, the Forest Service, which has over 90 percent of USDA’s PP&E, was unable to provide complete auditable financial statements for fiscal year 1996. The IG also noted that problems with fiscal year 1995 Forest Service reporting resulted in an adverse opinion due to pervasive errors, material or potentially material misstatements, and/or departures from applicable accounting principles. For fiscal year 1996, the USDA IG issued a disclaimer of an opinion on the consolidated financial statements of USDA. The IG stated that his office did not attempt to audit the statement of financial position and related statements because of problems with Forest Service reporting. In addition to the Forest Service being unable to provide complete auditable financial statements, the IG also stated that the Secretary of Agriculture reported that the department could not provide assurance that, as a whole, agency internal controls and financial management systems comply with Federal Managers’ Financial Integrity Act (FMFIA) requirements. As a result, USDA’s fiscal year 1996 consolidated financial statements were prepared using the Forest Service’s fiscal year 1995 account balances and activity. With regard to Forest Service accounting for PP&E, the IG cited inadequate documentation, pervasive instances of errors, and material control weaknesses. The Acting CFO for USDA stated that the agency elected to implement SFFAS No. 
6 beginning in fiscal year 1997 and anticipates it will be substantially implemented for fiscal year 1998 as required. His office provided its draft plan for implementing the standard, which is outlined in chapter 9 (“Property, Plant, and Equipment”) of the USDA Financial and Accounting Standards Manual (FASM). The Acting CFO noted that this chapter conforms with SFFAS No. 6, but does not include milestones. He stated that suitable milestones will be developed during fiscal year 1998. With regard to designating an official responsible for ensuring the implementation of the deferred maintenance requirements, the Acting CFO stated that he makes departmentwide determinations on financial statement reporting and oversees compliance with SFFAS No. 6. In addition, he has identified responsible officials in the two agencies that control the material portion of USDA PP&E—the Agricultural Research Service (ARS) and the Forest Service. The Acting CFO believes that his agency needs additional guidance in the following areas: (1) OMB budget requirements related to maintenance and deferred maintenance, (2) general criteria for critical and noncritical deferred maintenance, (3) setting priorities for deferred maintenance, (4) minimum standards for condition assessment surveys and for life-cycle costing, (5) criteria for distinguishing between deferred maintenance and reconstruction, and (6) the point at which a need to perform deferred maintenance becomes a need for a new asset. USDA has established broad policies for implementing the deferred maintenance requirements of SFFAS No. 6. The Acting CFO stated that rather than arbitrarily restricting the USDA agencies to only one method of estimating deferred maintenance, USDA policy allows the use of any of the methods described in SFFAS No. 6. The Acting CFO believes that the agencies within USDA are in the best position to determine which of the allowable methods is most appropriate for their particular agency. The Acting CFO stated that these policies provide a great deal of discretion to individual agencies within USDA. For example, each agency has the option of selecting condition assessment and/or life-cycle cost methodologies as allowed by SFFAS No. 6. The USDA Acting CFO expressed his belief that the agency has described the allowable measurement methodologies (e.g., condition assessment or life-cycle cost) but stated that it must give more attention to communicating the methodologies to those making the estimates. He stated that his office intends to more clearly communicate methodology requirements during fiscal year 1998. The FASM does establish mutually exclusive major classes and base units of measurement for general, heritage, and stewardship land PP&E. It also requires that management distinguish between critical and noncritical deferred maintenance and disclose the basis for distinguishing between the two. Both agencies with the material portion of USDA PP&E—the Forest Service and ARS—have some experience with deferred maintenance reporting and have estimates for deferred maintenance. We were told that ARS has a standard maintenance plan for facilities, called the ARS 10-Year Facility Plan. The USDA Acting CFO stated that the plan provides the framework for future decision-making, setting priorities, and allocating resources to implement necessary improvement, maintenance, modernization, and repairs to ARS research facilities.
We were also told that ARS uses information from USDA’s Central Accounting System to update its 10-Year Facility Plan each year and to ascertain that spending is in accordance with congressional intent. According to the Acting CFO, ARS routinely estimates its funding needs for repairs and maintenance each fiscal year. In contrast, individual Forest Service managers are responsible for assessing the condition of their PP&E and obtaining the funding needed for maintenance. We were told that Forest Service managers also inspect the actual condition of PP&E, using a building and facility handbook that broadly defines maintenance levels ranging from level 1, not in operation, to level 5, major offices and high-use areas. ARS and Forest Service provided deferred maintenance estimates. The Acting CFO for Forest Service cautioned that the Forest Service estimates were overstated per SFFAS No. 6 because they included activities needed to expand, upgrade, or reconstruct a facility. The USDA’s Acting CFO noted that both agencies are scheduled to update these estimates for fiscal year 1998. He believes that the current estimates of deferred maintenance may not comply fully with SFFAS No. 6. According to USDA’s Acting CFO, the department’s key challenges to comply with the new deferred maintenance accounting requirements include (1) communicating applicable requirements, (2) documenting the correct and consistent use of allowable methodologies, (3) establishing an accurate physical inventory of PP&E, and (4) obtaining guidance from central agencies. The IG stated that, assuming that agencies continue to emphasize financial management, and the Forest Service completes its inventory of PP&E, USDA should be prepared to implement the requirement for fiscal year 1998. At the time of our review, the IG noted that the department has a draft policy for implementing the deferred maintenance requirements that adequately mirrors requirements addressed by SFFAS No. 6. However, the IG also emphasized that deferred maintenance estimates are contingent on a complete and accurate inventory of PP&E. While efforts are underway within the Forest Service (the largest PP&E holder for USDA) to develop complete inventories and supportable valuations for its PP&E, the IG stated that these efforts may or may not be completed by the end of fiscal year 1998. The IG said that if the Forest Service cannot complete its efforts and accurately report PP&E in its financial statements, little reliance can be placed on estimates reported for the associated deferred maintenance. The IG stated that the key issues in auditing the deferred maintenance requirements are determining if the estimation methodology is appropriate, applied correctly, applied consistently, and adequately documented. And, since deferred maintenance estimates are contingent on a complete and accurate inventory of PP&E, audit results addressing PP&E will play an important role in his office’s evaluation of the deferred maintenance estimates. For fiscal year 1996, the Department of State reported general PP&E of $4.6 billion. Of this amount, $2.3 billion is land and land improvements, $1.9 billion is capital improvements and buildings, structures, facilities, and leaseholds, and the remaining $0.3 billion is construction in progress and vehicles and other equipment. State holds approximately 0.5 percent of the federal government’s reported PP&E. For fiscal year 1996, State received a qualified opinion on its consolidated financial statements from an IPA. 
The IPA noted that, except for any adjustments that might have been necessary had it been possible to review undelivered orders, State’s 1996 Consolidated Statement of Financial Position is presented fairly in all material respects. The IPA also stated that the undelivered orders scope limitation did not affect the Consolidated Statement of Financial Position for the department. The State CFO intends to implement the deferred maintenance requirements for the fiscal year 1998 financial statements. He cited several actions by the agency to prepare for deferred maintenance reporting. The CFO reported that State has contracted with a firm to provide recommendations on implementing the new federal accounting standards, including SFFAS No. 6. The CFO has also designated individuals from the Bureau of Finance and Management Policy to determine the reporting policy and (in conjunction with the IG Office) oversee compliance. The CFO for State reported that the agency has not yet developed a formal policy on implementing the deferred maintenance requirements of SFFAS No. 6, but expects to develop a policy by April 1998. The CFO stated that he expects, but has not decided, to use the condition assessment method, because the process is substantially the same as the one now used for determining maintenance requirements. The CFO further noted that he does not plan to disclose critical and noncritical assets on the financial statements. The CFO reported that State has experience with estimating maintenance needs through the budget process. For example, he noted that each overseas mission prepares an annual budget for routine maintenance and repair and special maintenance requirements and submits it to the Office of Foreign Buildings Operations (A/FBO). The CFO said that A/FBO allots funds to overseas missions to carry out maintenance activities. He also reported that appropriations for specific maintenance projects are requested as line items and are included in the functional programs’ budget submissions to OMB and the Congress. He noted that A/FBO then compares actual costs of implementing these maintenance projects with the budget established for their execution. The CFO reported that currently State has the results of condition assessment and other surveys recorded in a database. Offices review, determine priority ratings, and develop cost estimates to implement requirements. The CFO reported that these priority ratings are used to balance requirements with available resources. Unfunded requirements are then listed as State’s maintenance and infrastructure repair deficit. He cautioned, however, that State’s backlog includes repair and replacement requirements and minor property improvements, and therefore goes beyond deferred maintenance. He further stated that routine maintenance requirements at individual posts are not broken out or tracked separately. In addition, the CFO noted that State’s estimates may not separate deferred maintenance from current maintenance requirements. In addition, he noted that the current estimates also include some improvements and other projects that would not fall within the scope of the accounting standard’s definition of deferred maintenance. The CFO described the primary problem of implementing the deferred maintenance requirements of SFFAS No. 6 as being one of definition. For example, he noted problems with defining deferred maintenance and determining when maintenance requirements are past due. 
He also cited the complexity and cost of maintaining current data for a program responsible for management oversight of more than 3,000 properties in 260 locations worldwide. Based upon the work that the CFO’s staff has done, the IG believes that State will be able to implement the deferred maintenance requirement. However, until her office reviews the deferred maintenance information reported for fiscal year 1998, the IG is unable to determine if the information provided will be reliable. The IG reported that work with the CFO has focused on establishing a process to track deferred maintenance so that auditable information would be available for the fiscal year 1998 financial statements. The IG stated that her office would be able to audit the amount presented on the fiscal year 1998 statements, and she anticipates that she will have adequate resources and abilities to review the deferred maintenance amount. Although the IG has not yet developed an audit plan for deferred maintenance, she noted that key issues for auditing the deferred maintenance amounts included reviewing the (1) methodology established to value deferred maintenance, (2) qualifications of officials making the determination of the value of deferred maintenance, and (3) completeness and accuracy of the deferred maintenance amounts. TVA follows private sector practices in its financial statement reporting. However, TVA is included in the governmentwide financial statements and will be subject to reporting deferred maintenance under SFFAS No. 6 if reported amounts prove material to the governmentwide statements. As of September 30, 1996, TVA reported general PP&E of $30.4 billion. Of this amount, $22.2 billion is completed plant, $6.3 billion is deferred nuclear generating units, $1.1 billion is nuclear fuel and capital lease assets, and $0.8 billion is construction in progress. As of September 30, 1996, TVA held approximately 3.2 percent of the federal government’s reported PP&E. For fiscal year 1996, TVA received an unqualified opinion from an IPA. The IPA found that TVA’s financial statements present fairly, in all material respects, the financial position of the power program and all programs of TVA. Prior to being contacted by GAO, the TVA CFO stated that he was unaware of the SFFAS No. 6 reporting requirement for deferred maintenance. The CFO noted that TVA does comply with the U.S. Treasury’s request for TVA financial statement information, which is then consolidated within the U.S. government financial statements. However, the CFO stated that as of December 1997, TVA had not heard from Treasury regarding compliance with the deferred maintenance requirement in SFFAS No. 6. TVA’s existing definitions of deferred maintenance vary by program: one definition encompasses repair, replacement, or abandonment costs, while for land management TVA defines deferred maintenance as the delay or postponement of needed repairs or refurbishment, and as maintenance that was not performed at the scheduled time and continues to accumulate. However, should TVA be required to report deferred maintenance for the consolidated U.S. government financial statements, the CFO reported that TVA would comply. The CFO indicated that TVA could provide an estimate of deferred maintenance with about 2 months’ notice. The TVA CFO reported that, for the most part, the agency does have standard maintenance plans for its different types of PP&E. For example, the very nature of the agency’s nuclear power program requires sites to comply with Nuclear Regulatory Commission regulations to ensure that plants can operate safely and that equipment is not degraded.
He also noted that fossil and hydroelectric fuel programs have standard maintenance plans and use an automated Maintenance Planning and Control system for scheduling and tracking of maintenance work. The TVA CFO noted that the agency has the systems to track PP&E related to fossil fuel and building facilities deferred maintenance, but does not have data on hydroelectric power, transmission of power, land management, and certain recreation areas. The CFO also noted that TVA has qualified in-house expertise in all operations areas who could estimate deferred maintenance. The USPS follows private sector practices in its financial statement reporting. However, USPS is included in the governmentwide financial statements and will be subject to reporting deferred maintenance under SFFAS No. 6 if reported amounts prove material to the governmentwide statements. For fiscal year 1996, USPS reported general PP&E of $17.9 billion. Of this amount, $8.3 billion is structures, facilities, and leasehold improvements; $5.9 billion is equipment; $2.1 billion is land; and $1.6 billion is general construction in progress. USPS holds approximately 1.9 percent of the federal government’s reported PP&E. For fiscal year 1996, USPS received an unqualified opinion on its financial statements from an IPA. The IPA found that the financial statements present fairly, in all material respects, the financial position of USPS. The USPS CFO stated that the agency does not have any deferred maintenance and, therefore, will not need to disclose a figure for purposes of the governmentwide financial statements. The USPS CFO indicated that he has not been contacted by officials from Treasury with regard to deferred maintenance reporting requirements. The CFO stated that USPS has standard maintenance plans provided in agency handbooks and policies for its primary types of assets, such as buildings, vehicles, and equipment. These plans or schedules include detailed records of maintenance required for each building, vehicle, and piece of equipment in operation. However, the CFO stated that the agency does not own airplanes or ships to move the mail; thus for these activities, required maintenance is performed by the contractors as stipulated in the contracts. For example, the CFO stated that USPS leases planes from a contractor for its overnight and priority mail. The contractor is motivated to keep its planes regularly maintained because USPS can impose fines for late or nondelivery of mail caused by not properly maintaining such equipment. The USPS has extensive experience with reporting maintenance on its buildings, vehicles, and equipment. USPS buildings, vehicles, and equipment must be regularly maintained so that the mail service operates promptly and smoothly. The USPS CFO did not report any challenges to implementing the deferred maintenance requirements because he does not believe the agency has any deferred maintenance. The following are GAO’s comments on the Department of Transportation’s letter. 1. Discussed in the “Agency Comments and Our Evaluation” section. 2. We considered DOT’s suggestions and have modified the report as appropriate.
Pursuant to a congressional request, GAO reviewed: (1) the plans and progress of 11 agencies toward implementing the new deferred maintenance requirements of the Statement of Federal Financial Accounting Standards (SFFAS) No. 6; and (2) the official position of agency Chief Financial Officers (CFO) and Inspector Generals (IG) with respect to its implementation. GAO noted that: (1) agency officials at the 9 agencies specifically required to implement the standard for fiscal year (FY) 1998 told GAO that they intend to comply with the deferred maintenance requirements of SFFAS No. 6; (2) if effectively implemented, the new federal accounting requirements will improve information on the maintenance of federal assets; (3) accurate reporting of deferred maintenance is an important step toward more informed decision-making; (4) by improving the validity of information on maintenance, the disclosure of deferred maintenance has the potential to improve both the allocation of federal resources and, ultimately, the condition of federal assets; (5) the federal requirement to disclose deferred maintenance amounts presents agencies with a new challenge for which they must adequately prepare; (6) some initial steps have been taken, but significant work remains to be done for all agencies to effectively implement the deferred maintenance requirements promptly; (7) 4 of the cognizant IGs expressed confidence that their respective agencies would implement the deferred maintenance requirements and 5 expressed reservations or were reluctant to assess agency progress; (8) although most agencies do not have experience generating agencywide estimates of deferred maintenance because historically they have not been required to do so, all agencies reported that they have estimated maintenance for ad hoc and budgetary purposes; (9) a critical step in generating a deferred maintenance estimate is a complete and reliable inventory of property, plant and equipment (PP&E) on which to assess maintenance needs; (10) the results of the FY 1996 financial audits show that 4 agencies are hampered in their efforts to report deferred maintenance because they have been unable to fully report PP&E reliably; (11) the Department of Defense holds about 80 percent of the federal government's PP&E, and it faces significant issues to implement the deferred maintenance requirements; (12) even for agencies where the independent audits indicated no report modifications that pertained to PP&E, the deferred maintenance requirements present a significant challenge; (13) the flexibility in SFFAS No. 6 increases the need for agencies to develop departmental policies and guidance that are compatible with agency mission and organizational structure; and (14) adequate data collection and tracking systems will be necessary to gather and verify information on deferred maintenance amounts.
Under the United States Housing Act of 1937, as amended, Congress created the federal public housing program to provide decent and safe rental housing for eligible low-income families, the elderly, and persons with disabilities. HUD administers federal aid to local public housing agencies that manage housing for low-income residents at rents they can afford. More specifically, 3,150 public housing agencies manage approximately 1.2 million public housing units throughout the nation, of which approximately 1 million are occupied. Public housing comes in all sizes and types, from scattered single-family houses to high-rise apartments. Funding for public housing construction, renovation, or operation can come from a number of HUD programs, as well as other government and private sources. HUD’s Public Housing Capital Fund (Capital Fund) provides funds (distributed by formula) for activities such as redesign, reconstruction, improvement of accessibility, and replacement of obsolete utility systems. The fiscal year 2005 appropriation for the Capital Fund was about $2.4 billion. HUD’s Public Housing Operating Fund (Operating Fund) provides operating subsidies to housing agencies to help them meet operating and management expenses. The fiscal year 2005 appropriation for the Operating Fund was about $2.4 billion. In addition, between fiscal years 1993 and 2005, Congress appropriated $6.8 billion for the HOPE VI program, which HUD awarded to public housing agencies for planning, technical assistance, construction, rehabilitation, demolition, and housing choice voucher assistance. While most of the funds are intended for capital costs, a portion of the revitalization grants may be used for community and supportive services. In addition, public housing agencies use the HOPE VI revitalization grant to leverage additional funds from sources such as other HUD funds, state or local contributions, or public and private loans. In 2002, we reported that housing agencies expected to leverage—for every dollar received in HOPE VI revitalization grants awarded through fiscal year 2001—an additional $1.85 in funds from other sources. We also found that housing agencies that had received revitalization grants expected to leverage $295 million in additional funds for community and supportive services. In addition to leveraging funds from a variety of sources, housing agencies may use Low-Income Housing Tax Credits—which are federal tax credits for the acquisition, rehabilitation, or new construction of affordable rental housing—as well as Medicaid Home and Community-Based Services waivers, which allow flexibility in providing healthcare or long-term care services to Medicaid-eligible individuals outside of an institutional setting. Residents of public housing who are elderly or have disabilities may have more special needs, compared with other residents, due to their age and type of disability. According to a 2002 study by the Housing Research Foundation, elderly public housing residents are more likely to be “frail” or have disabilities, compared with other elderly persons not living in public housing. The researchers reported that more than one in five elderly public housing residents were classified as persons with disabilities, compared with only 13 percent of U.S. elderly persons. 
In addition, the report found that over 30 percent of elderly public housing residents have at least one functional problem, such as difficulty with cooking, seeing, and hearing, compared with just over 20 percent of all elderly persons. Some elderly persons or persons with disabilities may require assistance with the basic tasks of everyday life, such as eating, bathing, and dressing. In addition, the needs of the elderly or persons with disabilities result in a need for physical features in residences that adequately accommodate physical limitations. According to 2005 HUD data, 64 percent of the approximately 1 million occupied public housing units are occupied by at least one elderly person or a person with a disability, and 50 percent of all heads of public housing households are either elderly (31 percent) or non-elderly persons with disabilities (19 percent), as shown in figure 1. Residents who are elderly or have disabilities live in a variety of public housing settings, including developments that are occupied primarily by elderly residents or residents with disabilities as well as developments that are occupied primarily by families. According to 2005 HUD data, of approximately 500,000 public housing units that are occupied by a head of household who is elderly or has a disability, 47 percent are in developments that are occupied primarily by elderly persons or persons with disabilities, 40 percent are in developments that are occupied primarily by families (family housing developments), and 13 percent are in developments that include buildings that are occupied by families and buildings that are occupied by elderly persons and persons with disabilities (mixed developments). While HUD collects data for several elements describing the physical and social conditions that exist at its public housing developments, the data do not sufficiently establish whether a housing development is severely distressed. Based on survey responses from public housing directors— covering 66 housing developments with indications of potential distress and occupied primarily by the elderly or persons with disabilities—we found that 11 developments exhibited signs of severe physical distress; 12 had signs of severe social distress; and an additional 5 developments had signs of both severe physical and social distress. Although the remainder of the 66 developments had fewer signs of severe distress, the public housing directors we surveyed pointed out several conditions that adversely affected the quality of life for their tenants who are elderly or have disabilities. The factors they most frequently cited were (1) aging buildings and systems, including inadequate air-conditioning; (2) lack of accessibility for residents with disabilities; (3) small studio apartments; (4) tension between elderly residents and non-elderly residents with disabilities; (5) lack of supportive services; and (6) security and crime issues. As previously discussed, Congress expanded the statutory definition of “severely distressed public housing” in 2003 to include, among other factors, housing developments in severe distress because of a lack of sufficient appropriate transportation, supportive services, economic opportunity, schools, civic and religious institutions, and public services. However, HUD data do not indicate whether a development has these kinds of public and other supportive services. HUD collects, maintains, and analyzes data on public housing primarily through a database system and a management center. 
HUD uses the Public and Indian Housing Information Center (PIC) system—which was designed to facilitate Web-based exchange of data between public housing agencies and local HUD offices—to monitor the housing agencies, detect fraud, and analyze and provide information to Congress and other interested parties. PIC contains a detailed inventory of public housing units and tenant (household) information about occupants. For example, the PIC database maintains information on the number of developments and units, age of the development, extent to which apartment units are accessible for persons with disabilities, and tenant information such as the age, disability status, and income of families who participate in public housing programs. HUD’s Real Estate Assessment Center (REAC) monitors and evaluates the physical condition of public housing and other properties that receive financial assistance from HUD and also assesses their financial condition. For example, the Physical Assessment Subsystem within REAC maintains information about the physical condition of HUD properties, based on on-site physical inspections, which identifies housing developments that are physically deteriorated, have health and safety hazards, or have deficiencies such as tripping hazards on sidewalks or parking lots, damaged fences or gates, blocked emergency exits, or inoperable smoke detectors inside apartments. Using the limited data that were available from HUD and other sources, we defined eight measures to indicate potential severe distress for developments: (1) REAC physical inspection results; (2) adjusted physical inspection results provided by the Urban Institute; (3) building age; (4) vacancy rate; (5) total household income by unit; (6) poverty rate for the census tract; (7) accessibility of units to persons with disabilities; and (8) whether developments applied for HOPE VI or were approved for demolition, disposition, or HOPE VI funding. As noted previously, we then developed an “index of distress” to score conditions at public housing developments. We found that 76 (2 percent) of the 3,537 housing developments mainly occupied by the elderly and non-elderly persons with disabilities showed indications of severe distress. In contrast, other developments were more likely to show indications of severe distress: 958 (12 percent) of 7,932 family housing developments and 69 (15 percent) of 466 mixed housing developments showed such indications. In addition, some public housing directors we interviewed reported that family housing developments near or adjacent to their developments occupied primarily by elderly residents and residents with disabilities were more likely to be in worse condition or afflicted by neighborhood crime or illicit activities. According to HUD’s data, the following characteristics describe the 76 housing developments that were occupied mostly by elderly persons and non-elderly persons with disabilities: 21 had been approved for demolition, disposition, or HOPE VI funding; 72 had a building that was more than 30 years old; 64 had few units (less than 5 percent) that met accessibility standards; 24 had a physical inspection score under 60 percent; 41 were in a census tract with a poverty rate greater than 35 percent; and 26 had households with a total median income under $7,000.
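This section does not spell out how the eight measures were combined into the “index of distress,” so the following is only a minimal sketch, assuming the index is a simple count of threshold flags. The thresholds for inspection score, building age, accessibility, census-tract poverty rate, and median household income echo figures cited in this section; the vacancy-rate threshold, the treatment of the Urban Institute-adjusted score, and the cutoff of four or more flags are illustrative assumptions, not GAO’s actual method.

# Illustrative (hypothetical) sketch of scoring an "index of distress" from the
# eight measures named in the report. Thresholds marked "(cited)" come from
# figures in this section; the rest are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Development:
    reac_score: float               # REAC physical inspection score (0-100)
    adjusted_score: float           # Urban Institute-adjusted inspection score (assumed same scale)
    building_age_years: int         # age of the development's oldest building
    vacancy_rate: float             # fraction of units vacant (0-1)
    median_household_income: float  # total median household income, dollars
    tract_poverty_rate: float       # census-tract poverty rate (0-1)
    accessible_unit_share: float    # fraction of units meeting accessibility standards
    demolition_or_hope_vi: bool     # applied for HOPE VI or approved for demolition/disposition

def distress_flags(d: Development) -> int:
    """Count how many of the eight measures suggest potential severe distress."""
    flags = [
        d.reac_score < 60,                  # inspection score under 60 (cited)
        d.adjusted_score < 60,              # same threshold applied to adjusted score (assumption)
        d.building_age_years > 30,          # building more than 30 years old (cited)
        d.vacancy_rate > 0.20,              # assumed vacancy threshold
        d.median_household_income < 7_000,  # median income under $7,000 (cited)
        d.tract_poverty_rate > 0.35,        # poverty rate greater than 35 percent (cited)
        d.accessible_unit_share < 0.05,     # fewer than 5 percent accessible units (cited)
        d.demolition_or_hope_vi,            # demolition, disposition, or HOPE VI activity (cited)
    ]
    return sum(flags)

def shows_indications_of_severe_distress(d: Development, cutoff: int = 4) -> bool:
    # The cutoff of four flags is purely illustrative.
    return distress_flags(d) >= cutoff

Under these assumptions a development is flagged only when several measures point the same way, which matches the section’s framing of the measures as indications of potential severe distress rather than definitive findings.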
Responses to our survey of public housing directors indicated that some of the 76 public housing developments occupied primarily by elderly persons and non-elderly persons with disabilities were severely distressed and that, among those that were not, certain characteristics nevertheless adversely affected the quality of life for their residents. We received responses covering 66 of these 76 developments and found that 11 showed signs of severe physical distress, 12 had signs of severe social distress, and 5 others had signs of both physical and social distress. In developments where survey data indicated signs of severe distress, housing directors reported deterioration and obsolescence in key systems. However, housing directors described the condition of the physical structures at 34 developments as either “not at all deteriorated” or “a little deteriorated” (see fig. 2). Indicators of severe social distress that the directors reported include inadequate supportive services, such as transportation and assistance with meals, and problems with crime. Even though not necessarily indicative of severe distress, a number of factors were reported by many public housing agency directors as adversely affecting living conditions for the elderly and persons with disabilities. Among the most frequently cited characteristics or conditions were aging buildings, lack of accessibility for residents with disabilities, small size of apartments, mixing elderly and non-elderly residents with disabilities, the lack of supportive services, and crime. To varying extents, the survey respondents also cited these factors as challenges in providing public housing (see fig. 3). Eleven surveyed housing agency directors mentioned that aging buildings posed maintenance and other challenges for their housing agencies—nearly all (96 percent) of the developments that we surveyed were more than 30 years old. Some buildings had deteriorating structures, as shown in figure 4. In addition, several public housing agency officials noted during our site visits and in our survey that because of their age, the developments were “functionally obsolete.” That is, many of the design features were outdated and did not meet the needs of residents. For example, 11 of the survey responses cited lack of adequate air-conditioning as a condition that most adversely affected the elderly and persons with disabilities. The building manager at one development said that during the summer months some elderly tenants who have heart conditions face increased health risks because their apartments do not have air-conditioning. At another development, an antiquated steam system provided heating; the public housing agency official whom we spoke with said this contributed to exorbitant utility bills. In addition to outdated systems, housing agency officials also cited outdated building designs as affecting the quality of life. For example, we visited two high-rise buildings that were more than 30 years old and constructed with exterior walkways, which residents had to use to access their apartments. During the winter months residents were routinely exposed to extremely cold weather and snow (see fig. 5). In addition, one public housing agency official whom we spoke with said that high-rise buildings limit social interactions among elderly residents.
Due to the age of the buildings, public housing agency directors who responded to our survey reported that most of the 66 developments were undergoing, or will need, demolition, replacement, renovation, or rehabilitation (see fig. 6). Of the 66 developments for which we received responses, 11 have been or will be demolished or replaced; 21 had building systems (such as air-conditioning and elevator systems) that recently were or currently are being renovated; and 28 will require renovation to building systems within 3 years, according to housing agency directors. Respondents most frequently indicated that plumbing and sewer systems, elevators, and exterior building doors required near-term replacement or renovation. Other systems or features that were cited nearly as frequently were site lighting, parking lots, and heating and hot water systems. (Because our survey targeted developments that were most likely to be distressed, these conditions may not be representative of public housing for the elderly and persons with disabilities in general.) Public housing agency directors reported that a lack of accessibility throughout their developments was one condition that most adversely affected the quality of life for the elderly and persons with disabilities. For example, directors reported that 13 developments had elevators that were not large enough to allow a person in a wheelchair to easily turn around (see fig. 7). Our survey results also found that some developments did not have entrance and lobby doorways wide enough to allow passage for a person in a wheelchair or power scooter. We visited one housing development that had hallways on the main floor that were too narrow for modern power scooters to pass one another. According to a public housing agency official from this development, narrow halls are a problem because about one-third of the residents at the housing development use power scooters. This development also had a wheelchair ramp at the building’s entrance that was too narrow for power scooter users to easily navigate, and we observed power scooter users making difficult three-point turns on the narrow ramp. Additionally, six developments we surveyed did not have ramps of any kind for persons using wheelchairs or power scooters. Moreover, according to our survey, 23 developments had entrance and lobby hallways without grab bars. According to professionals knowledgeable about the housing needs of the elderly and persons with disabilities, grab bars or handrails in hallways are important because they help prevent falls, which are potentially disabling or fatal events. Based on our survey responses, housing agency directors for 32 developments indicated that less than 5 percent of their units were accessible. During our visit to one housing development, the building manager told us that none of the apartment units were accessible to persons with disabilities; therefore, prospective residents with special needs were referred to another building within the housing agency’s portfolio. Housing directors reported that small studio apartments adversely affected the quality of life for the elderly and persons with disabilities at six developments and represented a major challenge for five housing agencies.
One of the building managers that we interviewed noted that elderly residents who live in studio apartments sometimes do not have enough room for a lifetime’s worth of possessions and often have difficulty finding space for other family members, such as grandchildren, for whom the residents may serve as primary caregivers. In response to our survey, 17 public housing agency directors reported that a mixed population of elderly residents and younger residents with disabilities represented a challenge at their developments. During our visits to housing developments, housing agency officials and building managers told us that the mixed resident population sometimes led to tension because residents from each group often lead different lifestyles. In addition, many of the elderly residents that we interviewed told us that younger residents were more likely to have late-night visitors, play loud music, and lead active lifestyles, while they preferred quieter activities. Resident leaders at one development we visited told us that some elderly residents did not spend time in the common areas because they feared younger residents. Another elderly resident told us that some younger residents in his development robbed and terrorized the older residents. Further, officials that we interviewed also said that younger residents with disabilities sometimes have mental health conditions the housing agencies were not equipped to address. More specifically, building managers and residents told us that residents with mental health disabilities often disturbed other residents if they did not take proper medication. We found that at 29 of the developments for which we received survey responses, elderly residents made “very frequent” or “somewhat frequent” complaints about younger residents with disabilities. Conversely, at 59 of the developments, younger residents with disabilities made complaints about elderly residents “a little” or “not at all.” Thirteen surveyed public housing agency directors mentioned that providing adequate supportive services was a challenge. Most of the developments we visited and surveyed had some on-site supportive services, which assist with activities of daily living and are intended to help the elderly and persons with disabilities remain independent and in their communities (see fig. 8). However the array of supportive services varied and often could not be characterized as meeting the needs of residents. According to a HUD report on housing needs for the elderly, residents’ needs for greater assistance, such as that offered by a nursing home, may increase as a result of inadequate supportive services. Many of the building managers and residents that we interviewed told us that residents who moved out of the public housing development often moved in with family or to a nursing home because the development lacked sufficient supportive services. According to data from one public housing agency director, of 21 residents who relocated from one of the public housing developments during the 2004 calendar year, 6 moved into a nursing home. Although 28 of the developments from which we received survey responses had some type of on-site medical or health services, these varied from development to development because not all of the developments with health services offered assistance with medication. 
According to professionals knowledgeable about the housing needs of the elderly and non-elderly persons with disabilities, having a nurse or healthcare professional at the development to help residents manage their medications is beneficial. The elderly and non-elderly persons with disabilities also often need assistance with housekeeping, personal care, and meals. One building manager at a development we visited told us that the residents without nearby family often needed assistance with housekeeping. During one of our visits, we observed a resident receiving assistance with housekeeping. At another development, the housing agency officials told us that residents appreciated the services from an on-site hair salon. According to our analysis of our survey data, 34 developments offered on-site meal preparation services. One building manager at a development we visited told us that on-site lunch programs were often the only hot meal of the day for some residents. Building managers at other developments indicated that many of their residents can no longer safely cook. According to our analysis of survey responses, most of the developments offered recreational activities for the elderly or non-elderly persons with disabilities. Furthermore, residents we interviewed reported that recreational activities, such as outings, or organized potluck dinners, were important to their quality of life. One public housing agency official with whom we spoke said that many elderly residents do not have family nearby and without planned activities at the housing development many would never leave their apartments. According to one study on public housing for the elderly, up to a third of elderly residents living in public housing in New York almost never left their apartments. During our visits to 25 housing developments, we observed on-site activities such as arts and crafts workshops and sewing and computer classes. Many of the buildings also had libraries, television rooms, and exercise rooms. According to knowledgeable professionals, elderly residents need physical activities incorporated into their daily lives to maintain their health. At some developments we visited, residents said they had events such as bingo or pancake breakfasts, but lacked activities involving physical exercise. According to our survey responses, we also found that 25 housing developments offered job training or placement services for their residents. Public housing agency directors reported that in 55 of the developments some kind of scheduled or on-demand door-to-door transportation service was available. Door-to-door transportation includes vans or buses that pick up residents at the housing development and take them to destinations such as grocery stores, banks, or to medical appointments. However, survey responses from eight developments indicated that accessing any form of transportation was “not very easy,” nor were grocery stores or other services located near these developments, which increased the isolation of residents. Several of the residents at the housing developments that we visited said a lack of accessible transportation affected their quality of life because they could not easily get to a grocery store or doctors’ appointments. According to our survey results, 26 housing developments provided access to a service coordinator at least 20 days per month, while 19 had no service coordinator, and 11 had one available less than 5 days per month (see fig. 9). 
According to HUD, a service coordinator assists elderly residents and non-elderly residents with disabilities of federally assisted housing to obtain needed supportive services from community agencies, thereby preventing premature and inappropriate institutionalization. For example, a service coordinator might find a public housing resident with a disability someone to help with housekeeping, enabling the resident to remain independent. Service coordinators also help elderly residents and non-elderly residents with disabilities determine if they qualify for government services. According to the 2002 Housing Research Foundation Report cited above, 83 percent of elderly residents in public housing live alone, and therefore may not have a support network to help them access services or fill out paperwork. While service coordinators are an important aspect to improving the quality of life for the elderly and non-elderly persons with disabilities who reside in public housing, some developments provided access to service coordinators on a less frequent basis. For example, one housing agency we visited had one service coordinator for 2,500 units occupied by elderly persons and non-elderly persons with disabilities. According to the housing director, this staffing level was not sufficient to meet resident needs. In another case, two service coordinators were responsible for all of the housing agency’s 20,000 residents. Survey data indicated that 38 developments had at least some problems with crime in surrounding neighborhoods, while 24 developments had at least some problems with crime inside the development (see fig. 10). A few developments that we visited were adjacent to family public housing developments, which in general—according to our analysis of HUD data and interviews with housing agency directors—tend to be in worse condition than public housing occupied by the elderly and non-elderly persons with disabilities. Housing directors stated that, as a result, crime was more of a problem at those family-adjacent developments. Some elderly residents and non-elderly residents with disabilities told us that they did not feel safe in their neighborhoods or, sometimes, in their developments. At one housing development, one resident told us that young people from the neighborhood loitered in and around their development, which made the elderly residents feel uncomfortable. At two other housing developments we visited, public housing agency officials and residents identified tenants who sold drugs from their apartments, which attracted unwanted outsiders into the development. Residents at one development said they stopped participating in recreational activities because they feared someone would break into their apartments if they left. When problems with crime and vandalism peaked at another housing development, residents told us that they formed their own security group to monitor the activity at the building. According to officials whom we surveyed and interviewed, various strategies have been used to improve both physical and social conditions to better address the special needs of the elderly and non-elderly persons with disabilities. Methods to deal with physical distress included capital improvements such as renovating or modernizing buildings, systems, and units or, in extreme cases, demolishing or selling a development. 
Methods to reduce the level of social distress include a range of actions to address the needs of the elderly and non-elderly persons with disabilities, such as designating developments as “elderly only” for reasons of safety, converting developments into assisted living facilities, and working with other agencies, such as nonprofit and religious organizations, to provide in-home supportive services to residents. To improve physical conditions at public housing developments, 18 of the 43 responding public housing agency directors said they had ongoing or planned actions, such as modernizing building structures, upgrading accessibility features, and installing new building systems such as air conditioning and electrical systems. During our site visits, public housing agency officials whom we interviewed also described current or planned renovations to improve the physical conditions of their developments. For example, at one development the housing agency had recently improved its lobby and exterior with new paint, tiles, and landscaping. Building managers at this location told us that these renovations improved living conditions for residents and made the development more marketable. The housing agency also converted some of the first-floor units to be accessible to persons with disabilities and installed new appliances in the units. Other actions taken by housing agencies to improve physical conditions include planned or implemented elevator upgrades, which in some cases have made elevators more accessible to elderly residents or residents with disabilities. In addition, at one development we visited that had exterior walkways, the housing agency was undertaking large-scale renovations, which included enclosing the exposed areas to protect residents from inclement weather. At five developments we visited, public housing agencies had recently added central air-conditioning. Lastly, at three locations we visited, public housing agencies had previously converted, or planned to convert, studio apartments into one-bedroom units to better meet the needs of residents. Housing agency directors we interviewed during our site visits said that their housing agencies use public funding from federal, state, and local sources, and funding from private sources to address physical conditions. Public housing agency directors whom we surveyed made similar comments, with 17 citing HUD’s Capital Fund as a funding source to implement building modernizations or to renovate building components, including actions to accommodate the needs of persons with disabilities. The Capital Fund provides housing agencies with funds based on a formula that takes into account the size, location, and age of developments, along with the need for modernization, among several other characteristics. Public housing agency directors also reported using Low-Income Housing Tax Credits to make large-scale improvements or for new construction. Public housing agencies have also entered into partnerships with private-sector firms to implement a variety of improvements, such as building upgrades and comprehensive renovations. According to a housing agency official responsible for three large housing developments we visited, public housing agencies often lack development experience; thus, a partnership with private developers can bring valuable resources to improve public housing developments. 
Public housing agencies also undertook more comprehensive improvement programs to address difficulties at developments that are associated not only with physical deterioration, but also with the overall deterioration of the surrounding neighborhood. For example, in St. Petersburg, Florida, the housing agency received a $27 million HOPE VI grant in 1998, which it used to tear down and rebuild all housing at the Historic Village development and the accompanying family housing development, Jordan Park. The housing agency made physical improvements to the development and individual apartments, such as improving accessibility for persons with disabilities and adding air-conditioning. Before the redevelopment, Jordan Park had a high concentration of poverty and a reputation as being a haven for criminal activity. Building managers told us that the incidence of crime in the area has since gone down. The HOPE VI grant made up about 40 percent of the funding necessary for the $70 million improvements at Historic Village and Jordan Park. Low-Income Housing Tax Credits and a combination of state and local sources made up the rest of the funding. According to St. Petersburg housing agency officials, the large-scale improvements at Historic Village reduced vacancy rates and lowered the crime rate in the surrounding area, which is one of the goals of the HOPE VI program. However, at the Graham Park development, the housing agency in St. Petersburg determined that modifications necessary to improve accessibility were not feasible or cost effective because widening the narrow hallways would affect the structural integrity of the building. As a result, the housing agency submitted an application to sell Graham Park and use the proceeds to acquire or develop alternative affordable housing. Furthermore, the housing agency will offer current residents Section 8 housing vouchers so they can rent housing elsewhere. Some survey respondents also reported that they were planning to or were in the process of replacing some of their developments. For example, eight housing agency directors reported that they were considering or were implementing actions to demolish or dispose of existing developments in order to acquire or build new housing for the elderly and non-elderly persons with disabilities. Public housing agency officials we contacted mentioned a variety of strategies to improve social conditions at housing developments for the elderly and non-elderly persons with disabilities. For example, 28 housing agency directors who responded to our survey mentioned actions they have taken or plan to take to address social conditions for elderly persons and persons with disabilities who reside in public housing. For instance, 12 housing directors reported that they have taken actions to resolve problems associated with having elderly and non-elderly residents in the same development, such as designating their developments as “elderly only.” In particular, a number of housing directors cited safety concerns caused by young persons with mental health disabilities. Housing agency directors also reported that they have added security features and established programs to reduce crime and increase security. At one development for example, the housing agency partnered with the local police department to establish a community watch program. 
Thirteen survey respondents also reported taking other actions to address the needs of the elderly and persons with disabilities, including in-home health and nutrition assistance and other supportive services. In particular, one public housing director reported that the housing agency created its own senior resident advisor, who provides an array of supportive services to address the needs of its elderly residents. To improve social conditions on a larger scale, the housing agency in Allegheny County completely revitalized the Homestead Apartments outside of Pittsburgh, Pennsylvania. The housing agency built space on-site for two nonprofit elder care service providers in addition to remodeling the buildings. One provider meets the needs of the frailest residents with complete nursing services, meals, and adult day care. The other provider operates a walk-in wellness center that provides Homestead's more independent residents with blood pressure checks, assistance with medication, and service coordination and referrals. Housing officials whom we interviewed at Homestead estimated that the services provided at the adult day care center prevented nursing home-eligible residents from prematurely entering nursing homes. This resulted in monetary savings for the state because, according to a Pennsylvania Department of Public Welfare director, the cost of care for those enrolled in the adult day center was only 85 percent of the cost of caring for them at a nursing home. Much of the new development at Homestead was financed with Low-Income Housing Tax Credits. In another large-scale effort, the Miami-Dade Housing Agency converted Helen Sawyer Plaza into an assisted living facility. Twenty-four-hour nursing care, meals, and recreational activities are now provided on-site. According to the building manager, the conversion eliminated high vacancy rates at the development, created a sense of community among the residents, and prevented residents from prematurely entering nursing homes. The housing agency uses Medicaid Home and Community-Based Services waivers to obtain federal funding for the assisted-living care of residents at Helen Sawyer. Such Medicaid waivers offer states the flexibility to pay for nursing services delivered outside of institutional settings. In addition, officials we interviewed at Helen Sawyer asserted that conversions to assisted living facilities are cost-effective options, in part, because public housing agencies own the property on which the public housing is built. As a result, housing agencies do not have to assume the mortgage or lease payments that comparable private assisted living facilities often have. Based on our survey results and information from housing officials whom we interviewed, housing agencies partnered with outside agencies, such as community-based nonprofits or churches, to provide supportive services for the elderly and non-elderly persons with disabilities. In some cases, the agencies paid for the services; in other cases, housing agencies also used federal grants. A building manager for one development that we visited said that the development had partnered with a nearby church to provide a van to take residents shopping once a week. Local churches also provided food assistance to elderly residents and residents with disabilities at this development who were not able to leave their apartments. 
At another housing development we visited in Miami, Florida, Catholic Charities, a community-based organization, provided residents with lunches on a daily basis and assorted grocery items, such as bread, fruit, and cereal, on a weekly basis. We also observed a partnership in Seattle, Washington, where the housing agency partnered with a community-based organization to provide an on-site elderly community center where residents had access to meals, social activities, and assistance with filling prescriptions. Residents at this development also had access to an on-site health clinic. In addition, based on responses to our survey, five housing agency directors cited HUD's Resident Opportunities and Self Sufficiency (ROSS) grant program as a means to provide supportive services such as assistance with health, activities of daily living, and transportation. Finally, public housing officials at two locations we visited also reported that ROSS grants funded door-to-door transportation for residents, assistance with housekeeping, and service coordinators, among other services. The extent to which public housing developments for the elderly and non-elderly persons with disabilities are severely distressed cannot be determined definitively with existing data, which are insufficient regarding factors that contribute to distress. Moreover, much of the data that are available are at the development level, rather than the individual building or unit level. These limited data, along with information from housing agency directors, suggest that severe distress in public housing developments primarily occupied by elderly residents and residents with disabilities was less prevalent than in developments occupied primarily by other types of residents. However, our work indicates that a number of developments primarily occupied by the elderly and non-elderly persons with disabilities are physically and/or socially distressed. Further, our site visits and survey of selected public housing directors indicate that, even in developments that may not be considered distressed, a number of physical and social factors can negatively affect the quality of life for public housing residents who are elderly or have disabilities. The directors' agencies have implemented several strategies to address a variety of factors that contribute to problematic conditions for both elderly residents and non-elderly residents with disabilities, such as improving accessibility for persons with disabilities, addressing problems associated with mixing elderly and non-elderly disabled persons, and undertaking larger-scale efforts to provide supportive services. Nevertheless, our work indicated that a significant number of the 66 developments covered by our survey will need replacement, renovation, or rehabilitation in the future and that the array of supportive services has often not met the needs of residents. These findings suggest that continued efforts will be needed to improve the quality of life for residents who are elderly, increasingly frail, or have disabilities. We provided a draft of this report to HUD for its review and comment. We received oral comments from officials in HUD's Office of Public and Indian Housing indicating general agreement with the report. As a general comment, one official stated that the draft report underrates the adverse impact of the lack of accessibility of units for persons with disabilities. The official also noted that as elderly residents continue to age in place, their accessibility needs will increase. 
We did not attempt to determine a correlation between the extent of accessibility in public housing units and the percent of residents with disabilities. However, our report notes that public housing residents who are elderly or have disabilities may have more special needs, compared with other residents, due to their age and type of disability and that elderly public housing residents are more likely to be “frail” or to have disabilities, compared with other elderly persons. HUD also suggested that the report should contain additional discussion on how public housing agencies use HOPE VI funds to provide supportive services to the elderly. We did not insert additional information because in this report, as well as previous reports cited herein, we have provided information on the use of HOPE VI as a funding source for community and supportive services. Finally, one official expressed agreement with the public housing directors who, in responding to our survey, indicated that one method of reducing social distress is working with governmental and nonprofit organizations to provide supportive services. HUD also provided technical clarifications, which we incorporated as appropriate. We are sending copies of this report to the HUD Secretary and other interested congressional members and committees. We will make copies available to others upon request. In addition, this report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-8678 or Woodd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The objectives of this report were to examine (1) the extent to which public housing developments occupied primarily by the elderly and non-elderly persons with disabilities were severely distressed and (2) the ways in which the stock of severely distressed public housing for the elderly and non-elderly persons with disabilities could be improved. We analyzed tenant and development characteristic data from the Department of Housing and Urban Development’s (HUD) Public and Indian Housing Information Center (PIC) database and physical inspection data from the Real Estate Assessment Center (REAC) database. We obtained data from HUD in January 2005 for both databases. For purposes of this report, we sought to use PIC data to describe the number of households headed by elderly persons or persons with disabilities and to identify developments occupied primarily by elderly persons or persons with disabilities that were potentially severely distressed. To assess the reliability of data from the PIC database, we reviewed relevant documentation, interviewed agency officials, including contractors who worked with these databases, and conducted electronic testing of the data, including frequency and distribution analyses. Our assessment showed that some tenant and development characteristic data for the 28 housing agencies that are Moving to Work (MTW) sites were outdated by as many as 6 years because, at the time of our data collection, HUD had not yet implemented a system that allowed PIC to accept MTW data. For the purposes of this report, we sought to identify developments that were potentially distressed; therefore, we determined these data to be sufficiently reliable for use in our first index. 
However, for the developments that we surveyed, we asked housing agencies to verify data for the six fields we used from PIC to identify developments that were potentially distressed. When we compared the updated data that were received through our survey to the data contained in PIC, we found that 39 of 62 developments had decreased vacancy rates, compared with the PIC data, while 8 had increased vacancy rates. In a few cases, we found that developments that had been demolished were reported in the PIC system as existing developments. Similarly, we found a few instances where developments had been approved for sale but remained in the PIC system as part of a public housing agency's current housing portfolio. To assess the reliability of the data from the REAC database and the adjusted REAC data from the Urban Institute, we reviewed relevant documentation, interviewed knowledgeable officials, including contractors who worked with the database, and conducted electronic testing of the data, including frequency and distribution analyses. We determined the data to be sufficiently reliable to identify developments that were potentially distressed. However, we also asked housing agency directors to verify the physical inspection scores that we obtained from REAC. We compared the updated data received through our survey with the data contained in REAC and found that in 6 of 62 cases, the two data points differed by more than 15 percent. A possible reason for these discrepancies is that REAC scores can be volatile, depending on the nature of the problems identified in the rating. For example, an updated REAC score that was markedly better than the previous one could have resulted from the remedying of easily fixable items. Had HUD possessed current PIC and REAC data on all developments, our first index might have identified some developments that were different from those identified in this report; this is why we sought corroboration of these data through survey questions. We have noted these limitations in our report when appropriate. We focused our analysis on housing "developments" because much of the available data were at the development rather than the individual building or unit level. (A development can be a collection of buildings, located near each other or scattered geographically, or an individual building.) As a result, our analysis does not necessarily include all public housing units that are occupied by elderly persons or non-elderly persons with disabilities, because such units may be located in developments that are occupied primarily by other types of residents. To determine criteria for defining public housing as primarily occupied by elderly persons and non-elderly persons with disabilities, we consulted with officials from HUD and reviewed relevant studies. We decided to identify a public housing development as primarily occupied by elderly persons or non-elderly persons with disabilities if it had at least 10 occupied units and (1) 50 percent of heads of household were elderly persons (aged 62 or older), (2) 50 percent of heads of household were non-elderly persons with disabilities, or (3) 80 percent of heads of household were either elderly persons or non-elderly persons with disabilities. 
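To make this classification rule concrete, the sketch below shows one way the criteria could be applied to development-level data. The function and field names are hypothetical, and the treatment of the percentage criteria as minimum thresholds is our assumption for illustration.

```python
def is_elderly_or_disability_development(occupied_units, pct_elderly_heads, pct_disabled_heads):
    """Screen one development against the occupancy criteria described above.

    occupied_units     -- number of occupied units in the development
    pct_elderly_heads  -- percent of heads of household aged 62 or older
    pct_disabled_heads -- percent of heads of household who are non-elderly
                          persons with disabilities
    Treating the 50 and 80 percent figures as minimum (>=) thresholds is an
    assumption made for illustration.
    """
    if occupied_units < 10:  # criterion: at least 10 occupied units
        return False
    return (pct_elderly_heads >= 50.0
            or pct_disabled_heads >= 50.0
            or (pct_elderly_heads + pct_disabled_heads) >= 80.0)


# Hypothetical example: 120 occupied units, 35 percent elderly heads of
# household, 48 percent non-elderly heads of household with disabilities.
print(is_elderly_or_disability_development(120, 35.0, 48.0))  # True, since 35 + 48 >= 80
```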
Based on our analysis of PIC data, we categorized public housing developments into one of three groups: (1) developments occupied primarily by elderly persons or non-elderly persons with disabilities, if they met the above criteria; (2) family developments, if they did not meet the above criteria; and (3) developments that were mostly family housing but contained buildings with a concentration of elderly persons or persons with disabilities. To determine the criteria for a severely distressed development occupied primarily by elderly persons and non-elderly persons with disabilities, we interviewed HUD officials and knowledgeable individuals from social research organizations and reviewed relevant laws and literature. To determine whether HUD's developments occupied by elderly persons or non-elderly persons with disabilities were severely distressed, we identified eight indicators of severe distress from the PIC and REAC systems and data from other sources. For each development we used (1) physical inspection score; (2) adjusted physical inspection score provided by the Urban Institute; (3) building age; (4) percent of units deemed accessible to persons with disabilities; (5) vacancy rate; (6) household income; (7) percent of population in census tract below the poverty line; and (8) status of the development regarding application for HOPE VI funding or approval for demolition, disposition, or revitalization. For the "adjusted physical inspection score," the Urban Institute edited HUD's REAC physical inspection scores to avoid heavily penalizing developments for deficiencies that were easily correctable. For example, HUD deducts many points for inoperable smoke detectors, a serious but easily fixable problem. The Urban Institute deducted fewer points for these defects, so the "adjusted score" puts more weight on the soundness of the physical structures. Although we used the eight indicators to identify potentially severely distressed developments, these indicators had some limitations. For example, we used a high vacancy rate as one indicator of severe distress. However, in some instances, a development had a high vacancy rate because some of the units had been taken out of the available housing stock for purposes such as redesign but were still categorized in HUD's database as available. Moreover, we used the age of the building as an indicator of physical distress. However, in some cases, we found that housing developments had recently undergone renovation. In these cases, building age was not a good indicator of physical distress. For each development, we obtained data for each of the eight indicators of severe distress. We then examined the distributions of the data for each of the eight indicators and scaled each indicator from 0 to 10. We then calculated a composite score for each development by averaging its scores on the eight indicators. Based on the distribution of the composite scores and judgment as to what constituted distress, we established threshold scores to indicate potential severe distress and potential moderate distress. We eliminated from the scoring developments that were missing data for three or more of the indicators. From our analysis, we found a total of 11,935 developments in the 50 states and the District of Columbia that had at least 10 occupied units and data available on at least six of the eight indicators of distress. 
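The composite scoring described above can be sketched in code. The report does not spell out the exact scaling function, the handling of partially missing data beyond the three-indicator rule, or the numeric threshold values, so the min-max scaling, the averaging over available indicators, and the example threshold below are assumptions offered only for illustration.

```python
from typing import Dict, Optional

# The eight indicators of severe distress listed above (names are shorthand).
INDICATORS = [
    "physical_inspection_score", "adjusted_inspection_score", "building_age",
    "pct_accessible_units", "vacancy_rate", "household_income",
    "pct_tract_below_poverty", "hope_vi_demolition_status",
]

def scale_0_to_10(value, low, high, higher_means_distress=True):
    """Rescale a raw indicator value to a 0-10 distress score (min-max scaling assumed)."""
    if high == low:
        return 0.0
    fraction = (value - low) / (high - low)
    if not higher_means_distress:   # e.g., a high inspection score indicates LESS distress
        fraction = 1.0 - fraction
    return 10.0 * min(max(fraction, 0.0), 1.0)

def composite_score(scaled: Dict[str, Optional[float]]) -> Optional[float]:
    """Average the available scaled indicators; drop developments missing three or more."""
    available = [v for v in scaled.values() if v is not None]
    if len(INDICATORS) - len(available) >= 3:
        return None                 # too much missing data to score
    return sum(available) / len(available)

SEVERE_THRESHOLD = 7.0              # illustrative value only; the report set thresholds judgmentally

# Hypothetical development with one missing indicator.
development = {name: 8.0 for name in INDICATORS}
development["household_income"] = None
score = composite_score(development)
print(score is not None and score >= SEVERE_THRESHOLD)   # True for this hypothetical development
```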
We determined that 3,537 of these developments met our criteria as "primarily occupied by elderly persons or non-elderly persons with disabilities." Of these 3,537 developments, we identified 76 developments (administered by 46 public housing agencies) that were potentially severely distressed. We conducted site visits to 25 of these developments, interviewed building managers, resident leaders, and local public housing agency officials, and observed the physical and social conditions at the sites. We selected housing agencies to visit based on factors such as diversity of size, geographic location, and number of potentially distressed developments. We then surveyed the 46 public housing agencies that manage the 76 potentially severely distressed developments to collect data describing their physical and social conditions. In developing the survey questions, we drew on our literature review on distressed public housing and the special needs of the elderly and non-elderly persons with disabilities, conducted interviews with representatives of advocacy organizations and professional associations interested in issues affecting the elderly and non-elderly persons with disabilities, and reviewed our field work conducted at several public housing developments. Through this research, we identified supportive services and housing features needed by the elderly and non-elderly persons with disabilities who reside in public housing and structured the survey questions accordingly. HUD staff located in the Office of Public and Indian Housing and the Office of Policy Programs and Legislative Initiatives reviewed the survey questionnaire and provided comments. Knowledgeable individuals from the National Association of Housing and Redevelopment Officials and the American Association of Service Coordinators also provided feedback on the survey. We pretested the survey with the directors of six housing agencies located in California, Connecticut, Hawaii, New Jersey, and Indiana. Lastly, four independent social scientists reviewed the survey for soundness. We mailed the survey (questionnaire) to each public housing agency on June 10, 2005. In the survey, we asked the local housing agency to verify, update, or correct the data we obtained from HUD on the percent of units that were occupied by elderly persons or non-elderly persons with disabilities and data on five of our eight indicators of distress. Questions covered the following topics: physical deterioration, systems requiring renovation or modernization, the neighborhood environment in which the development was located, accessibility features, access to social and public services, and actions to remedy housing challenges (see www.gao.gov/cgi-bin/getrpt?GAO-06-205SP for a copy of the survey). Each questionnaire contained a set of specific questions about the identified development and a set of general questions about public housing for the elderly and non-elderly persons with disabilities. In the 11 cases where the housing agency managed more than one of the 76 identified developments, respondents were asked to provide separate answers—in response to the specific questions—for each of the identified developments. 
For the 35 public housing agencies with one development, we also asked the local housing agencies whether they had other developments or buildings occupied primarily by elderly persons or non-elderly persons with disabilities that did not score above our distress threshold but had conditions comparable to or worse than the developments we identified. In a few cases, public housing agencies indicated that they did have other developments comparable to or worse than the ones we identified. This indicates that the eight indicators we used to identify potentially distressed developments did not always capture cases of potential distress in developments occupied primarily by elderly persons or non-elderly persons with disabilities. Participants could return the questionnaire by mail or fax. To increase the response rate, we conducted three sets of follow-up telephone calls to offices that had not responded to our survey by the initial deadline. Collection of survey data ended on August 30, 2005. Forty-three housing agencies returned the survey, a response rate of 93 percent, representing 66 of the 76 developments. We did not attempt to verify the respondents' answers against an independent source of information; however, we used two techniques to verify the reliability of questionnaire items. First, we used in-depth cognitive interviewing techniques to evaluate the answers of pretest participants. Interviewers judged that all the respondents' answers to the questions were correct. Second, we compared some responses with observations made during site visits; again, observers concluded that responses to these items were correct. The practical difficulties of conducting any survey may introduce certain types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. Steps such as pretesting and follow-up contacts to increase response rates serve to minimize nonsampling errors. In addition, to further reduce errors, we performed statistical analyses to identify inconsistencies and used a second independent reviewer for the data analysis. We edited the data for consistency before they were entered into an electronic database. All survey data were 100-percent verified, and a random sample of the surveys was further verified for completeness and accuracy. We analyzed responses to closed-ended questions using statistical software. One analyst reviewed and categorized responses to open-ended questions, and a second trained analyst independently verified the categorization. Because the developments selected for our survey were not based on a random sample, the results are not generalizable to all public housing for the elderly and non-elderly persons with disabilities. To identify the developments with the greatest indications of severe social or physical distress based on survey responses, we developed "distress indexes." See appendix II for more detail. To examine the ways in which the stock of severely distressed public housing for the elderly and non-elderly persons with disabilities could be improved, we reviewed relevant laws and regulations, and reports by federal agencies and research organizations. We also interviewed residents of public housing and public housing agency directors. 
We analyzed the interview responses and developed a summary of the most frequently reported strategies. Finally, we included questions in our survey to the public housing agency directors who operate the 76 developments that we identified as potentially severely distressed. We analyzed the responses from the survey and developed a summary of the most frequently reported strategies (see www.gao.gov/cgi-bin/getrpt?GAO-06-205SP for a copy of the survey and aggregated results). We conducted our work in Washington, D.C.; Miami and St. Petersburg, Florida; Homestead, New Castle, and Pittsburgh, Pennsylvania; Evansville, Indiana; St. Louis, Missouri; Seattle, Washington; and Oakland and San Francisco, California, between November 2004 and October 2005 in accordance with generally accepted government auditing standards. To identify the developments with the greatest indications of severe social or physical distress based on survey responses, we developed "distress indexes." To create the indexes, we assigned points to individual survey questions based on their level of importance and impact on the quality of life for the elderly and non-elderly persons with disabilities. We used evidence from interviews with individuals knowledgeable about the housing needs of the elderly and non-elderly persons with disabilities to determine how to weight the questions. The number of items devoted to a given topic also reflects the relative importance of that topic in determining distress. For example, we asked nine questions about which supportive services are available to residents, reflecting how significantly supportive services can affect conditions for residents of public housing. We assigned points to survey response items that indicated conditions of physical or social distress, giving higher points to responses that indicated more distress and no points to responses that indicated little distress. For example, one of the survey questions asked about the extent to which the physical structures at the development were deteriorated. We assigned 20 points to the physical distress index score if the respondent answered "extremely deteriorated," 15 points if the answer was "very deteriorated," 10 points if "somewhat deteriorated," 5 points if "a little deteriorated," and no points if the answer was "not at all deteriorated." We then summed the points for all questions for each development, which resulted in overall physical and social distress index scores. Each development could score up to 139 points on the physical distress index and up to 205 points on the social distress index. We analyzed the results for each of the 66 developments for which we had survey responses to determine the total scores for both physical and social distress. We determined that developments with a score of 50 percent or more of the total points for either index had signs of severe physical or severe social distress. We were able to verify that a score of 50 percent or more indicated severe distress because we visited some of these developments and made detailed observations of their condition. See table 1 for the specific points assigned to each indicator of physical and social distress. We visited both the Homestead Apartments and Helen Sawyer Plaza developments and interviewed public housing agency officials and building managers. We also interviewed residents at the Homestead Apartments. 
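As a concrete reading of the index construction, the short sketch below scores the deterioration item using the point values given in the text, sums item points, and applies the 50 percent threshold. The data structures and the stand-in value for the remaining items are hypothetical.

```python
# Point values for the deterioration question, taken from the example in the text.
DETERIORATION_POINTS = {
    "extremely deteriorated": 20,
    "very deteriorated": 15,
    "somewhat deteriorated": 10,
    "a little deteriorated": 5,
    "not at all deteriorated": 0,
}

MAX_PHYSICAL_POINTS = 139   # maximum possible score on the physical distress index
MAX_SOCIAL_POINTS = 205     # maximum possible score on the social distress index

def index_score(item_points):
    """Sum the points awarded across all survey items for one development."""
    return sum(item_points)

def shows_severe_distress(score, max_points):
    """A development shows signs of severe distress at 50 percent or more of the maximum."""
    return score >= 0.5 * max_points

# Hypothetical development: "very deteriorated" structures plus 59 points from other items.
physical_items = [DETERIORATION_POINTS["very deteriorated"], 59]
total = index_score(physical_items)
print(total, shows_severe_distress(total, MAX_PHYSICAL_POINTS))  # 74 True (74 >= 69.5)
```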
The following describes in more detail the approaches used by the housing agencies to provide housing and services to their elderly residents at these developments. Allegheny County housing agency officials successfully renovated the Homestead Apartments near Pittsburgh, Pennsylvania, and made improvements to provide supportive services. The housing agency chose to renovate the Homestead Apartments because of the high concentration of elderly residents and because two of Homestead's high-rise buildings were the oldest buildings in the housing agency's portfolio. To renovate the apartments at Homestead, the housing agency partnered with a private real estate development firm that specialized in residential housing and community development. The interior portions of each of the development's four high-rise buildings were replaced, and the housing agency added updated features. As part of the renovation, the housing agency converted 350 units into 240 apartments, with two-bedroom apartments and lounges added to every floor. Previously, the apartments were exceptionally small and had kitchen and bath configurations that would not accommodate persons with disabilities. Further, the housing agency reconfigured 5 percent of the units, and all of the laundry areas and lounges, to be accessible to persons with disabilities. To improve common areas of the development, the housing agency also installed large windows in the hallways to increase levels of natural light. A primary goal of the Homestead revitalization was to provide enhanced supportive services to elderly residents, in particular frail elderly residents. The housing agency in Allegheny County surveyed Homestead residents to determine how best to provide services and, based on their responses, grouped residents into three categories according to the level of care they needed. The first group included the "most frail" residents, who had medical or functional frailties. The second group consisted of "at-risk" residents, who may have needed occasional services. The third group was made up of residents who were healthy and rarely used any of the available services. According to the health care staff at these facilities, residents typically progress through these three stages as they age. The housing agency then partnered with several non-HUD entities to improve services for the elderly and colocate an assisted living-type facility at the development. To help the most frail elderly residents, the housing agency partnered with a nonprofit organization—Community LIFE (Living Independently for Elders)—which offers complete nursing services, meals, and physical therapy (see fig. 13) to Homestead residents who are enrolled in the program. The LIFE Center also has a beauty salon that enrollees can use once a month for free. These services are modeled after the Program of All-Inclusive Care for the Elderly (PACE). These comprehensive services permit most participants to continue living at home. Homestead residents represent about 40 percent of the LIFE Center's enrollees. For at-risk residents, who do not require the level of care provided at the LIFE Center, the housing agency partnered with the University of Pittsburgh Medical Center (UPMC) to provide on-site services in the form of a walk-in wellness center. The services include blood pressure checks, glucose tests, assistance with medication, social service coordination, and space for visiting physicians. The UPMC facility also has two registered nurses on staff. 
All Homestead residents are free to take advantage of the services offered at the UPMC facility, which is colocated at the development and easily accessible to residents. UPMC also operates an exercise room in the Homestead Apartments, which has become popular among residents. According to the housing agency officials at Allegheny County, the renovation and colocation of supportive services were made possible by an innovative coordination of efforts and use of mixed financing. Approximately 67 percent of the funding for the Homestead renovation was based on Low-Income Housing Tax Credits. Under this program, states are authorized to allocate federal tax credits as an incentive to the private sector to develop rental housing for low-income households. After the state allocates tax credits to developers, the developers typically offer the credits to private investors, who use the tax credits to offset taxes otherwise owed on their tax returns. Other funding sources included state and local grants, a federal loan, and a $2.5 million HOPE VI grant. According to the Allegheny County officials, the award of the HOPE VI grant helped to assure potential investors that the project was viable. In addition, the LIFE Center was developed during the renovation, thereby facilitating the colocation of this supportive service. To maintain the LIFE Center over the long term, the housing agency was able to offer an attractive low-cost lease to Community LIFE because the agency already owned the land on which the facility was built. In addition, residents who are enrolled in the LIFE Center are eligible for Medicare or Medicaid, so Community LIFE receives payment through those programs. Proceeds from the LIFE Center's lease with the housing agency are used to fund UPMC services. The Miami-Dade Housing Agency converted Helen Sawyer Plaza into an assisted living facility to enable elderly residents to "age in place" and avoid often costly institutional alternatives such as nursing homes. According to officials at Helen Sawyer, prior to the conversion, the facility suffered from a high vacancy rate, and some of the building systems were outdated. Helen Sawyer residents now receive a variety of supportive services, which were made available as part of the assisted living conversion. For example, residents receive 24-hour nursing care and three hot meals per day in the dining room. A hairdresser and manicurist visit the development twice weekly. The development offers 30 hours of activities weekly, including aerobics, dancing, cultural events, and arts and crafts. Residents also have access to door-to-door transportation and a weekly shuttle for grocery shopping. Staff on-site offer or coordinate other supportive services such as podiatry, assistance with taking prescribed medications, and adult day care. An additional benefit of the conversion is that married residents can continue to live together in their apartments, even when one spouse requires assisted living care. To improve physical conditions at the eight-story Helen Sawyer Plaza, the building was modernized and apartments were made more spacious, which made the development more attractive to elderly residents. The development now has 104 apartments, including 83 studio apartments and 21 one-bedroom apartments. The studio units are 450 square feet, while the one-bedroom units are 600 square feet. Security features at the development include perimeter iron fencing with card-access entry and individual emergency alarm systems for each apartment. 
Amenities now include a lobby, public restrooms, a commercial kitchen, a resident dining room, and a community room. The housing agency also added grab bars throughout common areas and made improvements to more easily accommodate wheelchairs or motorized scooters. Helen Sawyer Plaza's conversion into an assisted living facility was a multiphase process that required coordination among several organizations. For example, the housing agency contracted with a consultant who had expertise in assisted living facilities; obtained HUD modernization funding and borrowed money to rehabilitate the building; obtained a license from the State of Florida to operate as an assisted living facility; and petitioned the Florida Department of Elderly Affairs for a Medicaid Home and Community-Based Services waiver. The waiver essentially allows the housing agency to receive money from the state to cover the cost of caring for residents at Helen Sawyer. The Miami-Dade officials also pointed out that funding from Medicaid waivers can be an incentive to convert a public housing development to an assisted living facility. For example, 65 Helen Sawyer Plaza residents receive Medicaid waivers that reimburse up to $28 per day for services. The Miami-Dade Housing Agency also coordinated with the city of Miami and Dade County to revitalize abandoned buildings in the neighborhood and offer transportation service at Helen Sawyer Plaza. In addition to the contact named above, Paul Schmidt, Assistant Director; Isidro Gomez; Robert Marek; Alison Martin; Marc Molino; Don Porteous; Linda Rego; Barbara Roesmann; and Michelle Zapata made key contributions to this report.
In 2003, Congress reauthorized HOPE VI, a program administered by the Department of Housing and Urban Development (HUD) and designed to improve the nation's worst public housing. In doing so, Congress required GAO to report on the extent of severely distressed public housing for the elderly and non-elderly persons with disabilities. "Severely distressed" is described in the statute as developments that, among other things, are a significant contributing factor to the physical decline of, and disinvestment in, the surrounding neighborhood; occupied predominantly by very low-income families, the unemployed, and those dependent on public assistance; have high rates of vandalism and criminal activity; and/or lack critical services, resulting in severe social distress. In response to this mandate, GAO examined (1) the extent to which public housing developments occupied primarily by elderly persons and non-elderly persons with disabilities are severely distressed and (2) the ways in which such housing can be improved. HUD officials provided oral comments indicating general agreement with the report. Available data on the physical and social conditions of public housing are insufficient to determine the extent to which developments occupied primarily by elderly persons and non-elderly persons with disabilities are severely distressed. Using HUD's data on public housing developments--buildings or groups of buildings--and their tenants, GAO identified 3,537 developments primarily occupied by elderly residents and persons with disabilities. Data from HUD and other sources indicated that 76 (2 percent) of these 3,537 developments were potentially severely distressed. To gather more information on the 76 developments that were potentially distressed, GAO surveyed public housing agency directors responsible for these developments. GAO received responses covering 66 of the 76 developments (the survey and aggregated results are available in GAO-06-205SP). These responses indicated the following: (1) eleven developments had signs of severe physical distress, such as deterioration of aging buildings and a lack of accessible features for persons with disabilities; (2) another twelve developments had signs of severe social distress, which included a lack of appropriate supportive services such as transportation or assistance with meals; and (3) an additional five developments had characteristics of both severe physical and social distress. Nevertheless, many of the directors GAO surveyed reported that numerous factors adversely affected the quality of life of elderly persons and non-elderly persons with disabilities residing in their developments. The factors cited most frequently were (1) aging buildings and systems, including inadequate air conditioning; (2) lack of accessibility for persons with disabilities; (3) small size of apartments; (4) the mixing of elderly and non-elderly residents; (5) inadequate supportive services; and (6) crime. To better address the special needs of the elderly and non-elderly persons with disabilities, public housing agency officials GAO surveyed or contacted have used various strategies to improve both physical and social conditions at their developments. Strategies to reduce physical distress include capital improvements such as renovating buildings, systems, and units or, in extreme cases, relocating residents and demolishing or selling a development. 
Methods to reduce the level of social distress include a range of actions, such as designating developments as "elderly only," converting developments into assisted living facilities, and working with other governmental agencies and nonprofit organizations to provide supportive services to residents.
In the mid-1990s, DOD became concerned that inadequate housing allowances and poor-quality military housing were negatively affecting quality of life and readiness by contributing to servicemember decisions to leave military service. DOD noted that when living in private-sector housing in the local communities, servicemembers were paying about 19 percent of housing costs out of pocket, because housing allowances were inadequate. DOD also noted that the quality of military-owned housing had been in decline for more than 30 years because military-owned housing was not considered a priority and because earlier attempts at solutions ran into regulatory or legislative roadblocks. DOD officials stated that much of the military-owned family housing in the United States was old, lacked modern amenities, and required renovation or replacement. DOD estimated that completing this work with historical funding levels and traditional military construction methods would take more than 20 years and cost about $16 billion. In response, and with the approval of Congress, DOD began two major initiatives. First, DOD began an initiative to increase housing allowances to cover the average cost of housing and utilities in each of the nation's various geographic areas, thus eliminating the average out-of-pocket housing costs paid by servicemembers. This initiative was completed at the beginning of calendar year 2005. Second, DOD began an initiative to privatize most military-owned housing to use private capital and construction expertise to replace or renovate inadequate housing faster than could be achieved using traditional funding methods at historical funding levels. At DOD's request, Congress enacted legislation in 1996 authorizing the Military Housing Privatization Initiative to allow private-sector financing, ownership, operation, and maintenance of military housing. DOD policy states that private-sector housing in the communities near military installations will be relied upon as the primary source of family housing. However, when communities do not have an adequate amount of suitable housing, DOD intends to use housing privatization—rather than military-owned housing financed with military construction funds—as the primary means for meeting family housing requirements. As of December 2005, the services had awarded 52 projects to privatize over 112,000 family housing units and had plans to award 57 more projects to privatize over 76,000 more units by 2010. Table 1 shows implementation status by service. Also, appendix II contains more detailed status information on the 12 projects at the installations we visited during this review. The duration of the initial development period—that is, the period when developers construct new housing units and renovate older units—varies among privatization projects, often lasting from 5 to 10 years. Thus, planned housing improvements resulting from privatization normally are not completed for several years after the projects are awarded. For all awarded projects as of September 2005, privatization developers had completed the construction of 10,911 new housing units and the renovation of 9,161 older housing units. Figures 1 through 5 show photographs of newly constructed and older privatized housing units at selected installations we visited. Servicemembers can choose whether or not to live in privatized housing—there are no mandatory assignments. 
Those who choose to live in privatized housing receive the same housing allowance (which is used to pay rent and utilities) as they would if they rented or purchased housing in the local communities. Within the Office of the Secretary of Defense (OSD), the Housing and Competitive Sourcing Office, which reports to the Deputy Under Secretary of Defense (Installations and Environment), provides oversight of the housing privatization program, but the primary responsibility for implementing it rests with the individual services. OSD designed and uses the program evaluation plan report to oversee the effectiveness of the program and the performance of awarded projects. The report, prepared semiannually for the periods ending June 30 and December 31, is a compilation of extensive data submitted by the services for each awarded project and includes information on project contract structure, construction and renovation progress, occupancy, financial performance, and servicemember satisfaction with the housing. This report is a continuation of a series of reports that we have issued on matters related to DOD’s housing privatization program as well as DOD’s process for determining housing requirements. The following summarizes key issues from these reports: In July 1998, we reported on several concerns as the housing privatization program began, including (1) whether privatization would produce insignificant cost savings and whether the long contract terms of many projects might cause the building of housing that will not be needed in the future; (2) whether controls were adequate to protect the government’s interests if developers failed to operate and maintain the housing as expected; and (3) whether DOD would face certain problems if privatized housing units were not fully used by military members and were subsequently rented to civilians, as the contracts permit. In March 2000, we reported that initial implementation progress for the privatization program was slow, the services’ life-cycle cost analyses provided inaccurate cost comparisons, and DOD lacked a plan for evaluating the effectiveness of the program. In June 2002, we reported that DOD needed to (1) revise its housing requirements determination process to take into account greater use of community housing as well as the projected impact that the housing allowance initiative might have on military installation housing requirements; and (2) improve the value of the primary privatization oversight report by completing the report on time, including information on funds accumulated in project reinvestment accounts, and obtaining periodic independent verification of key report elements. In May 2004, we reported that DOD needed to improve its revised housing requirements determination process to help ensure that housing investments, whether through military construction or privatization, were supported by consistent and reliable needs assessments. We also reported that DOD needed to survey servicemembers with dependents to update information on the housing preferences for family housing, given recent changes such as the increase in housing allowances. In response to each report, DOD officials have stated that they planned management actions to address our concerns. Although OSD and the services have implemented program oversight policies and procedures to monitor the execution and performance of privatized housing projects, opportunities exist for improvement. 
Although privatized housing projects are owned and managed by the private sector, DOD maintains a strong interest in their operational and financial performance because it is accountable for public funds expended and because, according to DOD officials, the military's housing objectives can be met only if the projects remain viable. Thus, adequate program oversight is essential to help monitor and safeguard the government's interests and to help ensure the long-term success of the program. However, we identified three areas of concern—the adequacy of the Navy's oversight methods, the usefulness of DOD's primary oversight report, and the consistency of tenant satisfaction data—which provide opportunities for enhancing the oversight of awarded privatization projects. Specifically, as evidenced by issues identified in some Navy and Marine Corps projects we visited, the Navy's oversight methods are not adequate to identify some project operational concerns or to ensure accurate reporting of project information. As a result, in contrast to the Army and the Air Force, which have more robust oversight methods, there is less assurance that Navy management could become aware of project performance issues in a timely manner in order to plan needed actions to mitigate the concerns. Also, the usefulness of OSD's primary program oversight tool—the semiannual privatization program evaluation report—has been limited because the report has not focused on key project performance metrics, has not been issued in a timely manner, and has included inaccuracies. Moreover, data on servicemember satisfaction with housing are inconsistent because DOD has not issued guidance to the services for collecting and reporting satisfaction information. As a result, data gathered to date cannot be readily tracked over time or compared among the services, and their value as a tool to more fully assess the impact of the privatization program, as well as the impact of DOD's overall housing program, on servicemember quality of life could be improved. The Navy's oversight program for monitoring Navy and Marine Corps projects has not adequately identified and addressed some project operational concerns, nor does it ensure accuracy in project information reported to DOD headquarters. Adequate program oversight is essential to help monitor and safeguard the government's interests and to help ensure the long-term success of the program. However, in contrast to the Army's and Air Force's oversight programs, the Navy's oversight program was less comprehensive and thus provided less assurance that Navy management would become aware of project performance issues in a timely manner. To illustrate, we found that the Army and the Air Force have robust, well-developed portfolio oversight programs to help top management monitor implementation of their privatization programs. Both of these services collected and analyzed detailed performance information on each project, including construction progress, construction costs, occupancy levels, rental revenues, operating expenses, net operating income, and the debt coverage ratio. These services prepared detailed project reports, which compared actual project performance data with expectations and discussed reasons for significant variances. The Army and the Air Force also prepared quarterly portfolio summary reports, which monitored project execution, analyzed trends, highlighted current and potential performance issues, and documented recent and planned actions to address any project concerns. 
In contrast, the Navy’s oversight program was less structured, included fewer details on project performance, and did not include summary oversight reports on portfolio performance, even though such reports were required by Navy guidance. Specifically, in February 2004, the Navy established a portfolio management group and assigned the group responsibility to oversee the Navy’s and Marine Corps’ housing privatization program. Although the group’s charter stated that it would review project performance information and prepare consolidated portfolio summary reports, Navy officials stated that no such reports had been prepared at the time of our review in January 2006, almost 2 years after the charter was approved. Navy officials initially told us that the required summary reports were not needed because portfolio monitoring was performed in other ways, such as a review of monthly status reports from each project. They further stated that the Navy intended to eliminate the reporting requirement. Subsequently, Navy officials told us that the summary performance reports were needed and would be prepared in the future. During our visits to Navy and Marine Corps privatization projects, we found instances where Navy oversight had not been adequate to identify and address some project operational issues and ensure accurate reporting of project performance information to OSD. For example: During our September 2005 visit to the Navy’s Kingsville II project at the Naval Air Station Kingsville, Texas, we found that project funds had not been disbursed in accordance with the project agreement. According to the agreement, 30 percent of the project’s net cash flow—that is, the rental revenue remaining after payment of expenses and debt service—was to be deposited to a Navy-owned reserve account to be available for future project needs. On the basis of the project’s net cash flow during the first and second quarters of 2005, over $42,000 should have been deposited to the Navy’s account. Yet, only $314 was deposited. When we asked about this, Navy officials initially told us that the deposit amount was correct and consistent with original expectations. When we again questioned the deposit amount, Navy officials stated that the funds had not been appropriately disbursed and that they had asked the project developer for a complete analysis of the reserve accounts from project inception. The officials subsequently stated that a deposit was made to correct the balance in the Navy reserve account. Navy officials also stated that, in light of the shortcomings identified, the project agreement would be amended to require deposit and disbursement reports for all reserve accounts and to ensure that the project’s annual audit included a compliance review. During our visit to the Navy’s South Texas project in September 2005, we found that the project had not reimbursed the Navy for police and fire protection services, as specified in a memorandum signed by the Navy in January 2002. The memorandum stated that the project would pay the Navy for police and fire protection services provided by the Naval Air Station Corpus Christi beginning in calendar year 2002. The initial annual payment was to be $84,756 with cost-of-living adjustments in future years. However, when we asked about the payments in September 2005, we were told that no payments had been made because the Navy had not processed the proper paperwork to bill the project for reimbursement. 
When we again asked about the reimbursement status in December 2005, Navy officials stated that they were working to resolve the issue. As of January 2006, 4 years after the project memorandum was initially signed, the Navy still had not billed the project for reimbursement. We found that inaccurate project status information was reported to OSD for five of the eight Navy and Marine Corps projects we reviewed in detail. For example, data reported to OSD on the San Diego II project showed that the project’s total development cost was $304 million, although the correct amount was $427 million. Also, data reported to OSD for the Camp Pendleton I project showed that the project’s reinvestment account balance was $725,000, although the correct balance was $104,000. Further, data reported to OSD for the Marine Corps’ Tri-Command project showed that no net operating funds or interest would be used to help finance the project during the initial development period even though project closing documents in March 2003 showed that $53.6 million from net operating funds and interest was expected to be used to help finance the project. Navy officials stated that corrections would be made in the information reported to OSD. During our review, Navy officials stated that they had begun a top-to-bottom evaluation of the privatization oversight program. They stated that our review had been helpful in identifying items that required attention, such as those we mentioned. The officials stated that while they believed that their current procedures protected the government’s interests and alerted top management to project concerns, they were conducting a comprehensive review to ensure consistency and completeness, upgrade the monitoring and oversight process, and make oversight responsibilities better defined and, perhaps, more aggressive. As part of the review, the officials stated that they intended to consider the Army’s and the Air Force’s oversight procedures and reports and also intended to ensure that appropriate portfolio performance summary reporting was completed in a timely fashion. The officials said that they planned to complete the review and implement oversight improvements by late spring 2006. OSD’s semiannual privatization program evaluation report is of limited usefulness because it is unwieldy and untimely and includes inaccurate information on some Navy and Marine Corps projects. Established in January 2001, the report is OSD’s primary tool for overseeing the program’s effectiveness and the performance of awarded projects. Although the report is a potentially useful tool for monitoring program implementation, the value of the report has been limited for several reasons. First, as the number of awarded projects has increased from 7, when the report was established, to 52 at the end of December 2005, the report is not well focused and has become unwieldy with the growing volume of data provided. The December 2004 report contained 268 pages and, unless changed, the report size will continue to increase as additional projects are awarded. A streamlined report that focuses on a few key performance metrics from each project could more readily highlight any operational or financial concerns that might require management attention. Both the Army and the Air Force portfolio summary reports include such focused information and thus might provide useful insight in restructuring the OSD report. Second, the report’s usefulness has been limited because the report is not timely. 
Although the report is not intended to provide for real-time monitoring of awarded projects—the individual services have this responsibility—information included in the report is so dated by the time the report is issued that its value, as a tool to highlight any operational or financial concerns to top management in a timely manner, is questionable. For example, the report containing project information as of December 31, 2004, was due March 15, 2005, but it was not issued until June 2005, 3 months late, and contained data that were about 6 months old. Similarly, the report containing project information as of June 30, 2005, was due by September 15, 2005, but was not issued until February 2006, almost 5 months late, making the information in it more than 7 months old. Third, the reports include inaccuracies because data reported by the services are sometimes incorrect. OSD officials stated that, although they review data submitted by the services for consistency and accuracy compared to other information provided to OSD, reported information has not been subjected to periodic independent verification to check accuracy. We previously noted similar concerns about the privatization program evaluation report. In our June 2002 report, we recommended that DOD improve the report’s value by completing the report on time, including information on funds accumulated in project reinvestment accounts, and obtaining periodic independent verification of key report elements. Although the report now includes information on funds accumulated in project reinvestment accounts, concerns remain about the report’s timeliness and accuracy. These concerns may be of additional importance given that the House Appropriations Committee requested in 2005 that DOD begin submitting a summary of the results of the program evaluation plan used to monitor the military housing privatization initiative to the committee and that information from the report has been cited in DOD testimony on the housing privatization program. The services have adopted different methods and time frames for collecting and analyzing information about servicemember satisfaction with privatized housing, largely because OSD has not issued guidance on how or when the data must be collected. This limits the data’s value for tracking occupant satisfaction over time as well as making service-to-service comparisons. Given that the overall goal of the housing privatization program is to improve the quality of life for servicemembers by improving the condition of military housing, DOD considers that one measure of program success is whether or not servicemembers are satisfied with privatized housing. To gauge servicemember satisfaction, OSD requires the services to collect and report satisfaction information from occupants at each awarded project as part of the input to the privatization program evaluation report. Specifically, OSD requires the services to survey occupants and to report the occupants’ responses to the question “Would you recommend privatized housing?” Data are reported separately for occupants of privatized housing that is newly constructed, newly renovated, and not renovated. Similar satisfaction information is not routinely collected from the majority of servicemembers who live in the communities surrounding military installations. 
The information required by OSD could be useful in assessing satisfaction levels over time and for comparing satisfaction levels among projects and the services to identify trends and factors contributing to higher or lower satisfaction levels. However, using satisfaction data for these purposes requires that the services collect consistent information, and this is not the case. Largely because OSD has not provided guidance on how or when the services should collect servicemember satisfaction data, the services have adopted different methods and time frames for collecting and analyzing satisfaction information.

The Army uses a contractor to survey privatized housing occupants annually between April and July. The 2005 survey asked 72 questions on various aspects of maintenance and property management services, unit condition, and amenities. Responses to most questions were requested using a 5-point scale—for example, where "1" represents very dissatisfied or no agreement and "5" represents very satisfied or extreme agreement. Prior to 2005, the Army's survey requested most responses on a 7-point scale. Army officials stated that the change was made to be more compatible with surveys performed by the other services. However, the requested response to the "Would you recommend privatized housing?" question was "yes" or "no," rather than a response on a 5-point scale. Therefore, because the Navy and the Air Force request that servicemembers respond to this question using a 5-point scale, the Army did not achieve compatibility with the other services in the responses to this key question.

The Navy uses a different contractor to survey privatized housing occupants at various times during the year. The survey asks 48 questions with responses requested on a 5-point scale, including the question on whether the occupant would recommend privatized housing. Navy officials stated that the Navy strives to survey each project once a year. However, surveys were not conducted at some Navy and Marine Corps privatized housing projects in 2004 or 2005, and for six projects, the Navy reported no satisfaction information to OSD for inclusion in the December 2004 privatization program evaluation report.

Air Force officials stated that until 2005 each privatized project conducted a local survey of occupants. However, due to disparities in the ways the survey was administered from one installation to another and because of the difficulty in achieving statistically significant response rates (for example, only nine responses were obtained from 382 tenants at the Patrick Air Force Base project in 2004), the Air Force decided to adopt a centralized approach. In June 2005, the Air Force used the same contractor as the Navy and surveyed occupants at all Air Force privatized projects. The survey asked 54 questions—mostly the same questions that the Navy asked—with responses requested on a 5-point scale, including the question on whether the occupant would recommend privatized housing.

With different survey methods, questions, and time frames, the information being collected cannot be readily used for the purposes of benchmarking, tracking, or comparing servicemember satisfaction levels. Thus, the information is less useful than it could be for measuring whether the privatization program is succeeding in its goal of improving servicemember quality of life.
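The comparability problem can be illustrated with a small, hypothetical sketch. The response data below are invented for illustration only and are not actual DOD or service survey results; the point is that converting 5-point ratings into a "percent who would recommend" requires an assumed cutoff, so the Army's binary results and the other services' scaled results cannot be benchmarked against each other without additional assumptions.

    # Hypothetical illustration of the comparability problem; the responses
    # below are invented and are not actual DOD or service survey results.
    from statistics import mean

    # Army-style responses to "Would you recommend privatized housing?" (yes/no)
    army_responses = ["yes", "no", "yes", "yes", "no", "yes", "yes", "yes"]

    # Navy/Air Force-style responses to the same question on a 5-point scale,
    # where 1 represents very dissatisfied and 5 represents very satisfied
    navy_responses = [5, 4, 2, 3, 5, 4, 1, 4]

    # The binary data yield only a "percent yes."
    army_percent_yes = 100 * sum(r == "yes" for r in army_responses) / len(army_responses)

    # The 5-point data yield a mean score; turning that score into a "percent
    # who would recommend" requires an assumed cutoff (here, ratings of 4 or 5),
    # which is an analyst's judgment rather than a measured equivalence.
    navy_mean_score = mean(navy_responses)
    assumed_cutoff = 4
    navy_percent_recommend = 100 * sum(r >= assumed_cutoff for r in navy_responses) / len(navy_responses)

    print(f"Army percent 'yes': {army_percent_yes:.1f}%")                     # 75.0%
    print(f"Navy mean score (1-5): {navy_mean_score:.1f}")                    # 3.5
    print(f"Navy percent at or above cutoff: {navy_percent_recommend:.1f}%")  # 62.5%

Even under the cutoff assumption, the two figures measure different things, which is why consistent scales, questions, and survey time frames matter for benchmarking and for service-to-service comparisons.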
Further, because housing satisfaction information is not routinely collected on servicemembers who do not live in privatized housing, DOD lacks complete information on the impact of its overall housing program on servicemember quality of life. Sixteen projects, or 36 percent, of 44 awarded privatization projects had occupancy rates below expectations with rates below 90 percent, as of September 30, 2005, raising concerns about project performance. Although the projects were justified on the basis of meeting military family housing needs, 20 projects have begun renting housing units to parties other than military families, including unaccompanied military personnel and the general public, in an attempt to keep rental revenues up. Still, rental revenues in some of the projects we visited have not met expectations, resulting in signs of financial stress such as having months where project revenues were insufficient to pay all project expenses. In the long term, if lower than expected occupancy and rental revenues persist, the result could be significantly reduced funds deposited into reserve accounts, which provide for future project needs and renovations. Or, in the worst case, there could be project financial failures. Factors contributing to occupancy challenges include poor condition of existing housing that has not yet been renovated in some projects, significantly increased housing allowances, which have made it possible for more military families to afford off-base housing thus reducing the need for privatized housing, and continued problems in DOD’s housing requirements determination process, which could result in overstating the need for privatized housing. Although deployments can also contribute to occupancy challenges, they were cited as a contributing factor to lower than expected occupancy rates in only 1 of the 12 projects we reviewed. The services are monitoring occupancy and revenue concerns, and in some cases, have taken or planned steps to address the concerns. However, without additional steps to help ensure that the size of future privatization projects is reliably determined, future projects could face similar occupancy and financial challenges. We found that some awarded projects, as shown in table 2, were not meeting occupancy expectations. According to service officials, the expected occupancy rate during a project’s initial development period, when many housing units are being constructed or undergoing renovation, is usually around 90 percent of the units available for rent. After completion of the initial development period, most projects expect occupancy rates of about 95 percent. As of September 30, 2005, occupancy was below expectations and below 90 percent in 6 of the Army’s 19 awarded projects, 4 of the Navy’s and Marine Corps’ 13 awarded projects, and 6 of the Air Force’s 12 awarded projects. Although most of these projects were in their initial development periods, 1 Navy and 2 Air Force projects were not. In total, of 85,590 privatized housing units available for rent, 77,355 units or 90 percent were occupied and 8,235 units or 10 percent were vacant. Occupancy rates would have been lower if 20 projects had not rented units to nontarget tenants—that is, tenants other than military families. Although the projects were justified on the basis of meeting the needs of military families, project managers are allowed to offer units for rent to nontarget tenants, when occupancy rates fall below expected levels for a designated period of time, such as 2 or 3 months. 
Normally, project managers follow a priority list, referred to as a tenant waterfall, when renting units to nontarget tenants. In a typical tenant waterfall, vacant family housing units are first offered to single or unaccompanied active duty military servicemembers; then to DOD-related individuals, such as retired military personnel and civilians and contractors who work for DOD; and finally to civilians in the general public. As of September 30, 2005, of 44 awarded projects, 20 projects, or 45 percent, had rented units to individuals other than military families. More specifically, 20 projects had rented 1,116 units to single or unaccompanied military personnel; 662 units to retired military personnel and civilians and contractors who work for DOD; and 299 units to civilians from the general public. In all, 2,077 family housing units were occupied by parties other than military families.

Although renting vacant units to nontarget tenants increases rental revenue, the practice raises some concerns. For example, although background checks are performed on prospective general public civilian tenants, several service officials stated that additional concerns exist when civilians live on military installations, such as whether they should have access to on-base amenities available to military families. Also, when units are rented to unaccompanied servicemembers, the rental revenue is usually less than with military family occupants because the rental rate is normally based on housing allowance rates, and the allowance rates for unaccompanied servicemembers are less than the rates for servicemembers with families. Therefore, although occupancy rates increase, the increase in rental revenues usually falls short of the revenue expectations for the units.

When project occupancy levels are less than expected, project rental revenues are less than expected, which can cause financial stress, such as having periods when revenues are insufficient to pay all expenses. If revenue shortfalls persist over the long term, few or no funds may remain after payment of operational expenses, debt service, and developer returns for deposit into the reserve accounts established to pay for future project needs and renovations. In a worst-case scenario, there could be insufficient funds to make a project's loan payments, which could lead to a financial default. Although the housing privatization program is relatively young and the majority of the projects awarded through September 2005 appeared financially healthy, lower than expected occupancy rates and rental revenues were causing financial stress in some of the projects we visited. The examples below illustrate the occupancy and financial challenges facing some projects, the reasons for the challenges, and steps taken or planned in response. While many vacancies involved older housing units not yet renovated, we also found vacancies involving newly constructed and renovated units.

At Fort Meade in July 2005, 2,044 units, or 81 percent, of the available units were occupied, compared to an expected occupancy of 2,332 units, or 92 percent. Army officials stated that the project's 491 vacant units were older units that had not been renovated. Occupancy would have been lower if the project had not rented units to nontarget parties. Of the occupied units, 205 units, or 10 percent, were occupied by nontarget tenants, including unaccompanied military servicemembers, military retirees, and DOD civilian employees.
The shortfall of 288 expected occupants had caused financial stress for the project. For example, the project’s net operating income was 33 percent below expectations for the quarter ending June 30, 2005. Army officials stated that lower than expected revenues had slowed the project’s construction progress because funds remaining after payment of project expenses were to be used to help pay for construction costs during the initial development period. The officials stated that lower than expected occupancy was caused by three main factors. First, the poor condition of much of the privatized housing that had not yet been improved made it unattractive to military families. Second, increased housing allowances made more local community housing affordable and caused many military families to decide to rent or buy housing off base. Third, recent private-sector housing development in the local communities surrounding Fort Meade increased the availability of local housing. In response to the occupancy and financial concerns, Army officials stated that plans were underway to restructure the project and reduce the project’s planned number of units. Army officials were optimistic that occupancy would increase as more units were renovated and additional new units were constructed, making the project more appealing to military families. At the Navy’s South Texas project, Navy officials stated that lower than expected occupancy had been a concern since the project’s beginning in February 2002. At the time of our visit in September 2005, the occupancy rate was 78 percent, with 311 units occupied and 87 units vacant. Navy officials stated that a key reason for low occupancy was that the project was still in its initial development period, and progress in improving housing conditions had proceeded much more slowly than planned. As a result, much of the privatized housing was in poor condition and unattractive to military families. However, of the 87 vacant units, Navy officials stated that only 11 were awaiting renovation or replacement, and the remaining 76 units consisted of newly constructed or renovated housing units. Other causes for low occupancy included reduced housing requirements caused by reductions in military personnel assigned to the area and increased housing allowances, which made more local community housing affordable for servicemembers. With reduced occupancy, the project had experienced signs of financial stress. For example, in July 2005, the project’s rental income was 26 percent below budget and was insufficient to pay the project’s operating expenses. Also, the project’s debt coverage ratio was a negative number, meaning that net operating income was insufficient to cover the project’s debt payment. Navy officials stated that the project faced little risk of financial failure during its initial development period because accounts were established at the project’s inception to provide for debt service payments during this period. Nevertheless, Navy officials expressed concern about the project and had taken actions to address the situation. In August 2004, an agreement was reached to reduce the project’s scope by 80 units, and Navy officials stated that further project scope reductions might be considered in the future. At Robins Air Force Base, 559 units, or 83 percent, of 670 available units were occupied in September 2005 compared to the expected occupancy rate of 97 percent. 
This project had completed its initial development period and consequently all available units are newly constructed or renovated. Of the occupied units, 109 units were occupied by nontarget tenants, including 42 civilians. Air Force officials stated two reasons for the low occupancy. First, increased housing allowances and attractive mortgage interest rates had caused some servicemembers to decide to purchase homes in the local community. Second, the project’s design, which included many two-bedroom units, was less appealing to some military families. As a result of the low occupancy rates, Air Force officials stated that the project faced significant financial challenges. The Robins project was one of three Air Force projects rated as unsatisfactory in the Air Force’s September 2005 portfolio summary report because of financial weakness and concerns about meeting developmental and/or financial obligations. Air Force officials stated that alternatives were being explored, which may require renegotiation of the project agreement with the developer to improve the project’s long-term financial viability. At Patrick Air Force Base, military families occupied 172, or only 29 percent, of the 592 available units. Nontarget tenants, including 135 unaccompanied servicemembers and 126 civilians, occupied 261 additional units to make the overall occupancy rate 73 percent compared to an expected occupancy rate of 90 percent. Air Force officials attributed the low occupancy to the poor condition of the project’s units, where planned improvements were far behind schedule. The project, which will consist of all new units when completed, had no new units ready for occupancy at the time of our visit in early December 2005. The officials also said that increased housing allowances had caused many military families to decide to obtain housing in the local community. Although the project’s nontarget tenants had significantly reduced the financial challenges that would have occurred if only military families occupied the housing, the project still faced financial stress. For the quarter ending September 30, 2005, the project’s net operating income was 28 percent below expectations. Largely because of financial issues, the project was restructured in April 2005 to increase debt and provide additional funds needed to complete the initial development period. As part of the restructuring, some funds that had initially been required to flow into the project’s reserve account for future project needs and renovation were allowed to be used for construction funding. Air Force officials stated their belief that, as housing improvements are completed, both occupancy rates and the number of military family tenants will increase and the project’s financial performance will improve. At the time of our visit to the Marine Corps’ Tri-Command project in early October 2005, the expected occupancy rate was 93 percent. However, the actual rate was 83 percent, with 1,393 of 1,680 available units occupied and 287 units vacant. Service officials stated that most vacant units were older units that had not been renovated. According to installation officials, the lower than expected occupancy rate was caused by increased housing allowances, which had led some servicemembers to decide to rent or buy housing in the local community. 
Also, although the project was awarded in March 2003, the project was still undergoing initial development and, with many of the planned housing improvements not yet completed, much of the on-base housing was in poor condition and unattractive to military families. With lower than expected occupancy, the project showed signs of financial stress. In September 2005, the project reported that rental revenues were 14 percent below expectations and the net operating income was 30 percent below expectations. Also, the project’s debt coverage ratio was .66, meaning that the project’s operations did not produce sufficient funds to cover the debt payment. Marine Corps officials stated that the project faced little risk of financial failure during its initial development period because accounts were established at the project’s inception to provide for debt service payments during this period. Still, the officials expressed concern about the project’s finances. In an effort to improve occupancy and financial performance, the project revised its revitalization strategy in August 2005 and obtained $44.1 million in additional private loans to finance upgrades to more housing units than originally planned to make the units more appealing to potential renters. Marine Corps officials stated that the revised strategy should result in improved project performance. Increases in monthly housing allowances and unreliable estimates of housing requirements contribute to occupancy concerns in some privatization projects by reducing the need for privatized housing or possibly overstating the required size of some projects. Some causes of occupancy concerns, such as changes in personnel assignments and deployments, often cannot be predicted and are beyond the control of the services. While deployments can contribute to occupancy challenges, they were cited as a contributing factor to lower than expected occupancy rates in only 1 of the 12 projects we reviewed. Also, as the condition of privatized housing at some installations improves with the construction of new housing and the renovation of older housing units, the projects may attract more military families and the occupancy rates may improve. However, other factors, such as the impact of DOD’s zero-out-of-pocket housing allowance initiative and the reliability of DOD’s overall housing requirements assessment process, can also affect occupancy rates and are important considerations in planning for future housing privatization projects. To help ensure that the size of housing projects is accurately determined, we previously reported that DOD needed to study how increased allowances might affect future housing needs and to make improvements in its requirements process to maximize reliance on local community housing, as required by DOD policy. Yet, because DOD has yet to implement these recommendations, the planned size of future privatization projects may not be based on reliable needs assessments, which could contribute to occupancy and financial challenges in some future projects. For example, in June 2002, we noted that uncertainties existed in the future need for military-owned and privatized housing because of DOD’s initiative to increase housing allowances. Prior to the initiative, servicemembers with families living in community housing received, on average, an allowance that covered about 81 percent of housing costs, including utilities. Servicemembers paid the remaining 19 percent of housing costs out of pocket using other sources of income. 
Under the initiative begun in 2001, housing allowances increased each year over a 5-year period, progressively eliminating the average out-of-pocket costs. By January 2005, the average housing allowance fully covered the average costs of housing and utilities in each geographic area, with the typical servicemember paying no additional out-of-pocket costs. Table 3 illustrates the increase in housing allowances for selected military paygrades in five locations before the initiative in 2000 and after the initiative in 2006.

Our report further noted that increased housing allowances from the zero-out-of-pocket initiative would have a significant impact on the military housing program. First, increased allowances should decrease the requirement for military-owned or privatized houses by making local community housing more affordable to servicemembers. Second, over time, the supply of community housing available to military families could increase and reduce the requirement for military-owned or privatized housing as private developers construct new housing near military installations to profit from renting to servicemembers at market rates. Third, increased allowances should allow DOD to better satisfy the preferences of most servicemembers to live off base and reduce demand for on-base housing. For these reasons, we recommended that DOD take into account the projected impact that the housing allowance initiative might have on military housing requirements. Yet, as of January 2006, DOD had not conducted detailed analyses to consider the effects of increased allowances on requirements, nor had the department provided guidance to the services on how these effects should be considered in their housing requirements assessments.

We also previously reported on changes needed to increase the reliability of DOD's housing requirements determination process. In May 2004, we noted that, although DOD had revised its process and made improvements, additional steps were needed to ensure consistency, accuracy, and maximum reliance on private housing in the communities surrounding military installations. Specifically, we noted that (1) DOD had not provided the services with timely detailed guidance for implementing the revised requirements process; (2) in the absence of detailed guidance, the services used inconsistent methods and sometimes questionable data sources and assumptions when determining family housing needs at various installations; and (3) as a result, DOD could not know with assurance how many housing units it needed and whether its housing investment decisions were justified. The report also noted that DOD's revised requirements process provided exceptions to the use of available, suitable local community housing at each installation. We noted that one exception—military mission requirements—appeared clearly justified, but the other exceptions did not and could result in the services identifying more on-base family housing requirements than were actually needed. For example, DOD's process allows an installation to include in its military-owned or privatized housing requirement a quantity of housing to accommodate up to 10 percent of the projected number of families at that installation, regardless of the availability of local housing.
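A simple hypothetical calculation suggests how this exception can inflate a stated requirement. The installation, unit counts, and simplified requirement logic below are assumptions for illustration only; they do not reproduce DOD's actual requirements determination methodology.

    # Hypothetical illustration of how the 10 percent exception can raise a
    # stated housing requirement. All figures and the simplified logic are
    # assumptions for illustration, not DOD's actual methodology.

    projected_families = 4000        # families projected at a hypothetical installation
    suitable_community_units = 3800  # suitable, affordable local housing assumed available

    # Requirement if available community housing is relied on to the maximum extent
    requirement_without_exception = max(0, projected_families - suitable_community_units)

    # Requirement if the installation retains military-owned or privatized housing
    # for up to 10 percent of projected families, regardless of community availability
    floor_under_exception = projected_families // 10
    requirement_with_exception = max(requirement_without_exception, floor_under_exception)

    print(requirement_without_exception)  # 200 units
    print(requirement_with_exception)     # 400 units

In this hypothetical case, the stated requirement doubles even though local community housing could absorb most of the projected families.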
To address these matters, we recommended that DOD provide the military services with more detailed guidance on implementing the revised housing requirements process to help ensure that housing investments, whether through military construction or privatization, were supported by consistent and reliable needs assessments. We also recommended that DOD review the rationale supporting the exceptions to using local community housing in an effort to reduce or narrow the scope of the exceptions and help maximize use of available community housing. In response, DOD stated that it planned to include detailed guidance on implementing the requirements process in a forthcoming revision to the DOD housing management manual. DOD officials stated that the revised manual would also include guidance narrowing the scope of the exceptions provided to the services in the use of available community housing. Although the revised manual was originally scheduled for issuance in December 2004, the manual had not been issued at the time of our review in January 2006. DOD officials stated that they were still revising the manual and that the final version should be issued during 2006.

The fact is that the scopes are currently based on static housing requirements and market analysis. Markets are not static, as is evidenced by the speed at which the private sector has provided housing thereby reducing subsequent requirements… The Air Force should carefully consider expectations for future (2 to 5 years) housing needs when establishing the scope of new projects…. Because of the delays between the date of the housing requirements and market analysis and the delivery of units, the Air Force may be building too many homes. Overbuilding in any project could pose a significant risk.

Adequate privatization program oversight is essential to help monitor and safeguard the government's interests and ensure the long-term success of the program. Unless the Navy follows through with its plans to improve its policies and procedures for overseeing its housing privatization program, Navy management will continue to lack assurance that it can become aware of project performance issues in a timely manner. Also, unless DOD streamlines its privatization program evaluation report to focus on key project performance metrics, completes the report on time, and obtains periodic independent verification of key report elements, the report's value as an oversight tool will continue to be limited. Further, until DOD provides guidance to the services to help ensure consistent collection and reporting of housing satisfaction information from all servicemembers, the information available to help measure this aspect of the privatization program's success, as well as the impact of DOD's overall housing program on quality of life, will also continue to be less useful than it could be. In the long term, if lower than expected occupancy rates and rental revenues at some privatization projects persist, the result could be significantly reduced funds flowing into reserve accounts that were established to provide for future project needs and renovations. In the worst-case scenario, the program could see project financial failures, which could affect the quality of housing available to military families.
Such concerns may occur in future privatization projects unless DOD fully considers the impact of increased allowances on housing requirements and implements improvements to its requirements determination process so that the planned size of future projects is reliably determined.

We recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense (Installations and Environment) to take the following five actions:
- Require the Navy to upgrade the monitoring and oversight of its housing privatization program to ensure consistency, completeness, and preparation of appropriate portfolio summary performance reports.
- Improve the value of DOD's privatization program evaluation report by streamlining the report to focus on key project performance metrics, completing the report on time, and obtaining periodic independent verification of key report elements.
- Provide guidance to the services to help ensure consistent collection and reporting of housing satisfaction information from all servicemembers, which would allow for benchmarking and tracking of tenant satisfaction over time as well as for making service-to-service comparisons.
- Determine how increased housing allowances from the zero-out-of-pocket initiative will most likely impact future family housing requirements and provide guidance on how the impacts should be factored into the services' housing requirements assessments.
- Expedite issuance of the revised DOD housing management manual and ensure that the revision includes guidance to improve the reliability of housing requirements assessments and reduce the scope of the exceptions provided to the use of available community housing.

In written comments on a draft of this report, the Director for Housing and Competitive Sourcing within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics fully agreed with three and partially agreed with two of our recommendations and stated that shortcomings identified in the report would be forthrightly addressed. Noting that our report was an important contribution to DOD's oversight of the housing privatization program to date, DOD stated that steps were already underway to streamline the privatization program evaluation report and improve the report's accuracy. Also, DOD intends to closely observe project vacancy rates in view of the increased housing allowance rates and ensure that the revised housing management manual, now scheduled for completion by the end of calendar year 2006, addresses the housing requirements issues identified in our report. DOD also stated that its privatization program evaluation report was not intended to provide real-time project oversight and that this was the role of the services' portfolio management systems. Our report does not imply that the evaluation report should provide for real-time project oversight. Nevertheless, because the evaluation report is the department's primary tool for evaluating the program's effectiveness, we continue to believe that a report that focuses on key performance metrics and contains accurate and timely information is important for OSD in carrying out its oversight and effective stewardship of the program.

DOD partially agreed with our recommendation that the Navy be required to upgrade the monitoring and oversight of its housing privatization program to ensure consistency, completeness, and preparation of appropriate portfolio summary performance reports.
DOD stated that it disagreed with our assumption that, because the Navy did not prepare summary portfolio briefings and the Navy's input to the privatization program evaluation report contained errors, the Navy was at risk of not being aware of potential problems with projects. DOD also stated that a review of other projects conducted by the Navy and a Navy consultant did not identify issues such as those we identified at the Kingsville II and South Texas projects. However, DOD stated that additional guidance was being developed for internal reviews of audits and financial data from general partners to ensure accurate monitoring and oversight of distributions. Finally, DOD stated that the cost of fire and police services at the South Texas project was not invoiced or reimbursed for 2 years, not for 4 years, as stated in our report.

We disagree with DOD's description of the Navy's oversight of its housing privatization program and continue to believe that, without improvement, the Navy is at risk of being unaware of potential problems with projects. First, our report notes that, in contrast with the Army and the Air Force, the Navy's oversight program was less structured, included fewer details on project performance, and did not include summary oversight reports on portfolio performance, even though such reports were required by Navy guidance. Also, as noted in our report, the Navy agreed that oversight improvements were needed and had begun conducting a comprehensive review to ensure consistency and completeness, upgrade the monitoring and oversight process, and make oversight responsibilities better defined and, perhaps, more aggressive. Further, we continue to believe that inaccurate project status information reported to OSD for five of the eight Navy and Marine Corps projects we reviewed indicates a lack of adequate oversight and attention to detail. Second, while the Navy and its consultant apparently did not identify issues at other projects, the Navy was developing additional guidance for internal reviews of audits and financial data from general partners to ensure accurate monitoring and oversight of distributions. We believe that this action indicates the Navy has recognized the need for better oversight and also raises the question of why such guidance was not already in place given that the housing privatization program began in 1996. Third, regarding the reimbursement for the cost of fire and police services at the South Texas project, our report contains information provided by top management in the Navy's housing privatization program, which we revisited with Navy officials several times over the course of this review. For example, we posed a written question to Navy headquarters housing officials in mid-November 2005 in which we reiterated a statement they had previously made to us that the Navy had never billed the South Texas project for fire and police services, and asked for the status of the issue. On November 22, 2005, an official on the staff of the Assistant Secretary of the Navy for Installations and Environment, without stipulating a set number of years, provided the following written response: "Navy Region South East and the installation are working to resolve this issue. As of November 15, 2005, the project had not been billed for the services." Subsequently, in a December 13, 2005, meeting with Navy privatization program officials, we again discussed this issue and were told that they were working to resolve the issue.
On January 25, 2006, a senior Navy housing official told us that the installation had billed the project and had received payment within the last month. When we asked about the month in which the billing occurred, the same official responded 2 days later that "The billing has not yet occurred." In view of these statements from the top Navy management officials responsible for overseeing the housing privatization program, we believe that DOD's comment that the cost of fire and police services at the South Texas project was not invoiced or reimbursed for 2 years, rather than 4 years, only helps to illustrate our point—that the Navy should be required to upgrade the monitoring and oversight of its housing privatization program.

DOD partially agreed with our recommendation that DOD provide guidance to the services to help ensure consistent collection and reporting of housing satisfaction information from all servicemembers, which would allow for benchmarking and tracking of tenant satisfaction over time as well as for making service-to-service comparisons. DOD stated that tenant survey guidance already exists and that it would not be suitable to overlay a programwide directive because of differences among the services in the data they need to help support their specific, negotiated business structures. However, DOD also stated that it would revise its guidance to require consistent use of a 5-point numerical system to measure tenant satisfaction across the services. DOD also agreed that (1) housing preferences should be surveyed for all servicemembers, not simply those occupying privatized housing; (2) it is important to reevaluate servicemember housing preferences driven by increased allowances and housing revitalization; and (3) a panel at the Office of the Secretary of Defense has been studying how best to implement such a survey.

The intent of our recommendation was not to require the services to use identical questions when assessing tenant satisfaction, but rather to ensure that the services' methods, questions, and time frames were of sufficient consistency to allow for benchmarking, tracking, or comparing servicemember satisfaction levels. Ensuring that the services use a consistent 5-point numerical system for measuring tenant satisfaction is a step in the right direction. However, we continue to believe that DOD needs to ensure that the services use consistent time frames in order to make maximum use of satisfaction information as a tool to help measure whether or not the privatization program is succeeding in its goal of improving servicemember quality of life. DOD's comments are reprinted in their entirety in appendix III.

We are sending copies of this report to other interested congressional committees; the Secretaries of Defense, Army, Navy, and Air Force; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-5581 or email me at holmanb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff members who made key contributions to this report are listed in appendix IV.
To determine whether opportunities exist to improve the Department of Defense's (DOD) oversight of awarded housing privatization projects, we summarized program implementation status and costs, compared the status to DOD's goals and milestones, and discussed issues affecting program implementation with DOD and service officials. We relied on program status data provided by DOD and the services and confirmed the status data for 12 privatization projects, but we did not otherwise test the reliability of the data. We also obtained, reviewed, and compared DOD and service policies, guidance, and procedures for monitoring implementation and measuring progress in the housing privatization program. We questioned DOD and service officials responsible for the program about how they oversee project performance, how they compare performance with expectations, and what actions they take when performance does not match expectations. We obtained and reviewed applicable oversight reports and assessed the extent to which the reports included key project performance data, trends, and discussion of any performance concerns. We also compared the issue dates of DOD and service oversight reports with the due dates to determine the timeliness of the reports; reviewed and compared the services' methods and time frames used to measure servicemember satisfaction with privatized housing; and reviewed the results of DOD and service efforts to assess servicemember housing preferences. Further, we visited selected military installations with housing privatization projects to review oversight at the local level, to examine project performance, and to determine whether performance information and concerns were adequately captured in oversight reports and provided to top management in a timely manner. Specifically, we visited Fort Meade, Maryland; Fort Stewart, Georgia; Naval Air Station Corpus Christi, Texas; Naval Air Station Kingsville, Texas; Naval Station San Diego, California; Patrick Air Force Base, Florida; Robins Air Force Base, Georgia; Marine Corps Base Camp Pendleton, California; and Marine Corps Air Station Beaufort, South Carolina. These installations were chosen because they contained established privatization projects, represented each of the military services, and included a balance of projects with and without challenges. Together, the installations contained 12 separate privatization projects.

To determine to what extent awarded privatization projects are meeting occupancy expectations, we interviewed DOD and service officials to discuss project occupancy expectations, the factors that contribute to lower than expected occupancy rates, the financial and other impacts that result from lower than expected occupancy rates, and the responses normally taken when occupancy is below expectations. We obtained, reviewed, and analyzed project occupancy rates and trends for all projects awarded as of September 30, 2005, and compared these data to occupancy expectations. We relied on occupancy data provided by the services and did not otherwise attempt to independently determine occupancy rates. Also, for the 12 projects at the installations visited, we reviewed project justification and budget documents to determine each project's occupancy expectations and compared actual occupancy rates with the expectations. When occupancy rates were below expectations, we reviewed project performance reports and interviewed local officials to determine the causes, consequences, and any actions taken or planned in response.
We also reviewed information on the number of privatized family housing units rented to parties other than military families and discussed the associated impacts with service officials. Further, we determined the status of steps taken by DOD in response to previous GAO recommendations to address concerns in the reliability of the services’ housing requirements assessments. We conducted our work from July 2005 through February 2006 in accordance with generally accepted government auditing standards. Table 4 provides details on the 12 housing privatization projects at the installations visited during this review. Barry W. Holman, (202) 512-5581 (holmanb@gao.gov) In addition to the person named above, Mark A. Little, Assistant Director; Janine M. Cantin; Susan C. Ditto; Gary W. Phillips; and Sharon L. Reid also made major contributions to this report.
The Department of Defense (DOD) intends to privatize about 87 percent of the military-owned housing in the United States by 2010. As of December 2005, it had awarded 52 projects to privatize over 112,000 family housing units and had plans to award 57 more projects to privatize over 76,000 more units over the next 4 years. The program, begun in 1996, has become DOD's primary means to improve family housing and to meet its housing needs when communities near installations do not have enough suitable, affordable housing. Because of expressed interest related to the oversight responsibilities of several committees, GAO assessed (1) whether opportunities exist to improve DOD's oversight of awarded housing privatization projects, and (2) to what extent projects are meeting occupancy expectations. Although DOD and the individual services have implemented program oversight policies and procedures to monitor the execution and performance of awarded privatized housing projects, GAO identified three opportunities for improvement. First, the Navy's methods for overseeing its awarded projects have not been adequate to identify and address operational concerns in some projects or to ensure accurate reporting of project information. As a result, there is less assurance that Navy management could become aware of project performance issues in a timely manner in order to plan needed actions to mitigate the concerns. For example, contrary to project agreements, funds from one project had not been deposited to a Navy reserve account to provide for future project needs, and the Navy had not been reimbursed for police and fire protection services provided to another project. Compared to the Navy, the Army and Air Force had more robust and comprehensive methods for overseeing awarded projects and GAO did not find similar oversight concerns in the Army and Air Force projects it reviewed. Second, the value of DOD's primary oversight tool--the semiannual privatization program evaluation report--has been limited because the report lacks a focus on key project performance metrics to help highlight any operational or financial concerns, has not been issued in a timely manner, and does not ensure data accuracy by requiring periodic independent verification of key report elements. Third, data collected on servicemember satisfaction with housing, which is important for benchmarking and tracking of satisfaction levels over time as well as for making service-to-service comparisons, are inconsistent and incomplete because DOD has not issued guidance to the services for standardized collection and reporting of satisfaction information for all servicemembers. Sixteen, or 36 percent, of 44 awarded privatization projects had occupancy rates below expectations with rates below 90 percent, as of September 30, 2005. In an attempt to increase occupancy and keep rental revenues up, 20 projects had begun renting housing units to parties other than military families, including 2,077 units rented to single or unaccompanied servicemembers, retired military personnel, civilians and contractors who work for DOD, and civilians from the general public. Still, rental revenues in some projects are not meeting planned levels, resulting in signs of financial stress. If lower than expected occupancy and rental revenues continue in the long term, the result could be significantly reduced funds available to provide for future project needs and renovations or, in the worst case, project financial failures. 
Factors contributing to occupancy challenges include increased housing allowances, which have made it possible for more military families to live off base thus reducing the need for privatized housing, and the questionable reliability of DOD's housing requirements determination process, which could result in overstating the need for privatized housing. DOD has yet to implement some previous GAO recommendations to improve the reliability of the requirements assessments supporting proposed projects.
EPA provides financial assistance to a variety of recipients, including states, tribes, and nongovernmental organizations, through assistance agreements such as grants. EPA awards these grants to recipients to meet local environmental priorities and national objectives established in federal law, regulations, or EPA policy. As we have previously reported, most federal grant-making agencies, including EPA, generally follow a life cycle comprising various stages—preaward (announcement and application), award, implementation, and closeout—for awarding grants, as seen in figure 1. The federal laws establishing EPA’s grant programs generally specify the types of activities that can be funded, objectives to be accomplished through the funding, and who is eligible to receive the funding. In addition to these statutory requirements, EPA has issued regulations governing its grants, which may impose additional requirements on recipients. EPA either notifies the public of the grant opportunity or notifies eligible state agencies about available grants, and applicants must submit applications to the agency for its review. In the preaward stage, EPA reviews applications to determine or verify which meet eligibility requirements and awards funding. EPA assigns project officers—who manage the technical and program-related aspects of the grants—and grant specialists—who manage the administrative aspects of grants—in program and regional offices to oversee the implementation stage of the grants. The implementation stage includes development of a grant work plan that outlines EPA and grantee agreed-upon goals, objectives, activities, and time frames for completion under the grant, such as developing certain water quality standards by the end of the year. It also includes payment processing, agency monitoring, and grantee reporting on the results of its individual grant as well as its contribution to program results. For example, results for an individual water quality grant might include the grantee using funds to develop water quality standards, whereas program results might include the grantee’s contribution to the number of water quality permits issued under the program as a whole. Grantees submit information on grant results to EPA through performance reports and progress reports, depending on the grant program. The closeout phase includes preparation of final reports, financial reconciliation, and any required accounting for property. EPA generally awards three types of grants: Formula grants. EPA awards these grants noncompetitively to states in amounts based on formulas prescribed by law to support water infrastructure projects, among other things. For example, grants from the Clean Water and Drinking Water State Revolving Funds can be used to support infrastructure, such as water treatment facility construction, and improvements to drinking water systems, such as pipelines or drinking water filtration plants. According to EPA data, in fiscal year 2015, EPA awarded about $2.25 billion of $3.95 billion (about 57 percent) of grant funds as formula grants, as shown in figure 2. Categorical grants. EPA generally awards these grants—which EPA also refers to as continuing environmental program grants— noncompetitively, mostly to states and Indian tribes to operate environmental programs that they are authorized by statute to implement. For example, under the Clean Water Act, states and tribes can establish and operate programs for the prevention and control of surface water and groundwater pollution. 
EPA determines the amount of funding each grantee receives based on agency-developed formulas or program-specific factors. In fiscal year 2015, EPA awarded about $1.09 billion of $3.95 billion (about 28 percent) of grant funds as categorical grants, according to EPA data. Discretionary grants. EPA awards these grants—competitively or noncompetitively—to eligible applicants for specific projects, with EPA program and regional offices selecting grantees and funding amounts for each grant. EPA primarily awards these grants to states, local governments, Indian tribes, nonprofit organizations, and universities for a variety of activities, such as environmental research, training, and environmental education programs. According to EPA data, in fiscal year 2015, EPA awarded about $0.513 billion of $3.95 billion (about 13 percent) of grant funds as discretionary grants. EPA also awarded $0.09 billion of $3.95 billion (about 2 percent) of grant funds to special appropriations act projects for specific drinking water and wastewater infrastructure projects in specific communities. Multiple federal and EPA requirements—established in laws and regulations—and EPA guidelines apply to monitoring the results of individual EPA grants and, more broadly, the results of EPA grant programs. The following requirements and guidelines form the basis of how EPA aligns individual grants to achieve the agency’s public health and environmental objectives: Federal laws: Authorizing statutes for certain EPA grant programs, most notably the Clean Water Act, require states—which receive grants from EPA to capitalize state clean water revolving funds—to report annually to EPA on how they have met the goals and objectives identified in their intended use plans for their revolving funds. EPA regulations: EPA regulations require grantees to submit performance reports to EPA as specified in their grant agreements at least annually and typically no more frequently than quarterly. Under EPA’s regulations, the grantee’s performance should be measured in a way that will help improve grant program outcomes, share lessons learned, and spread the adoption of promising practices. Additionally, under EPA’s regulations, the agency should provide grantees with clear performance goals, indicators, and milestones, and should establish reporting frequency and content that allow EPA to build evidence for program and performance decisions, among other things. Agency-wide policies and guidance: EPA policies, such as its environmental results directive, call for grant work plans and performance reports to link to the agency’s strategic plan and include outputs and outcomes. The environmental results directive, the Policy on Compliance, Review, and Monitoring, and related guidance also call for EPA program officials to review interim and final performance reports—or for certain programs, use a joint evaluation process—to determine if the grantee achieved the planned outputs and outcomes, and document the results of these reviews in EPA’s grants management databases. Additionally, the environmental results directive calls for EPA program offices to report on significant grant results through reporting processes established by national program managers, such as data submissions to EPA databases. Program-specific guidance: EPA program offices provide biennial guidance on each program’s priorities and key actions to accomplish health and environmental goals in EPA’s strategic plan. 
According to EPA officials, this guidance includes annual commitment measures, which guide implementation with EPA regions, states, tribes, and other partners. Many annual commitment measures include regional performance targets, which contribute to meeting EPA annual budget measures, and in turn, long-term strategic measures, according to EPA officials. EPA regional offices use these performance measures and targets to guide their negotiations with grantees on individual grant work plan outputs and outcomes. Grant-specific requirements: EPA incorporates requirements related to grantee reporting frequency, content, and reporting processes (i.e., written performance report, data submissions to an EPA database, or both) into individual grant terms and conditions as part of the final grant agreement. EPA and grantees also negotiate grant-specific outputs and outcomes, which grantees incorporate into their grant work plans. EPA monitors performance reports and program-specific data from grantees to ensure that grants achieve environmental and other program results, but certain practices hinder EPA’s ability to efficiently monitor some results. In addition, we identified a variety of monitoring issues that may hinder EPA’s ability to efficiently identify factors affecting grantee results. According to EPA policies and officials, after EPA approves grantee work plans that identify agreed-upon environmental and other results for each grant, grantees generally report information on their progress and grant results to EPA in two ways: (1) submitting performance reports— generally written—that describe the grantees’ progress toward the planned grant results in their work plans, such as using grant funds to provide technical assistance to local officials, and (2) electronically submitting program-specific data—generally numeric—on certain program measures, such as the number of hazardous waste violations issued, which EPA tracks in various program databases. According to an EPA official, the information streams from grantees differ in that the performance reports go to EPA project officers for the purpose of managing individual grants, whereas EPA program managers use the electronic data to monitor regional and program progress on EPA’s performance measures. Performance reports. At least annually, grantees are to submit performance reports to EPA as specified in their grant agreements. EPA policies include general guidelines about what performance reports should include, such as a comparison between planned and actual grant results, but allow the frequency, content, and format of performance reports to vary by program and grant. For more information on the performance reports we reviewed, see appendix II. According to EPA officials, EPA project officers monitor these reports to review grantee progress toward agreed-upon program results, such as providing outreach to communities about hazardous waste. Project officers conduct two types of routine grants monitoring: (1) baseline monitoring, which is the periodic review of grantee progress and compliance with a specific grant’s scope of work, terms and conditions, and regulatory requirements, and (2) advanced monitoring, which is an in- depth assessment of a grantee or a project’s progress, management, and expectations. EPA assigns a certain number of advanced monitoring reviews to each regional and program office annually. 
In 2015, OGD assigned program and regional offices to perform advanced monitoring for at least 10 percent of their active grantees, which program and regional offices select based on criteria such as the size of the grant and the experience level of the grantee, among others. EPA project officers document the results of their monitoring—for example, whether grantees have made sufficient progress and complied with grant terms and conditions—in EPA's grants management databases at least annually. Based on their baseline monitoring review, EPA project officers may impose more frequent or intensive grant monitoring, such as advanced monitoring, to address any identified concerns. According to EPA data, project officers recommended additional grant monitoring for 78 out of 2,987 reviews (about 3 percent) in 2015. Additionally, program and regional offices summarize any significant grants management-related observations or trends from their advanced monitoring reviews as part of their annual postaward monitoring plans.

Program-specific information. According to program officials, grantees also electronically submit program-specific information—generally numeric data—on certain results, such as the acres of brownfield properties made ready for reuse. According to EPA policy and program officials, program officials monitor these data to track and report program accomplishments, at the regional and agency levels, and, as applicable, to assess the agency's progress meeting its performance measure targets in support of agency strategic goals. According to EPA officials, generally grantees or EPA program officials—depending on the database—are to enter grant results, such as the number of enforcement actions, into EPA's program-specific data systems at agreed-upon intervals, such as quarterly. These requirements may be part of a grant's terms and conditions.

EPA Performance Measures and Data Systems

The number of performance measures and data systems that the Environmental Protection Agency (EPA) uses to collect and analyze data on environmental and other program results in 2016—including incorporating performance data from grantees as relevant—varies across the three program offices we reviewed. For example, the Office of Water collects or analyzes grantee data on results for 13 of its 15 grant programs using 20 data systems, and integrates the results as appropriate into its reporting on 111 annual commitment measures. The Office of Land and Emergency Management collects or analyzes grantee data on results for 10 of its 13 grant programs using 4 systems, and integrates the results as appropriate into its reporting on 34 annual commitment measures. The Office of Air and Radiation collects or analyzes grantee data on results for 7 of its 9 grant programs using 3 systems, and integrates the results as appropriate into its reporting on 54 annual commitment measures.

According to EPA officials, there is not always a direct link between individual grantee results and EPA's annual budget and annual commitment performance measures. However, officials told us that each regional or program office considers information from its program-specific data systems that is relevant to program- or agency-level performance measures, interprets it, and enters the results as appropriate into EPA's national performance tracking systems.
For example, Office of Water officials use data collected from grantees in its Drinking Water National Information Management System database to report annually in EPA’s national performance tracking system the number of Drinking Water State Revolving Fund projects that have started operations. EPA officials said that reporting grant and program results to EPA has improved over time, as EPA has transitioned from collecting data in hard copy and expanded electronic reporting by grantees. Additionally, officials we spoke with from several states said that electronic reporting had certain benefits. EPA officials told us that collecting certain information electronically from grantees allows EPA to access and analyze grant and program results more efficiently than it can for results collected in a written format, because EPA officials do not have to manually enter information into a data system for analysis. Additionally, in response to information-sharing problems—such as incompatible computer systems, manual data entry, and differing data structures across program offices— EPA and the Environmental Council of States formed the Environmental Information Exchange Network (Exchange Network) in 1998, an information-sharing partnership that uses a common, standardized format so that EPA, states, and other partners can share environmental data across different data systems. As a result, EPA and its partners may access and use environmental data more efficiently, according to Exchange Network documents. For example, officials we interviewed from each of the eight state environmental agencies we reviewed said that they use the information they collect for EPA to either manage their programs or inform the public. Additionally, even with some technical issues with individual databases, officials from six of these eight agencies said that electronic reporting has several benefits, such as improving data timeliness, greater efficiency, and reduced administrative burden. Furthermore, based on our review of agency policy, analysis, and planning documents, we found that current and past EPA initiatives have taken steps to reduce the reporting burden on grantees and others. For example: Since 1996 EPA has been authorized to issue performance partnership grants, which allow states, Indian tribes, interstate agencies, and intertribal consortia grantees to combine funds from certain EPA grant programs into a single grant. EPA designed this system to provide grantees with greater flexibility to address their highest environmental priorities and reduce administrative burden and costs, among other objectives. In 2015, EPA issued a policy to increase awareness and encourage the use of these grants. In 2008, EPA issued a policy to reduce reporting burdens for states awarded grants under 28 grant programs by establishing general frequencies for grant work plan progress reports and specifying that EPA regional offices could only require more frequent progress reports in certain circumstances. In 2012, EPA’s OGD contracted with external experts to review its grants management processes and identify improvements as part of EPA’s Grants Business Process Reengineering Initiative. This initiative seeks to streamline and standardize the grants management process at EPA and develop an improved business process to be implemented through EPA’s new grants management data systems. 
The study identified several potential high-level improvements, such as reducing manual activities and expanding standardization in documents to ensure greater consistency and reduce administrative burden.

In 2013, EPA and states established a leadership council for E-Enterprise for the Environment—a joint initiative to streamline and modernize business processes shared between EPA and regulatory partners, such as states, and reduce reporting burden on regulated entities, among other goals. For example, in 2015, EPA and states initiated the Combined Air Emissions Reporting project, which seeks to streamline multiple emissions reporting processes at the federal, state, and local levels, according to EPA's website. The project will establish a single, authoritative data repository that will reduce the industry and government transaction costs for reporting and managing emissions data through features such as autopopulated forms and data sharing across regulatory agencies.

In 2015, EPA finalized an electronic reporting rule that requires, among other things, states that receive grants to issue National Pollutant Discharge Elimination System permits to substitute electronic reporting for paper-based reports, saving time and resources for states, EPA, and permitted facilities. According to an EPA economic analysis, when fully implemented, the new rule will eliminate 900,000 hours of reporting across regulated entities and state agencies. According to EPA's fiscal year 2017 budget, the agency plans to further reduce the reporting burden by 1 million hours by the end of fiscal year 2017.

In 2016, EPA's OGD issued its 2016-2020 Grants Management Plan, which includes several streamlining efforts specific to grants. For example, under Goal 2: Streamline Grants Management Procedures, EPA plans to evaluate its grants management processes and assess opportunities to streamline its procedures. Under this goal, EPA also plans to provide a mechanism for staff to submit feedback about existing burdens and new requirements or procedures. Furthermore, under Goal 4: Ensure Transparency and Accountability and Demonstrate Results, EPA plans to improve its process for monitoring grants and will collect input from external stakeholders, such as states and grantees, about how to address burdens.

Based on our review of the three program offices that award the majority of EPA grant funding, we found that certain EPA monitoring practices in these offices hinder EPA's ability to efficiently monitor some results and may increase EPA's and grantees' administrative burden. First, EPA collects a variety of information about grant results, but some of the information is not readily accessible. Second, EPA collects certain information from grantees twice, once in a written report and once in an electronic database. Third, one program office transfers data relevant to its annual performance measures from its program-specific databases to EPA's national database manually rather than electronically. EPA officials and officials from several state environmental agencies whom we interviewed said that these practices increase their administrative burden.

EPA collects a variety of information about grant results through grantee performance reports and program-specific databases. However, some of the information was not readily accessible to project officers or grantees.
Based on our review of performance reports across 23 grant programs, we found that the types of results that grantees reported, such as data collection and management, covered a variety of topics and were generally similar across programs, as shown in table 2. Additionally, we found that grantees electronically report a variety of information about grant results to program-specific databases, such as enforcement actions and environmental benefits of water infrastructure projects. However, only some of the information reported by grantees was readily accessible, either to the public through user-defined searches on EPA’s website or to grantees through accessing an EPA database directly. This is because the information in grantees’ performance reports is stored as file attachments to database records and EPA’s legacy grants management databases do not have the capability to search data stored in this format. For instance, a program manager that wanted to obtain information on the number and types of training activities funded by a particular grant program—and that are not reported to a program-specific database—would need project officers to open each performance report individually and manually review it for relevant information. OGD officials told us that—depending on the availability of funds—they plan to develop a web-based portal for grantees to submit documents, including their performance reports, centrally as part of their new grants management database. Under EPA’s regulations, grantee performance should be measured in a way that will help improve grant program outcomes, share lessons learned, and spread the adoption of promising practices. EPA has procedures in place to collect this information through its program-specific databases and performance reports. However, we have previously found that for performance information to be useful, it should meet users’ needs for consistency, relevance, accessibility, and ease of use, among other attributes. EPA’s 2014 internal analysis of its grants management business processes identified improvements that if implemented into EPA’s planned web-based portal, could improve the accessibility and usefulness of information in grantee performance reports for EPA, grantees, and other users. For example, the analysis found that incorporating expanded search capabilities into EPA’s new grants management database, such as keyword searches, could improve users’ access to relevant information. However, it is unclear to what extent, if at all, these features will be applied to the web-based portal because the high-level analysis does not specify how performance reports will be stored and accessed through the web-based portal. Because EPA, grantees, and other users cannot readily access information in performance reports about grant results and how different grantees achieve them, these reports are less useful for sharing lessons learned and building evidence for demonstrating grant results. Making the information that EPA collects in these reports more accessible by incorporating expanded search capability features, such as keyword searches, into its proposed web-based portal for collecting and accessing performance reports, could improve its usefulness to EPA and grantees in identifying successful approaches to common grantee challenges. 
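To illustrate the kind of expanded search capability discussed above, the following is a minimal Python sketch of a keyword index built over report text that has already been extracted from attached files. The report identifiers, function names, and sample text are hypothetical and are not part of any EPA system; the sketch simply shows why indexed text can be queried without opening each report manually.

# Illustrative sketch only: a simple keyword index over extracted report text.
from collections import defaultdict

def build_keyword_index(reports):
    """reports: dict mapping a report ID to its extracted text."""
    index = defaultdict(set)
    for report_id, text in reports.items():
        for word in set(text.lower().split()):
            index[word].add(report_id)
    return index

def search_reports(index, keyword):
    """Return the IDs of reports containing the keyword."""
    return sorted(index.get(keyword.lower(), set()))

# Hypothetical example: find reports that mention training activities.
reports = {
    "R1-2015-001": "Grantee delivered two training workshops for local officials.",
    "R4-2015-017": "Funds supported water quality monitoring and data collection.",
}
index = build_keyword_index(reports)
print(search_reports(index, "training"))  # ['R1-2015-001']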
Additionally, improved accessibility could facilitate EPA's ability to assess and report environmental and program results achieved through its grants by reducing the need to manually open and review each performance report to identify relevant information.

EPA collects certain information from grantees twice—once in a written report and once in an electronic database—and in some cases, we found varying degrees of overlap between the content of the performance reports and program-specific databases that we reviewed. Specifically, of the performance reports we reviewed across 23 grant programs, we found that one or more grantee performance reports included information that grantees also report to EPA through a program-specific database for 12 programs, as shown in table 3. For 10 of these programs, the content in 15 of the performance reports we reviewed had some overlap with data submitted through relevant program-specific databases, and for 5 of the programs, 12 reports we reviewed had substantial overlap. For more information on the program-specific databases we reviewed, see appendix II.

Additionally, officials we interviewed from five of the eight state environmental agencies we reviewed confirmed that under current reporting requirements, they reported the same information to EPA twice—once electronically and once in a written performance report, which increased their administrative burden. Specifically, these state officials provided the following examples:

Much of grantee reporting for the Clean Water State Revolving Fund—information reported electronically to EPA—is also reported separately in the written state revolving fund annual performance report.

Grantees report the same activities in the Public Water System Supervision program that they report separately to EPA's state revolving fund databases for the state program set-asides, funded by the Drinking Water State Revolving Fund.

Under the State Hazardous Waste Management Program, EPA calls for grantees to include permitting, compliance, enforcement, and corrective action activities and accomplishments—already reported to EPA electronically—in their performance reports.

Because of different programmatic and reporting needs for water program grants, officials often find themselves reporting the same data multiple times in different formats.

Grantees submit data on actions to address nonpoint source pollution to EPA electronically throughout the year—which grantees also report separately to EPA in the annual performance reports for Nonpoint Source Pollution Grants, as required by the Clean Water Act.

Officials we interviewed from five of the eight state environmental agencies said that EPA could work with states to evaluate how grantees report and further streamline reporting and data collection. Officials we interviewed from one state agency said that with limited resources, they have no capacity for additional reporting requests, without some modification to reporting schedules or simplification of the reporting process. According to EPA officials, EPA's reporting process has evolved over time in response to statutory changes, such as amendments to the Government Performance and Results Act of 1993—which generally requires that agencies develop performance goals that are expressed in objective, quantifiable, and measurable form and annually report on their performance in meeting those goals.
Additionally, to facilitate grantees’ timely reporting and access to environmental data, EPA and its partners have expanded electronic reporting to program-specific databases through the Exchange Network data-sharing partnership with states and others, according to EPA and Exchange Network documents. Furthermore, EPA officials told us that collecting information in both written performance reports and program-specific databases is beneficial because the information serves different purposes. Specifically, EPA officials said that performance reports are designed to provide project officers with information in the format they need for monitoring grantee progress, for example, narrative information on grantee activities to achieve results. Similarly, program-specific databases are designed to provide program managers with information in the format they need for monitoring program progress, for example, information that will allow them to report national-level results. However, officials from two of the three program offices we reviewed said that project officers either currently used, or could use, data within some program-specific databases to help monitor grantee progress. Because EPA collects certain information in both performance reports and program-specific databases for 12 of the programs we reviewed, some grantees have an increased administrative burden, which may result in fewer resources dedicated to activities that directly protect human health and the environment. Our prior work and EPA analyses of its business processes have shown that duplication of efforts can increase administrative costs and reduce the funds available for other priorities. By identifying grant programs where existing program-specific data reporting requirements can meet EPA’s performance reporting requirements for grants management purposes, the agency can help reduce duplicative reporting for grantees in a manner consistent with EPA’s ongoing streamlining efforts. Because one program office we reviewed, the Office of Water, transfers certain data relevant to program results from its program-specific databases to EPA’s national database manually, this office does not benefit from greater data quality control, accessibility, and administrative efficiencies reported by another program office that electronically transfers data relevant to program results. Specifically, the Office of Land and Emergency Management transfers data relevant to most of its annual commitment measures from its program-specific databases to EPA’s national database electronically, using EPA’s Performance Assessment Tool business intelligence software. According to Office of Land and Emergency Management officials, the software provides several advantages to manual data transfer, including improved accuracy, efficiency, the ability to trace data between the different data systems, and improved data accessibility for EPA program managers. In contrast, the Office of Water manually transfers data relevant to its annual commitment measures from its program-specific data systems to EPA’s national performance database—the Budget Automation System— using a spreadsheet. According to Office of Water officials, they are not currently planning to develop the capability to transfer data electronically because EPA is in the process of replacing its Budget Automation System with a new system. 
Instead, these officials said that the office is using other technology tools—such as collaboration software—to make the data transfer within EPA more efficient and reduce errors. However, an Office of Water official acknowledged that the quality assurance process for data transferred manually is lengthy. Standards for Internal Control in the Federal Government states that control activities can be implemented in either an automated or a manual manner but that automated control activities tend to be more reliable because they are less susceptible to human error and are typically more efficient. Furthermore, EPA planning documents and analyses demonstrate the potential benefits of improving efficiency in government operations by using automated control activities, such as reduced administrative burden and cost savings. However, by transferring data from its program-specific databases to EPA’s agency-wide system manually, the Office of Water does not benefit from the greater data quality control, accessibility, and administrative efficiencies available from electronic transfer of data. By adopting software tools, as appropriate, to electronically transfer relevant data on program results from program- specific databases to EPA’s new national performance system, the Office of Water could reduce its administrative burden. Our review of 49 written performance reports across 23 grant programs identified a variety of monitoring issues related to EPA’s environmental results directive. First, we found that project officers may interpret EPA’s environmental results directive differently because the directive is unclear. Second, in some cases, grantees did not include references to the agreed-upon outputs and outcomes from their work plan to demonstrate progress achieving planned results. Third, because grantees submit performance reports in a written format, there are no built-in quality controls to ensure these reports’ consistency with EPA’s directive. Each of these issues may have contributed to the inconsistencies we found in the reports we reviewed. Inconsistencies in grantee reports may make it more difficult for EPA project officers to efficiently identify or report patterns in factors affecting grantee’s achievement of their agreed-upon results. We found that individual project officers may be interpreting EPA’s environmental results directive differently because the directive is unclear. Specifically, we found that reports’ consistency with the directive varied by grantee and across some of the grant programs we reviewed. One reason for these variations may be that project officers have different interpretations of EPA’s directive, as the directive does not provide specific criteria for evaluating performance reports’ consistency. EPA’s environmental results directive establishes EPA’s policy to ensure that grant outputs and outcomes are appropriately addressed in grantee performance reports, to the maximum extent practicable. Specifically, it calls for program offices to review performance reports and determine whether the grantees achieved the environmental or other outputs and outcomes in their grantee work plans, which includes assessing whether grantee explanations for unmet outputs or outcomes are satisfactory. According to the directive, the results of this review should be included in EPA’s official project file for each grantee. 
However, the directive does not specify what factors the project officers who manage grants should consider when determining whether the grantees’ addressing of outputs and outcomes in their performance reports is appropriate. Based on our review of performance reports, we found that the level of detail in grantees’ descriptions of how they addressed grant outputs and outcomes varied across the reports we reviewed. For example, some grantees reported completing or providing training activities without including additional information on the topic, date, or number of attendees. In contrast, other grantees provided specific information on training, such as which employees attended training, the various courses, and dates of classes. Similarly, the directive does not specify what factors project officers should consider when determining whether a grantee’s explanation for an unmet output or outcome in a performance report is satisfactory. For example, we found that 17 of 49 (about 35 percent) grantee performance reports were consistent with EPA’s directive because they included explanations for each outcome they did not achieve, and 20 of 49 (about 41 percent) grantee performance reports were partially consistent with the directive because they did not include explanations for all missed outcomes. For the remaining 12 grantee performance reports (24 percent), we could not determine whether the reports were consistent with EPA’s environmental results directive because they did not include any references to the agreed-upon outputs and outcomes from the grantee work plan. (See table 4.) According to federal standards for internal control, management should implement control activities through policies. Additionally, these standards state that each unit within an agency also is to document policies in the appropriate level of detail to allow management to effectively monitor the control activity. With its environmental results directive, EPA has implemented certain control activities through its policy to help ensure that grantee performance reports appropriately address planned results from grantee work plans. However, the inconsistencies we found in our review of performance reports may indicate that the guidelines within EPA’s environmental results directive may not be at a sufficient level of detail for EPA to effectively monitor its implementation. By clarifying its directive or guidance to discuss the factors project officers should consider when determining whether reports appropriately address planned results and include satisfactory explanations for unmet results, EPA would have better assurance that project officers are implementing its environmental results directive consistently. In turn, implementing its directive consistently may help EPA demonstrate the achievement of environmental results from its grants, and also help project officers better identify or report patterns in factors that are affecting grantees’ achievement of planned results. For 12 of the 49 (24 percent) performance reports we reviewed, grantees did not include references to the agreed-upon outputs and outcomes from their work plan to demonstrate progress in achieving planned grant results. Because some grantees did not include information from their work plans in their performance reports, we could not determine whether these grantees achieved their planned results or provided explanations for any results they did not achieve, in accordance with EPA’s environmental results directive (see table 4). 
To assess these grantees’ progress, the project officer managing the grant would have to manually compare the information in each grantee’s performance report against the grantee’s work plan to determine if the actual results matched the planned results. During a 2010 EPA-contracted review of performance reports’ consistency with EPA’s environmental results directive, the contractor identified the same issue with several performance reports. Specifically, although the contractor found that 147 out of 157 (about 94 percent) performance reports were greater than 60 percent consistent with EPA’s directive, for 55 of these performance reports, the contractor determined their consistency by inference because the performance reports did not contain explicit linkages to planned outcomes within the grantee work plans. Consequently, to improve the consistency of performance reports with EPA’s environmental results directive, the contractor recommended that EPA consider encouraging grantees to more clearly label the planned outputs and outcomes from their work plans in their performance reports. In fiscal year 2013, EPA implemented a policy for certain categorical grant programs that calls for grantee performance reports to include certain elements, including an explicit reference to the planned results in the work plan and projected time frame. However, this policy does not apply to all EPA grants, including formula grants and other categorical grants. Expanding aspects of this policy, specifically, the call for performance reports to include an explicit reference to the planned results in the work plan and projected time frames, could achieve several benefits identified in the 2010 review. By increasing the extent to which grantees clearly label the planned results from their work plans in their performance reports, EPA would facilitate project officers’ review of grantee progress, reduce the subjectivity of the review, and increase transparency between EPA and grantees about planned grant results. Because grantees generally submit written performance reports, there are no built-in data quality controls, such as those for certain electronic reporting formats, to ensure that these reports are consistent with EPA’s environmental results directive. In contrast, we found that some of EPA’s program-specific databases include built-in quality controls, such as required fields, drop-down menus, or other data entry rules designed to ensure that the information entered is complete, accurate, and consistent. Because there are no built-in quality controls for written performance reports, EPA project officers must manually review each performance report to determine consistency with EPA’s directive. An OGD official told us that OGD plans to develop a web-based portal for grantees to submit documents, including their performance reports, electronically as part of its new grants management database. However, the business process analysis underlying the web-based portal feature of the new database does not specify whether these reports would continue to be uploaded by grantees as attachments or input directly into an application with built-in data quality controls, such as required fields, to ensure consistency with EPA’s directives. The OGD official said that the office will not explore options for the web-based portal, including a timeline, until it has migrated from the old database to the new system, which it expects to complete in fiscal year 2018. 
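As a rough illustration of the kind of built-in quality controls described above—required fields and data entry rules—the following Python sketch checks a hypothetical electronic report entry against the planned results in a work plan and flags unmet results that lack an explanation. The field names and data structure are assumptions made for illustration; they do not describe EPA's planned portal design.

# Illustrative sketch only: validation rules a web form could apply on submission.
def validate_report_entry(entry, planned_results):
    """entry: dict keyed by planned result ID, each with 'met' (bool) and optional 'explanation'.
    planned_results: list of result IDs from the approved work plan."""
    errors = []
    for result_id in planned_results:
        if result_id not in entry:
            errors.append(f"Missing status for planned result {result_id}")
            continue
        status = entry[result_id]
        if "met" not in status:
            errors.append(f"Required field 'met' missing for {result_id}")
        elif not status["met"] and not status.get("explanation", "").strip():
            errors.append(f"Explanation required for unmet result {result_id}")
    return errors

# Hypothetical example: one unmet result lacks the required explanation.
planned = ["OUTPUT-1", "OUTCOME-2"]
submission = {
    "OUTPUT-1": {"met": True},
    "OUTCOME-2": {"met": False, "explanation": ""},
}
print(validate_report_entry(submission, planned))
# ['Explanation required for unmet result OUTCOME-2']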
According to federal standards for internal control, control activities may be manual or automated. EPA has manual control activities for implementing its environmental results directive, which is consistent with these standards. However, a 2014 analysis of EPA's grants management business processes found that EPA relied heavily on manual processes and could incorporate several improvements into its new grants management database system, including using electronic templates to increase information consistency and reduce the administrative burden of manual activities. By incorporating built-in data quality controls for performance reports into its planned web-based portal, EPA could improve these reports' consistency with the environmental results directive and potentially reduce project officers' administrative burden in performing manual reviews. Furthermore, improved consistency in performance reports could help EPA project officers to more efficiently identify or report patterns in factors that are affecting grantees' achievement of their agreed-upon results.

EPA has adopted a number of good practices for monitoring environmental and other program results from the nearly $4 billion it distributes each year in grants, in part to implement environmental statutes and regulations. Furthermore, EPA continues to pursue opportunities to streamline its processes and reduce the reporting burden for regulated entities and grantees. Yet certain monitoring practices—collecting some grant results in a format that is not accessible, collecting some information from grantees twice, and manually transferring data between databases—increase EPA's and grantees' administrative burden in monitoring and reporting environmental and program results. By incorporating expanded search capability features, such as keyword searches, into its proposed web-based portal, EPA can improve the accessibility of information in grantees' performance reports and make them more useful for sharing lessons learned and building evidence for demonstrating grant results. In addition, by identifying grant programs where existing program-specific data reporting can meet EPA's performance reporting requirements for grants management purposes, the agency can eliminate duplicative reporting by grantees in a manner consistent with EPA's ongoing streamlining efforts. Furthermore, by adopting software tools, as appropriate, to electronically transfer relevant data on program results from program-specific databases to EPA's new national performance system, the Office of Water could reduce its administrative burden.

EPA has also implemented certain internal controls, such as its environmental results directive, to ensure that grantees achieve the environmental and other planned results in their work plans. However, we identified a variety of monitoring issues related to EPA's environmental results directive—such as unclear guidance, the omission of references to planned results in performance reports to document progress, and written grantee performance reports that do not have built-in quality controls—that may undermine these efforts. By clarifying its directive or guidance to discuss the factors project officers should consider when determining whether performance reports are consistent with EPA's environmental results directive, EPA would have better assurance that project officers are implementing its directive consistently.
In addition, expanding aspects of EPA's policy for certain categorical grants to all grants—specifically, the call for performance reports to include an explicit reference to the planned results in grantees' work plans and their projected time frames for completion—would, among other things, facilitate project officers' reviews of grantee progress. Finally, by incorporating built-in data quality controls for performance reports into its planned web-based portal, EPA could improve these reports' consistency with the environmental results directive and potentially reduce project officers' administrative burden in performing manual reviews.

We recommend that the EPA Administrator direct OGD and program and regional offices, as appropriate, as part of EPA's ongoing streamlining initiatives and the development of a grantee portal, to take the following six actions:

1. Incorporate expanded search capability features, such as keyword searches, into its proposed web-based portal for collecting and accessing performance reports to improve their accessibility.

2. Identify grant programs where existing program-specific data reporting can meet EPA's performance reporting requirements for grants management purposes to reduce duplicative reporting by grantees.

3. Once EPA's new performance system is in place, ensure that the Office of Water adopts software tools, as appropriate, to electronically transfer relevant data on program results from program-specific databases to EPA's national performance system.

4. Clarify the factors project officers should consider when determining whether performance reports are consistent with EPA's environmental results directive.

5. Expand aspects of EPA's policy for certain categorical grants, specifically, the call for an explicit reference to the planned results in grantees' work plans and their projected time frames for completion, to all grants.

6. Incorporate built-in data quality controls for performance reports into the planned web-based portal based on EPA's environmental results directive.

We provided a draft of this report to EPA for its review and comment. In its written comments, reproduced in appendix III, EPA stated that it agreed with our findings and six recommendations. EPA also provided technical comments, which we incorporated into the report as appropriate.

EPA agreed with our recommendation that the agency incorporate expanded search capability features into its proposed web-based portal for performance reports and stated that incorporating such features would enable easier access to performance report information. EPA also noted that the web-based portal is a long-term initiative, subject to the agency's budget process and replacement of its existing grants management system, which the agency expects to complete in fiscal year 2018.

EPA generally agreed with our recommendation that the agency identify grant programs where existing program-specific data reporting by grantees can also meet EPA's separate performance reporting requirements, to reduce duplicative reporting by grantees. EPA stated that it will work with recipient partners to identify where duplicative reporting can be reduced and anticipates completing this effort by the end of fiscal year 2017.
However, EPA noted that program-specific data cannot be relied upon to meet all of the agency’s grants management needs and that performance reports often contain other information that allows EPA project officers to monitor a recipient’s progress in meeting work plan commitments, which cannot be gleaned from output data entered into the agency’s program-specific tracking systems. Additionally, EPA said that not all project officers have access to program-specific databases which would require the agency to consider expanding project officer access to those databases to enhance grant performance monitoring. EPA agreed with our recommendation that the agency ensure that the Office of Water adopts software tools to electronically transfer relevant data from program databases to EPA’s national performance system, as appropriate. EPA stated that it will also apply this recommendation to all program-specific databases—not just Office of Water databases—where appropriate and cost-effective. EPA also noted that in some cases, not all data from program-specific databases may be appropriate for direct electronic transfer because some individual grant data may need to be analyzed before being summarized at the national level. EPA agreed with our recommendation that EPA clarify the factors project officers should consider when determining whether performance reports are consistent with EPA’s environmental results directive. EPA stated it will modify the implementation guidance for the directive in fiscal year 2017. EPA agreed with our recommendation that EPA expand aspects of EPA’s policy for certain categorical grants, specifically, the call for an explicit reference to the planned results in grantee work plans and their projected time frames for completion, to all grants. EPA stated it will revise the existing policy in fiscal year 2017. EPA generally agreed with our recommendation that the agency incorporate built-in quality controls for performance reports into the planned web-based portal based on EPA’s environmental results directive. However, EPA noted that identifying and deploying the appropriate data quality controls is a long-term effort subject to budgetary considerations, completion of the agency’s replacement of its existing grants management system, and extensive collaboration with internal and external stakeholders. EPA also stated that full achievement of built-in quality controls, such as electronic templates, as envisioned in the draft report would require standardized work plan and performance report formats subject to clearance from the Office of Management and Budget. Additionally, EPA noted that grant recipients and EPA program offices have considered but generally not supported standardizing work plans and performance reports in the past. As a first step in implementing this recommendation, EPA stated that it would seek feedback from the recipient and program office community and will initiate this process in fiscal year 2017. We recognize that EPA has considered standardizing work plans and performance report formats in the past, and we reviewed the agency’s 2009 “lessons learned” analysis as part of this report (see footnote 29, page 15). We are not recommending that EPA repeat its previous effort and develop a template with standardized program-specific measures to improve reports’ consistency. 
Specifically, implementing built-in quality controls for performance reports in EPA's web-based portal would not necessarily require grantees to measure and report the same information across grants. For example, EPA could design an electronic template that follows the guidelines of its existing policies for work plans and performance reports—such as allowing grantees and EPA to negotiate appropriate outputs and outcomes for each grant. If grantees entered their grant-specific outputs and outcomes directly into EPA's web-based portal as an electronic version of their work plan, the portal could use the information to prepopulate an electronic performance report and reduce manual data entry. Additionally, the electronic performance report could include required fields, such as an explanation field, if the grantee did not meet a particular output or outcome from its work plan. We continue to believe that such controls would improve the consistency of grantee performance reports with EPA's environmental results directive, and that both EPA project officers and grantees could benefit from the reduced administrative burden associated with submitting and reviewing performance reports electronically.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

This report examines (1) how the Environmental Protection Agency (EPA) awards grants, (2) the federal and EPA requirements and guidelines for monitoring grant and program results, and (3) how EPA monitors its grants to ensure that environmental and other program results are achieved. To examine how EPA awards grants and the federal and EPA requirements and guidelines for monitoring grant and program results, we reviewed relevant federal laws, regulations, and EPA's policies and guidance for awarding and monitoring grants. Additionally, we reviewed our prior work on grants management. We also spoke to officials from EPA's Office of Grants and Debarment (OGD) about how EPA awards grants and EPA's policies for monitoring grants, and the three program offices that award the majority of EPA grant dollars—the Office of Water, Office of Land and Emergency Management, and Office of Air and Radiation—about EPA program-level guidance for monitoring grant results.

To examine how EPA monitors its grants to ensure that environmental and other program results are achieved, we reviewed EPA's monitoring processes for grants in the three program offices that award the majority of EPA grant dollars. We identified 45 grant programs awarded by the three program offices, from the Catalog of Federal Domestic Assistance, a clearinghouse for information on federal grant opportunities.
We identified an initial list of program-specific databases for the grant programs using information from EPA and its partners’ Environmental Information Exchange Network and EPA’s Central Data Exchange websites. For each grant program we identified, we requested information from EPA program offices, including any corrections to the list of grant programs and associated program-specific databases, whether EPA or grantees enter data into the databases, and how grantees submit data. For these 45 programs, we searched EPA’s Integrated Grants Management System and State Grant Information Technology Application for relevant performance reports. Based on our search results, we selected a nongeneralizable sample of 49 performance reports across 23 grant programs using the following criteria: (1) whether a performance report was electronically available, (2) whether different EPA regions were represented, (3) whether the grantee was a state grantee that we had interviewed, and (4) whether other documentation—such as an EPA routine monitoring report—was available. Although the results of our review cannot be projected agency- wide because our sample was nongeneralizable, the performance reports represent a broad array of grant programs and include grantees in each EPA region. For each of the 23 grant programs for which we obtained a report, we also collected information on the program-specific database associated with the program, as applicable. We collected information on the content of EPA’s program-specific databases from the Environmental Information Exchange Network and the Central Data Exchange websites, EPA documents collected by a prior GAO team, and EPA’s internal and external websites. Two analysts reviewed the reports and coded them in the following ways: (1) type of content and format of the report, (2) degree of consistency with EPA’s environmental results directive, and (3) degree of overlap between the content of the performance reports and information collected from grantees in EPA’s program-specific databases. To ensure consistency in our review, each analyst reviewed the other’s work and resolved any differences. To describe the grant results reported in performance reports, we reviewed the content of the performance reports we collected and developed nine mutually exclusive categories of information that grantees typically provide to EPA in these reports. To determine performance reports’ consistency with EPA’s environmental results directive, we reviewed each report against the directive’s call for EPA to review performance reports to (1) determine whether the grantees achieved the planned outputs and outcomes in their work plans and (2) explain any unmet outputs and outcomes. From this review, we developed four categories: 1. Consistent—the report describes progress against outputs or outcomes from the grantee’s work plan and explains all missed targets, if any. 2. Partially consistent—the report includes progress against some, but not all, outputs or outcomes from the grantee’s work plan or explains some, but not all missed targets, if any. 3. Not consistent—the report does not describe progress against outputs or outcomes from the work plan. 4. Could not determine—the report describes grantee activities without an explicit reference to outputs or outcomes from the work plan to demonstrate progress or to allow a reviewer to identify missed outputs or outcomes requiring explanations. 
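The four categories above amount to a small set of decision rules. The following Python sketch expresses one possible reading of those rules for illustration only; the input fields are hypothetical, and the actual coding was performed manually by two analysts as described above.

# Illustrative sketch only: decision rules mirroring the four review categories.
def code_report(report):
    if not report["references_work_plan"]:
        return "Could not determine"
    if report["results_addressed"] == 0:
        return "Not consistent"
    all_addressed = report["results_addressed"] == report["results_planned"]
    all_explained = report["missed_explained"] == report["missed"]
    if all_addressed and all_explained:
        return "Consistent"
    return "Partially consistent"

# Hypothetical example: a report addressing every planned result but explaining
# only one of two missed targets would be coded as partially consistent.
example = {"references_work_plan": True, "results_planned": 4,
           "results_addressed": 4, "missed": 2, "missed_explained": 1}
print(code_report(example))  # Partially consistent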
We did not review any other documentation from EPA’s official project file or grants management databases, which is consistent with the methodology described in a 2010 EPA-contracted study examining performance reports’ consistency with EPA’s environmental results directive. To determine whether grantees reported the same information to EPA twice, we reviewed the content of the performance reports and compared the report content against the information we collected describing data elements in EPA’s program-specific databases for that grant, as applicable. Based on this review, we created four categories of overlap between the report content and the data fields in EPA’s databases: 1. No overlap—no matches between content. 2. Minimal overlap—one to two matches between content. 3. Some overlap—three to five matches in content. 4. Substantial overlap—six matches or more between content. We interviewed officials from EPA’s OGD, Office of Water, Office of Air and Radiation, Office of the Chief Financial Officer, Office of Land and Emergency Management, and lead regional offices for certain programs to discuss EPA’s processes for monitoring environmental and other program results from grants. We also provided program offices with a standard set of follow-up questions about how they collect and monitor environmental and other program results from grantees. Additionally, we interviewed representatives from the Environmental Council of States— an association of state environmental agency leaders—and a nongeneralizable sample of officials from environmental agencies in eight states—California, Hawaii, Maryland, Michigan, New York, North Carolina, Pennsylvania, and West Virginia—to obtain their perspectives on EPA’s monitoring processes for grants. We selected these eight states because they received the greatest amount of funding from the federal government, according to an Environmental Council of States’ analysis of state environmental budgets data in 2012, the most recent publicly available data. The results of our interviews with officials from these agencies cannot be generalized to those of states not included in our review. We conducted this performance audit from August 2015 through July 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 5 summarizes the scope of our review of grantee performance reports. Table 6 provides information on which program-specific databases we reviewed. In addition to the contact named above, Michael Hix (Assistant Director), Amy Bowser, Cindy Gilbert, Taylor Hadfield, Thomas James, Benjamin Licht, Kimberly McGatlin, Alison O’Neill, Danny Royer, Jeanette Soares, Sara Sullivan, Kiki Theodoropoulos, and Lisa Van Arsdale made key contributions to this report.
Grants comprised about half of EPA's budget in 2015, or about $4 billion. Through several grant programs, EPA headquarters and 10 regional offices award these grants to a variety of recipients, including state and local governments. EPA provides guidance through directives that seek to ensure the appropriate use of funds and achievement of environmental results or public health protection, among other purposes. GAO was asked to review how EPA monitors environmental and other grant results. This report examines (1) how EPA awards grants, (2) the federal and EPA requirements for monitoring grant and program results, and (3) how EPA monitors its grants to ensure that environmental and other program results are achieved. GAO analyzed relevant federal laws, regulations, and EPA guidance; reviewed processes for ensuring that environmental results are achieved for the three EPA program offices that award the majority of EPA grant dollars; and interviewed EPA officials and officials from eight state environmental agencies—selected based on the amount of environmental funding they receive from EPA. The Environmental Protection Agency (EPA) generally awards three different categories of grants: formula, categorical, and discretionary. According to EPA data, in fiscal year 2015, EPA awarded the majority of its grant funds— $2.25 billion of $3.95 billion (57 percent)—as formula grants, primarily to states to support water infrastructure based on funding formulas prescribed by law. EPA awarded $1.09 billion (about 28 percent) of its grant funds as categorical grants. These grants were generally awarded noncompetitively, mostly to states and Indian tribes to operate environmental programs. EPA determines the amount of funding each grantee receives based on agency formula or program factors. EPA awarded $0.513 billion (about 13 percent) in discretionary grants for specific activities, such as research. EPA also awarded $0.09 billion (2 percent) in grant funds to special appropriations act projects for specific drinking water and wastewater infrastructure projects in specific communities. Multiple federal and agency requirements and guidelines apply to monitoring grant and grant program results. For example, under EPA regulations, grantees must submit performance reports to EPA at least annually. EPA policies and guidance, such as its environmental results directive, call for EPA program officials to review performance reports to determine if the grantee achieved the planned results and for program offices to report on significant grant results through other processes, such as submissions to EPA databases. EPA incorporates requirements related to grantee reporting frequency, content, and reporting processes into grant terms and conditions. EPA monitors performance reports and program-specific data from grantees to ensure that grants achieve environmental and other program results. However, GAO found that certain practices may hinder EPA's ability to efficiently monitor some results and increase administrative burden. For example, EPA collects some information from grantees twice—once in a performance report and once in a database—because EPA uses the information for different purposes. GAO's prior work and EPA analyses have shown that duplication of efforts can increase administrative costs and reduce the funds available for other priorities. 
By identifying grant programs where existing data reporting can meet EPA's performance reporting requirements, the agency can help reduce duplicative reporting for grantees. Also, GAO's review of grantee performance reports found issues that may hinder EPA's ability to efficiently identify factors affecting grantee results. For example, because grantees submit performance reports in a written format, there are no built-in quality controls to ensure these reports' consistency with EPA's environmental results directive. Rather, EPA officials must perform a manual review. A 2014 analysis of EPA's grants management processes found that EPA relied heavily on manual processes and could incorporate improvements into its new grants management database system. EPA officials said they plan to develop a web-based portal for grantees to submit documents, such as performance reports. By incorporating built-in data quality controls, such as required fields, for performance reports into its planned web-based portal, EPA could improve these reports' consistency with the environmental results directive and reduce the administrative burden of performing manual reviews. GAO is making six recommendations, including that EPA (1) reduce duplicative reporting by identifying grant programs where existing data reporting can meet EPA's performance reporting requirements and (2) incorporate data quality controls for performance reports into its planned web-based portal. In response, EPA agreed with GAO's findings, conclusions, and recommendations.
IRS believes that taxpayers are more likely to voluntarily comply with the tax laws if they believe that their return may be audited and unpaid taxes identified. In concert with audits, IRS uses other enforcement and nonenforcement methods. For example, IRS uses computers to match information returns filed by banks and other third parties with individual tax returns so it can identify unreported income. In recent years, IRS has also emphasized taxpayer education and assistance to encourage voluntary compliance. As part of its audit approach, IRS has established 10 audit classes for individual returns based on taxpayer income—5 involving returns without business income, and 5 involving returns with business income from self-employment. IRS tracks audit results by these audit classes and also by various audit sources—programs and techniques used to select potentially noncompliant returns for audit. Audit sources include, among others, suspected tax shelters, IRS and non-IRS referrals, compliance projects, and computer matches of third party information. One of the major audit sources—discriminant function (DIF)— involves returns selected solely because of a computer score designed to predict individual tax returns most likely to result in additional taxes if audited. This scoring provides an objective way to select returns and has helped IRS avoid burdening potentially compliant taxpayers with an audit. Traditionally, IRS has done two types of face-to-face audits from its district offices to review taxpayers’ books and records in support of a filed return: (1) field audits, in which an IRS revenue agent visits an individual taxpayer who has business income or a very complex return and (2) office audits, in which a tax auditor at an IRS office is visited by an individual taxpayer who has a less complex return. Tax examiners at IRS service centers also review returns and third-party information, and contact taxpayers concerning potential discrepancies on their returns. These discrepancies include such items as unreported income, as well as unallowable credits, such as the Earned Income Credit (EIC). Starting in fiscal year 1994, IRS decided to include all service center contacts by tax examiners as audits. IRS attributed this change to the fact that such contacts are part of its overall efforts to correct inaccurate returns. Unlike traditional field or office audits, these contacts usually involve a single tax issue on the return and do not involve a face-to-face audit. IRS is also counting other nontraditional types of work as audits, such as recent reviews of nonfilers done by district office auditors. After reviewing a taxpayer’s support for the return, IRS auditors decide whether to recommend changes to tax liability. If a tax change is recommended, the taxpayer has the right to either agree with the recommended tax change or to appeal it through IRS’ Office of Appeals or the courts. Depending on the outcome of such appeals, additional recommended tax revenue may or may not ultimately be assessed and collected. Our objectives were to provide information on the overall trend in IRS’ individual audit rates and on the overall results of IRS’ most recent individual audits. We did the audit rate analysis for fiscal years 1988 through 1995 because published data were readily available for this period. We did the audit results analysis for fiscal years 1992 through 1994 because this was the most recent readily available data. 
To determine the trend in IRS’ audit rates, we reviewed IRS’ annual reports for fiscal years 1987 through 1994 as well as unpublished data for fiscal year 1995. We collected information on the overall annual audit rates by type of taxpayer. We also collected information on the number of returns filed and the number of returns audited each year by IRS’ regions and by the district offices within those regions. We reviewed this information to identify IRS’ overall published audit rates for fiscal years 1988 through 1995, which included a combination of district office audits and service center contacts. To show what the overall rates would have been based solely on traditional district office audits, we recalculated these rates excluding service center contacts. We also used this information to calculate regional audit rates, both including and excluding service center contacts, as well as audit rates for each district office. To determine the specific results of IRS’ audit efforts for fiscal years 1992 through 1994, we analyzed IRS Audit Information Management System (AIMS) data. IRS uses AIMS to track its audits of tax returns, including the resources used and any additional taxes recommended. We obtained copies of AIMS tapes for each year from fiscal year 1992 through 1994 and did various analyses to generate overall results for each year by (1) 10 individual audit classes, (2) 15 major audit sources, (3) 3 types of audit staff, and (4) 4 broad categories of audit closures. For each of these analyses, we determined the number and percentage of audited returns as well as the total direct hours and the total additional tax recommended for each year. From this information, we computed the direct hours per return, the taxes recommended per return, and the taxes recommended per direct hour for each year. Other than reconciling totals from the AIMS database to IRS’ annual reports, we did not verify the accuracy of the AIMS data. Nor did we attempt an in-depth analysis to identify the reasons for the audit rate trends and audit results. Rather, we asked IRS Examination officials at the National Office to review our analysis of the audit rate trends and the audit results and provide explanations. We requested comments on a draft of this report from the Commissioner of Internal Revenue. On March 26, 1996, several IRS Examination Division officials, including the Acting Assistant Commissioner (Examination); Director, Management and Analysis; and, Team Leader, Management and Analysis, as well as a representative from IRS’ Office of Legislative Affairs, provided us with both oral and written comments. Their comments are summarized on page 14 and have been incorporated in this report where appropriate. We performed our audit work in Washington, D.C., between August 1995 and February 1996 in accordance with generally accepted government auditing standards. Between fiscal years 1988 and 1993, IRS’ audit rate for individuals decreased from 1.57 percent to 0.92 percent. IRS Examination Division officials told us that they attributed the decrease to more returns being filed by taxpayers; more time spent auditing complex returns by IRS auditors; and, an overall reduction in examination staffing. During fiscal years 1994 and 1995, the audit rate increased, reaching 1.67 percent by 1995. IRS officials told us they attributed this increase to the involvement of district office auditors in pursuing nonfiler cases and the increasing number of EIC claims reviewed by service center examination staff. 
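The audit rate and productivity measures used throughout this analysis are simple ratios. The sketch below restates them in code with invented figures (none are taken from IRS data) to show how the published rate, the recomputed rate that excludes service center contacts, and the per-return and per-hour measures are derived.

```python
def audit_rate(returns_audited, returns_filed):
    """Audit rate: audited returns as a percentage of individual returns filed."""
    return 100.0 * returns_audited / returns_filed

def productivity(total_direct_hours, total_tax_recommended, returns_audited):
    """Per-return and per-hour measures comparable to those in the report's tables."""
    return {
        "direct_hours_per_return": total_direct_hours / returns_audited,
        "tax_recommended_per_return": total_tax_recommended / returns_audited,
        "tax_recommended_per_direct_hour": total_tax_recommended / total_direct_hours,
    }

# Illustrative (hypothetical) figures only -- not taken from IRS annual reports or AIMS.
filed = 110_000_000                # individual returns filed in a year
district_audits = 1_200_000        # traditional field and office audits
service_center_contacts = 700_000  # nontraditional contacts counted as audits since FY 1994

published_rate = audit_rate(district_audits + service_center_contacts, filed)
traditional_rate = audit_rate(district_audits, filed)  # service center contacts excluded
print(f"published: {published_rate:.2f}%; excluding service centers: {traditional_rate:.2f}%")

print(productivity(total_direct_hours=5_000_000,
                   total_tax_recommended=4_000_000_000,
                   returns_audited=district_audits))
```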
Starting in fiscal year 1994, IRS decided to include all service center contacts by tax examiners in its audit rates. As a result, when the annual statistics for fiscal years 1993 and 1994 were published, IRS also recomputed its audit rates for fiscal years 1988 through 1992 to include all service center contacts. Counting such work as part of the audit rate, coupled with IRS’ recent nonfiler and EIC emphasis, tended to produce higher audit rates, as most of this work takes less time to do than traditional face-to-face audits. Figure 1 shows the trend in individual audit rates from fiscal years 1988 through 1995, based on rates that both include and exclude service center results. In analyzing the audit results from fiscal years 1992 through 1994, we did an in-depth review of various sources of IRS’ audits. Our analysis of these sources illustrated the shift from traditional audits to other types of work. Over these 3 years, four sources accounted for over half of IRS’ audits. As shown in figure 2, two of these—returns selected because of DIF or potential tax shelters—declined by at least half, while the other two—returns involving potential nonfilers or unallowable items—at least tripled. The first two sources reflect traditional audits and the latter two sources reflect nontraditional work, such as the nonfiler initiative and EIC claims, respectively. IRS’ individual audit rates varied widely by geographic location, regardless of whether service center contacts and other nontraditional audits were included. As figure 3 shows, for fiscal year 1995 the rates tended to be highest in the western regions of the country and lowest in the middle regions. IRS Examination Division officials told us that these trends were consistent with data from the Taxpayer Compliance Measurement Program (TCMP), which showed higher taxpayer noncompliance in IRS’ Western and Southwest Regions and lower taxpayer noncompliance in its Central and Midwest Regions. With few exceptions, these regional patterns largely held true from fiscal years 1988 through 1995. (See table I.1 for our analysis of regional audit rates.) Throughout this period, audit rates also varied widely by district office. (See table I.2 for our analysis of district office audit rates.) Our analysis of the audit rates and audit results also identified patterns related to income reported by taxpayers. Our analysis focused on individuals who reported significant amounts of business income and individuals who did not report such income (i.e., nonbusiness individuals), particularly those who were in the lowest- and highest-income groups. Figure 4 shows that IRS’ reported audit rates from fiscal years 1988 to 1995 (1) increased in the last 2 fiscal years among those in the lowest-income group (less than $25,000), particularly for business individuals, for whom the rate more than doubled, and (2) decreased among those in the highest-income group ($100,000 or more), particularly for nonbusiness individuals, for whom the fiscal year 1995 rate dropped to about one-fourth of what it had been in fiscal year 1988. IRS Examination Division officials said they attributed the increase in audit rates for the lowest-income groups, which generally occurred in fiscal years 1994 and 1995, to the nonfiler initiative and the recent emphasis on EIC. They said the decrease in audit rates for the highest-income nonbusiness individuals was due to an overall reduction in examination staffing coupled with an increase in the number of returns filed for this income group. 
(See table I.4 for our analysis of the audit rate trends for all income groups.) Concerning audit productivity measured by income groups, differing patterns emerged. In general, audits of the highest-income groups resulted in as much as 4 to 5 times more additional tax recommended per return—for both nonbusiness and business individuals—than did audits of the lowest-income groups. As figure 5 shows, from fiscal year 1992 to fiscal year 1994, additional taxes recommended per return (1) decreased among business individuals for the lowest-income group and increased for the highest-income group and (2) increased among nonbusiness individuals for the lowest-income group and decreased for the highest-income group. IRS Examination officials said the increases or decreases in additional taxes recommended from fiscal years 1992 to 1994 for both the highest-income business and nonbusiness individuals were affected by the small number of individual tax returns that IRS audited as part of its Coordinated Examination Program. This program is designed to audit the largest corporations; individual taxpayers audited under this program are usually corporate officers or shareholders. Another measure of audit productivity is the amount of additional taxes recommended for each direct audit hour used to complete the audit. From fiscal years 1992 to 1994, the amount of additional taxes recommended per direct hour was similar to the amount of additional taxes recommended per return for nonbusiness individuals; however, these amounts differed for business individuals. As figure 6 shows, from fiscal years 1992 to 1994, taxes recommended per direct hour (1) increased among business individuals for both the lowest- and highest-income groups and (2) increased among nonbusiness individuals for the lowest-income group and decreased for the highest-income group. (See tables II.1 through II.3 for an overall analysis of the results of audits by income classes.) We provide more detailed information from our analyses of various other elements of both the audit rates and the audit results in appendixes I and II. Such elements include audit rates by IRS district offices (tables I.2 and I.3); audit results on whether the IRS auditor recommended additional taxes and, if so, whether the taxpayer appealed the additional taxes recommended (tables II.10 through II.12); and the no-change rate for selected audit sources (table II.13). We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designated representative. Responsible IRS Examination Division officials, including the Acting Assistant Commissioner (Examination); Director, Management and Analysis; and, Team Leader, Management and Analysis, as well as a representative from IRS’ Office of Legislative Affairs, provided IRS’ comments in a March 26, 1996, meeting. 
They basically agreed with the information presented in the report and provided additional explanations for some of the audit trends and results, such as (1) the downward trend in overall audit rates, as well as the rate for the highest-income nonbusiness individuals, from fiscal years 1988 to 1993; (2) the increases or decreases in additional taxes recommended for the highest-income business and nonbusiness individuals from fiscal years 1992 to 1994; (3) the decrease in additional taxes recommended for the lowest-income business individuals from fiscal years 1992 to 1994; and (4) the amount of additional tax recommended per direct hour for the highest-income individuals compared to that for the lowest-income individuals. In response to their comments, we have incorporated the additional explanations in the report where appropriate. As agreed with you, unless you announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to various congressional committees, the Commissioner of Internal Revenue, and other interested parties. We also will make copies available to others upon request. The major contributors to this report are listed in appendix III. If you have any questions concerning this report, please contact me at (202) 512-9044. Appendix I presents our analysis of the trend in IRS’ individual audit rates from fiscal year 1988 through fiscal year 1995. The audit rate is the percentage of individual tax returns that IRS has audited of the total number of individual tax returns filed. The appendix includes a comparison of IRS’ published annual audit rates, which include both district office and service center results, with recomputed audit rates that we derived by excluding service center results. It also presents trends in individual audit rates by geographic location as well as by various income groups. Appendix II presents our analysis of the results of IRS’ individual audits from fiscal year 1992 through fiscal year 1994. It includes information on the number of individual returns audited; the amount of direct hours and additional taxes recommended resulting from these audits; and a computation of the direct hours per return, as well as the additional taxes recommended per return and per direct hour, for the following four categories: (1) taxpayer income groups, (2) audit sources, (3) types of audit staff, and (4) types of audit closures. The appendix tables include Table II.7, Number of Individual Returns Audited by Audit Sources, FYs 1992 Through 1994, and Table II.13, Number of Audits Resulting in No Change Without Adjustment, FYs 1992 Through 1994. The glossary defines the audit sources and related terms used in these tables; its definitions include the following. Returns involving an audit of an amended return in which the taxpayer has claimed a refund. Returns identified through IRS’ information gathering projects. 
Returns selected on the basis of a computer-generated score (the scoring is based on an analysis technique known as discriminant function). Related returns from prior or subsequent years for the same taxpayer identified during the audit of a DIF-source return. Related returns identified during an audit of a DIF-source return, other than returns from prior or subsequent years. Related returns from prior or subsequent years for the same taxpayer, when the initial source was other than a DIF-source return. Audits initiated against known taxpayers who did not file a return with IRS. Over 25 other audit sources, such as referrals from other IRS Divisions, that were not among the 10 largest sources during the period of our review. Manually selected returns for audit that do not result from other specified audit sources. Returns identified for audit due to questionable tax practitioners. Returns involving self-employment tax issues initiated by IRS service center examination staff. Returns identified through service center projects initiated by the IRS National Office. Returns identified from various state sources, generally under exchange agreements between the IRS and the states. Related returns of partners, grantors, beneficiaries, and shareholders identified during audits of either partnerships, fiduciaries, or Subchapter S corporations involving potential tax shelter issues. Total income, such as wages and interest, reported on a tax return prior to any deductions or adjustments. Returns involving refundable credits and dependency exemptions, such as the Earned Income Credit, initiated by service center examination staff.
Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) audits of individual taxpayers, focusing on: (1) IRS audit rates for individual returns; and (2) the overall results of IRS most recent audits of individual returns. GAO found that: (1) while the audit rate for individuals decreased between fiscal years (FY) 1988 and 1993 from 1.57 percent to .92 percent, it increased to 1.67 percent by FY 1995; (2) between FY 1992 and FY 1994, the number of audited computer-selected returns and returns with potential tax shelters declined by half, while the number of audited returns with potential nonfilers and returns with unallowable items tripled; (3) audit rates varied by region and district office; (4) between FY 1988 and FY 1995, audit rates were highest in the western region of the country and lowest in the central United States; (5) between FY 1988 and FY 1995, audit rates increased among those in the lowest-income group and decreased among those in the highest-income group; (6) audits of the highest-income group yielded the most recommended additional tax per return; and (7) between FY 1992 and 1994, additional taxes recommended for each direct audit hour increased for business individuals in the lowest and highest-income groups and nonbusiness individuals in the lowest-income group, and decreased for nonbusiness individuals in the highest-income group.
The composition of the mortgage market has changed dramatically in recent years. In the early to mid-2000s, the market segment comprising nonprime mortgages (e.g., subprime and Alt-A loans) grew rapidly and peaked in 2006, when it accounted for about 40 percent of the mortgages originated that year. Many of these mortgages had nontraditional or riskier features and were bundled by investment banks into private securities that were bought and sold by investors. The nonprime market contracted sharply in mid-2007, partly in response to increasing default and foreclosure rates for these mortgages, and many nonprime lenders subsequently went out of business. The market segments comprising mortgages backed by the enterprises and FHA had the opposite experience: a sharp decline in market share in the early to mid-2000s, followed by rapid growth beginning in 2007. For example, the enterprises’ share of the mortgage market decreased from about one-half in 2003 to about one-third in 2006. By 2009 and 2010, enterprise-backed mortgages had increased to more than 60 percent of the market. Similarly, FHA-insured mortgages grew from about 2 percent of the market in 2006 to about 20 percent in 2009 and 2010. Lenders originate mortgages through three major channels: mortgage brokers, loan correspondents, and retail lenders. Mortgage brokers are independent contractors who originate mortgages for multiple lenders that underwrite and close the loans. Loan correspondents originate, underwrite, and close mortgages for sale or transfer to other financial institutions. Retail lenders originate, underwrite, and close loans without reliance on brokers or loan correspondents. Large mortgage lenders may originate loans through one or more channels. Before originating a mortgage loan, a lender assesses the risk of making the loan through a process called underwriting, in which the lender generally examines the borrower’s credit history and capacity to pay back the mortgage and obtains a valuation of the property to be used as collateral for the loan. (See fig. 1.) Lenders need to know the property’s market value, which refers to the probable price that a property should bring in a competitive and open market, in order to provide information for assessing their potential loss exposure if the borrower defaults. Lenders also need to know the value in order to calculate the loan-to-value (LTV) ratio, which represents the proportion of the property’s value being financed by the mortgage and is an indicator of its risk level. Real estate can be valued using a number of methods, including appraisals, broker price opinions (BPO), and automated valuation models (AVM). An appraisal is an opinion of value based on market research and analysis as of a specific date. Appraisals are performed by state- licensed or -certified appraisers who are required to follow the Uniform Standards of Professional Appraisal Practice (USPAP). A BPO is an estimate of the probable selling price of a particular property prepared by a real estate broker, agent, or sales person rather than by an appraiser. BPOs can vary in format and scope, and currently there are no national standards that brokers are required to abide by in performing BPOs. An AVM is a computerized model that estimates property values using public record data, such as tax records and information kept by county recorders, multiple listing services, and other real estate records. 
These models use statistical techniques, such as regression analysis, to estimate the market values of properties. The enterprises and various private companies have developed a range of proprietary AVMs. Lenders have several options open to them for selecting appraisers. Lenders can select appraisers directly, using either in-house appraisers, independent appraisers, or appraisal firms that employ appraisers, or they can use AMCs that subcontract with independent appraisers. AMCs perform a number of functions for lenders, including identifying qualified appraisers in different geographic areas, assigning appraisal orders to appropriate appraisers, following up on appraisal orders, and reviewing appraisal reports for completeness and quality prior to delivering them to lenders. Appraisers consider a property’s value from three points of view—cost, income, and sales comparison—and provide an opinion of market value based upon one or more of these appraisal approaches. The cost approach is based on an estimate of the value of the land plus what it would cost to replace or reproduce the improvements (e.g., buildings, landscaping) minus physical, functional, and external depreciation. The income approach is an estimate of what a prudent investor would pay based upon the net income the property produces and is of primary importance in ascertaining the value of income-producing properties, such as rental properties. The sales comparison approach compares and contrasts the property under appraisal (subject property) with recent offerings and sales of similar properties. The scope of work for an appraisal depends on a number of factors, including the property type and the requirements of the mortgage lender or investor. For example, the lender may require the appraiser to provide an estimate of value using the income approach in addition to the sales comparison approach for a property that will be rented, or the lender may request that the appraiser provide a specific number of sales of comparable properties and properties currently listed for sale to better understand the subject property’s local market. Appraisals vary in type by the property being appraised (for example, a single-family home or condominium unit) and the level of inspection performed (exterior only or both interior and exterior). In response to losses the federal government suffered during the savings and loan crisis of the mid-1980s, Congress enacted the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA). Title XI of this statute contains provisions to ensure that certain real estate-related financial transactions have appraisals that are performed (1) in writing, in accordance with uniform professional standards, and (2) by individuals whose competency has been demonstrated and whose professional conduct is subject to effective supervision. The primary intent of the appraisal reforms contained in Title XI is to protect federal deposit insurance funds and promote safe and sound lending. Title XI also created the Appraisal Subcommittee, which is responsible for monitoring the implementation of Title XI. The subsequent regulations implementing FIRREA exempt transactions that have appraisals conforming to the enterprises’ appraisal standards or that are insured or guaranteed by a federal agency, such as FHA, the Department of Veterans Affairs (VA), and the Department of Agriculture (USDA). 
The enterprises, whose activities are overseen by FHFA, include appraisal requirements in the guides they have developed for lenders that sell mortgage loans to them. These guides identify the responsibilities of lenders in obtaining appraisals and selecting appraisers, specify the required documentation and forms for different appraisal types (including different levels of inspection), and detail the appraisal review processes lenders must follow. In addition, the enterprises issued appraiser independence requirements in 2010 that replaced HVCC. FHA uses appraisals to determine a property’s eligibility for mortgage insurance. FHA’s appraisal requirements are outlined in a handbook on valuations and in periodic letters to approved lenders (called mortgagee letters). FHA requires appraisals to include inspections to assess whether the property complies with FHA’s minimum property requirements and standards. Appraisers must be state-certified and must have applied to FHA to be placed on FHA’s appraiser roster in order to perform appraisals for FHA-insured loans. Lenders select an appraiser from the FHA roster. VA and USDA have loan guaranty programs, and USDA also has a direct loan program; each agency has its own appraisal requirements and processes. VA’s appraisal process is different from those of FHA and USDA in that VA assigns an appraiser from its own panel of approved appraisers and has established a fee schedule that sets a maximum fee that can be charged to the borrower. USDA does not maintain a roster of appraisers and has few requirements beyond requiring that lenders use properly licensed or certified appraisers. For mortgages originated by federally regulated institutions but not sold to the enterprises or insured or guaranteed by a federal agency, Title XI of FIRREA places responsibility for regulating appraisals and “evaluations” with the federal banking regulatory agencies. Federal banking regulators have responsibility for ensuring the safety and soundness of the institutions they oversee, protecting federal deposit insurance funds, promoting stability in the financial markets, and enforcing compliance with applicable consumer protection laws. To achieve these goals, the regulators conduct on-site examinations to assess the financial condition of the institutions and monitor their compliance with applicable banking laws, regulations, and agency guidance. These agencies are OCC, which oversees federally chartered banks; OTS, which oversees savings associations (including mortgage operating subsidiaries); NCUA, which charters and supervises federal credit unions; the Federal Reserve, which oversees insured state-chartered member banks; and FDIC, which oversees insured state-chartered banks that are not members of the Federal Reserve System. Both the Federal Reserve and FDIC share oversight with the state regulatory authority that chartered the bank. The Federal Reserve also has general authority over lenders that may be owned by federally regulated holding companies but are not federally insured depository institutions. As required by Title XI, federal banking regulators have established appraisal and evaluation requirements through regulations and have also jointly issued Interagency Appraisal and Evaluation Guidelines. These regulations and guidelines address the minimum appraisal and evaluation standards lenders must follow when valuing property and specify the types of policies and procedures lenders should have in place to help ensure independence and credibility in the valuation process. 
Among other things, lenders are required to have risk-focused processes for determining the level of review for appraisals and evaluations, reporting lines for collateral valuation staff that are independent from the loan production function, and internal controls to monitor any third-party valuation providers. The federal banking regulators have procedures for examining the real estate lending activities of regulated institutions that include steps for assessing the completeness, adequacy, and appropriateness of these institutions’ appraisal and evaluation policies and procedures. Other laws that apply to appraisals for residential mortgages include consumer protection statutes, such as the Truth in Lending Act (TILA), which addresses disclosure requirements for consumer credit transactions and regulates certain lending practices; the Equal Credit Opportunity Act (ECOA), which addresses non-discrimination in lending; and the Real Estate Settlement Procedures Act of 1974 (RESPA), which requires transparency in mortgage closing documents. Regulations implementing TILA and ECOA are issued by the Federal Reserve and enforced by the federal banking regulators. RESPA regulations are issued by HUD and enforced by HUD and the federal banking regulators. Under the Dodd-Frank Act, most rulemaking authority and some implementation and enforcement responsibilities for these laws will be transferred to the Bureau of Consumer Financial Protection to be established in the Federal Reserve System. Available data, lenders, and mortgage industry participants we spoke with indicate that appraisals are the most frequently used valuation method for home purchase and refinance mortgages. To determine the use of valuation methods in mortgage originations, we requested data from the enterprises and the five lenders with the largest dollar volume of mortgage originations in 2010. The enterprises provided us with data on the minimum valuation method and, when applicable, the level of appraisal inspection they required for the mortgages they purchased from 2006 through 2010 that were underwritten using their automated underwriting systems. (Because these are minimum requirements, lenders can and sometimes do exceed them.) The lenders provided us with data on the actual valuation method and appraisal inspection level for mortgages they made during the same period, although they did not always have information for the earlier years or for mortgages originated through their broker and correspondent lending channels. Because the enterprise and lender data were more complete for recent years, the following discussion provides more detail on 2009 and 2010, a period in which mortgages backed by the enterprises (along with FHA) dominated the market. Data for the two enterprises combined show that, for first-lien residential mortgages, the enterprises required appraisals for  94 percent of mortgages they bought in 2009, including 92 percent of purchase mortgages and 94 percent of refinance mortgages; and  85 percent of mortgages they bought in 2010, including 86 percent of purchase mortgages and 84 percent of refinance mortgages. For the remaining mortgages processed through their automated underwriting systems, the enterprises did not require an appraisal because their underwriting analysis indicated that the default risk of the mortgages was sufficiently low to instead require validation of the sales prices (or loan amounts in the case of refinances) by an AVM-generated estimate of value. 
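For the mortgages described above that the enterprises' automated underwriting systems flagged as lower risk, the sales price (or loan amount for a refinance) is validated against an AVM-generated value instead of being appraised. The sketch below shows one way such a validation check could be expressed; the function name and the 10 percent tolerance are assumptions made for illustration, not the enterprises' actual, proprietary criteria.

```python
def price_supported_by_avm(contract_price, avm_value, tolerance=0.10):
    """Return True if the AVM estimate supports the contract price within the tolerance.

    The 10 percent tolerance is a hypothetical figure used only for illustration;
    the enterprises' actual validation criteria are proprietary.
    """
    return contract_price <= avm_value * (1 + tolerance)

# Hypothetical low-risk purchase: $200,000 contract price, $195,000 AVM estimate.
if price_supported_by_avm(200_000, 195_000):
    print("AVM validation passes; no appraisal required under this illustrative rule")
else:
    print("AVM does not support the price; an appraisal would be ordered")
```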
In both 2009 and 2010, the enterprises required interior and exterior inspections for roughly 85 percent of the appraisals for purchase mortgages and roughly 92 percent of the appraisals for refinance mortgages. The remaining appraisals required exterior inspections only. Available enterprise data for the preceding 3 years showed that appraisals were required for almost 90 percent of mortgages (purchase and refinance transactions combined), and the percentage of appraisals requiring both interior and exterior inspections increased from approximately 80 percent to 86 percent, although the data covered a smaller proportion of the enterprises’ total mortgage purchases. We also aggregated data from five lenders, which include not only mortgages sold to the enterprises, but also mortgages insured by FHA, guaranteed by VA or USDA, held in the lenders’ portfolios, or placed in private securitizations. These data show that, for the first-lien residential mortgages for which data were available, these lenders obtained appraisals for  88 percent of the mortgages they made in 2009, including 98 percent of purchase mortgages and 84 percent of refinance mortgages; and  91 percent of the mortgages they made in 2010, including 98 percent of purchase mortgages and 88 percent of refinance mortgages. For mortgages for which an appraisal was not done, the lenders we spoke with reported that they generally relied on validation of the sales price against an AVM-generated value, in accordance with enterprise policies that permit this practice for some mortgages with characteristics associated with a lower default risk. For both 2009 and 2010, the lenders reported that interior and exterior inspections of the subject property were conducted for over 99 percent of the appraisals for purchase mortgages and about 97 percent of the appraisals for refinance mortgages. The remainder involved exterior inspections only. Although data for the preceding 3 years were less complete, they showed roughly similar percentages to those for mortgages made in 2009 and 2010. The higher percentages reported by the lenders compared with those from the enterprises in 2010 may partly reflect lender valuation policies that exceed enterprise requirements in some situations. For example, officials from some lenders told us their own risk-management policies may require them to obtain an appraisal even when the enterprises do not, or they may obtain an appraisal to better ensure that the mortgage complies with requirements for sale to either of the enterprises. Additionally, FHA requires appraisals with interior and exterior inspections for all of the purchase mortgages and most of the refinance mortgages it insures, and most of the lenders we contacted make substantial numbers of these mortgages. The enterprises have efforts under way to collect more complete proprietary data on the use of different valuation methods. In order to obtain consistent appraisal and loan data for all mortgages they purchase from lenders, the enterprises are currently undertaking a joint effort, under the direction of FHFA, called the Uniform Mortgage Data Program (UMDP). UMDP has two components related to appraisals. The first component is scheduled to begin September 2011, when appraisers will be required to use new standardized response options in completing appraisal report forms. 
The second component will be a Web-based portal that will facilitate the delivery of standardized appraisal data to the enterprises, and the enterprises are planning to fully implement UMDP by March 2012. According to officials from the enterprises, UMDP will produce a proprietary dataset that will allow the enterprises to work with lenders to resolve any concerns regarding appraisal quality prior to purchasing mortgages. Additionally, officials told us that the dataset would also allow them to assess the impact of their valuation policies on appraisal quality and mortgage risk. However, some appraisal industry stakeholders have expressed concerns that in some circumstances the standardized response options may be too limited to clearly and accurately communicate information that is material to the appraisal. The enterprises, FHA, and lenders require and obtain appraisals for most mortgages because appraising is considered by mortgage industry participants to be the most credible and reliable valuation method. According to mortgage industry participants, appraisals have certain advantages that set them apart from other valuation methods. Most notably, appraisals and appraisers are subject to specific requirements and standards. The minimum standards for appraisals included in USPAP cover both the steps appraisers must take in developing appraisals and the information the appraisal report must contain. USPAP also requires that appraisers follow standards for ethical conduct and have the competence needed for a particular assignment. For example, the appraiser must be familiar with the specific type of property, the local market, and geographic area. Furthermore, state licensing and certification requirements for appraisers include minimum education and experience criteria and call for successfully completing a state- administered examination. Also, standardized report forms, including those developed by the enterprises, provide a way to report relevant appraisal information in a consistent format. However, some of these potential advantages depend on effective oversight, and we have previously reported on weaknesses in oversight of the appraisal industry. For example, in a 2003 report we noted that many state appraiser regulatory agencies cited resource limitations as an impediment to carrying out their oversight responsibilities. In addition, as previously discussed, some appraisal industry participants have reported that some lenders and mortgage brokers have pressured appraisers to inflate property values in violation of appraiser independence standards. Even in the absence of overt pressure, biased appraisal values may result from the conflict of interest that arises where the appraiser is selected, retained, or compensated by a person with an interest in the outcome or dollar amount of the loan transaction. In contrast with appraisals, BPOs do not have standard requirements and are generally not considered a credible valuation method for mortgage originations. According to some mortgage industry participants, a key disadvantage of BPOs is that real estate brokers and agents who perform them are not required to obtain training or professional credentials in property valuation, and the BPO industry lacks uniform standards. At least one industry group has developed standards of practice for BPOs, which are reportedly used by some BPO providers, but adherence to these standards is voluntary. 
Similarly, the industry has not adopted standardized BPO forms, resulting in differences in the content and quality of BPO reports, according to some mortgage industry participants. Additionally, BPOs provide somewhat different information than appraisals do—a probable sales price or listing price rather than the property’s market value. The enterprises do not permit lenders to use BPOs for mortgage originations, and guidelines from federal banking regulators state that BPOs do not meet the standards for an evaluation and cannot be used as the primary basis for determining property values for mortgages originated by regulated institutions. Lenders and other mortgage industry participants we spoke with identified advantages to BPOs that make them useful for property valuations in situations other than first-lien purchase or refinance mortgage transactions, such as monitoring the collateral in their existing loan portfolios and developing loss-mitigation strategies for distressed properties. In these circumstances, some mortgage industry participants told us that leveraging real estate brokers’ knowledge of local sales and listings is an advantage because it helps them determine probable selling prices. BPOs can also be performed more cheaply and quickly than appraisals, which allows lenders to obtain more of them and make decisions more quickly, particularly when dealing with distressed properties. Lenders and AMCs we spoke with estimated that BPOs cost from $65 to $125 and are generally completed in 3 to 5 days, while appraisals can cost more than twice as much and take several days longer to complete. AVMs are generally not used as the primary source of information on property value for first-lien mortgage originations, due in part to potential limitations in the quality and completeness of the data AVMs use. Data sources for AVMs include public records, such as tax records and information kept by county recorders, and multiple listing services. Assessed values for property tax purposes are not always current and are themselves often generated from statistical models. Information on property sales kept by county recorders is not necessarily complete or consistent because disclosure and data collection methods can vary by county. Similarly, data from multiple listing services can be fragmented and inconsistent, in part because real estate professionals enter the data themselves, which may result in incomplete or inaccurate data. Incomplete data for a particular geographic area will prevent an AVM from producing reliable values for properties in that area. Lenders have to regularly monitor the accuracy and coverage of multiple AVMs to determine which ones should be used for properties in various locations. Some mortgage industry participants also told us that AVMs tend to be less reliable in areas where properties are not homogeneous—for example, a neighborhood with houses built at very different times and on different-sized lots (in contrast with a suburban subdivision, which may have houses built at the same time and in the same style). In addition, AVMs may not include information on property conditions; rather, they may assume that all properties are in average condition. 
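As noted earlier, AVMs rest on statistical techniques such as regression over public-record and listing data. The sketch below fits a deliberately tiny hedonic-style model (price as a linear function of living area and lot size) by ordinary least squares; the sales records, variables, and resulting coefficients are all invented, and production AVMs are far more elaborate and proprietary.

```python
import numpy as np

# Invented comparable sales: [living area (sq ft), lot size (sq ft)] and sale prices.
features = np.array([
    [1400, 6000],
    [1800, 7500],
    [2200, 8000],
    [1600, 6500],
    [2600, 9000],
], dtype=float)
prices = np.array([210_000, 265_000, 320_000, 235_000, 380_000], dtype=float)

# Add an intercept column and fit price = b0 + b1 * area + b2 * lot by least squares.
design = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(design, prices, rcond=None)

# Estimate the value of a subject property: a 2,000 sq ft home on a 7,000 sq ft lot.
subject = np.array([1.0, 2000.0, 7000.0])
print(f"estimated value: ${subject @ coef:,.0f}")
```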
While the enterprises permit lenders to use AVMs for some mortgage originations (as discussed earlier), guidelines from federal banking regulators state that AVMs generally do not meet the standards for an evaluation and cannot be used as the sole basis for determining property values for mortgages originated by regulated institutions. Despite these disadvantages, AVMs provide a fast, inexpensive means of indicating the value of properties in active markets, and the enterprises and lenders make use of AVMs for a number of purposes. In addition to their use in a small percentage of mortgage originations, representatives from the enterprises and some lenders and AMCs told us they use values generated by AVMs as part of their quality control processes. They said that when the appraised value varies significantly from the value generated by an AVM, they conduct additional analysis to examine the quality of the appraisal. Similar to BPOs, AVMs may also be used to monitor collateral values in lenders’ existing loan portfolios. Furthermore, in circumstances where AVMs are appropriate, they offer a number of advantages over appraisals. AVMs are generally much quicker and cheaper than appraisals, requiring only a few seconds to generate an estimate and costing between $5 and $25, according to mortgage industry participants we spoke with. Also, proponents of AVMs argue that this technology delivers more objective and consistent appraisal values than human appraisers, who may value properties differently and may be subject to conflicts of interest or pressure from lenders to assess a property at a specific value, as discussed later in this report. USPAP requires appraisers to consider which approaches to value—such as sales comparison, cost, and income—are applicable and necessary to perform a credible appraisal of a particular property. Appraisers must then reconcile values produced by the different approaches they use to reach a value conclusion. The enterprises and FHA require that appraisals provide an estimate of market value at a point in time and reflect prevailing economic and housing market conditions. They require that, at a minimum, appraisers use the sales comparison approach for all appraisals because it is considered most applicable for estimating market value in typical mortgage transactions. They also require appraisers to use the cost approach for manufactured homes but do not require the income approach for one-unit properties unless the appraiser deems it necessary. Consistent with these policies, valuation data we obtained from FNC suggest that appraisers use the sales comparison approach in a large majority of mortgage transactions, while the cost approach is used less often—generally in conjunction with the sales comparison approach—and the income approach is rarely used. FNC captures data on appraisals conducted for a number of major lenders; FNC’s data represent approximately 20 percent of mortgage originations in 2010. FNC’s data for both purchase and refinance transactions show the following:  Nearly 100 percent of appraisals from 2010 used the sales comparison approach. The percentage was the same for 2009 appraisals.  Sixty-six percent of appraisals from 2010 used the cost approach, generally in combination with the sales comparison approach, similar to 65 percent for 2009 appraisals.  Five percent of appraisals from 2010 used the income approach, virtually always in combination with one or both of the other approaches. 
The corresponding percentage for 2009 appraisals was 4 percent. These percentages were roughly similar for purchase and refinance mortgages. In addition, although FNC’s data for the preceding 3 years covered a smaller proportion of total mortgages, the percentages for purchase and refinance transactions combined were generally comparable to those described above. Because the sales comparison approach involves an analysis of recent sales of similar properties, it is generally viewed as the most appropriate way to estimate market value in active residential markets, according to industry guidance and research literature. When appraisers use the sales comparison approach, they find recent sales of comparable properties and make adjustments to the selling prices of those properties based on any differences between them and the subject property to estimate market value. In selecting comparable properties, appraisers often consider locational attributes (including, but not limited to, distance from the subject property), which may be critical to a property’s value. Properties used for comparison should also have been sold relatively recently to reflect the current market. However, one criticism of the sales comparison approach is that it may perpetuate price trends in overheated (or depressed) markets. For example, the use of comparable sales with inflated sales prices (driven up by factors that increase consumer demand, such as expanded credit availability) can lead to progressively higher market valuations for other properties, which in turn become comparables for future sales transactions. Also, in markets where there are few recent sales of comparable properties, there may be insufficient information to support a credible estimate of value. The second approach to value—the cost approach—is mostly used in addition to the sales comparison approach, and in specific circumstances, such as valuing newly constructed properties or manufactured homes, according to federal officials and appraisal industry participants. To implement the cost approach, appraisers must estimate how much it would cost to build a new or substitute property in place of the subject property. In addition, they must value other site improvements and the land and consider accrued depreciation. According to some appraisal industry participants, some circumstances in which the cost approach can be particularly useful exist more often in rural areas. These circumstances include properties with unusual features, such as additional structures or larger (or smaller) lots than those of otherwise comparable properties. Using the cost approach can provide additional information to appraisers to account for these unusual features. Further, the cost approach can be important in estimating the value of newly constructed homes because cost and market value are usually more closely related when properties are new (unless there are economic or functional factors that impact value). However, the cost approach also has a number of disadvantages. As a property ages, estimating the appropriate amount of depreciation becomes more difficult and adds uncertainty to the estimate of value. Additionally, while a common way to estimate land values is to review recent sales of vacant lots close to the subject property, such sales may be rare in many mature residential areas. 
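Each of the three approaches discussed above reduces, at its core, to a different calculation: adjusted comparable sale prices, land value plus depreciated replacement cost, and a value implied by the property's rental income. The sketch below gives toy versions of each; all dollar amounts, adjustments, and the gross rent multiplier are hypothetical, and the income formulation shown (a simple gross-rent multiple) is only one simplified way the income approach can be applied. In practice, the appraiser reconciles the indications from whichever approaches are applicable into a single opinion of value.

```python
def sales_comparison(comps):
    """Average of comparable sale prices after adjusting each for differences from the subject."""
    adjusted = [price + adjustment for price, adjustment in comps]
    return sum(adjusted) / len(adjusted)

def cost_approach(land_value, replacement_cost, depreciation):
    """Land value plus replacement cost of the improvements, less accrued depreciation."""
    return land_value + replacement_cost - depreciation

def income_approach(monthly_market_rent, gross_rent_multiplier):
    """Value implied by the rent the property could command (a simplified formulation)."""
    return monthly_market_rent * gross_rent_multiplier

# Hypothetical subject property; every input below is invented for illustration.
comps = [
    (250_000, +5_000),  # comparable lacks a garage, so its price is adjusted upward
    (262_000, -4_000),  # comparable has a larger lot, so its price is adjusted downward
    (255_000, 0),       # very similar comparable, no adjustment
]
print(sales_comparison(comps))                 # 256000.0
print(cost_approach(60_000, 230_000, 35_000))  # 255000
print(income_approach(1_600, 160))             # 256000
```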
The cost approach also may not be appropriate for appraising certain types of properties, such as high-rise condominium units, which are not built individually but rather as part of a larger complex, and historic properties, which have value not fully captured by the cost approach. The third approach to value used in appraisals is the income approach, which is an estimate of what a prudent investor would pay based upon a property’s expected net income (such as from rent). For residential properties, the income approach is considered most useful when there is an active rental market for comparable properties. However, in some residential areas, rental properties are relatively rare, resulting in limited data on which to base an estimate using the income approach. Even when data on rents are available, they may not be equivalent. For example, some rent amounts may include the cost of utilities or other amenities, while others may not. In addition, some lenders told us that the income approach is often not applicable when the intended use of the subject property is as an owner-occupied home rather than as an income- producing property. Some mortgage industry stakeholders have argued that wider use of other approaches—particularly the cost approach—could help mitigate what they view as a limitation of the sales comparison approach. They told us that reliance on the sales comparison approach alone can lead to unsustainable market values and that using the cost approach as a check on the sales comparison approach could help lenders and appraisers identify when this is happening. For example, they pointed to a growing gap between the average market values and average replacement costs of properties as the housing bubble developed in the early to mid-2000s. However, the industry data discussed previously suggest that the cost approach was used in a substantial proportion of mortgage originations in recent years. In addition, other mortgage industry participants noted that a rigorous application of the cost approach may not generate values much different from values generated using the sales comparison approach. They indicated, for example, that components of the cost approach—such as land value or profit margins of real estate developers—can grow rapidly in housing markets where sales prices are increasing. Additional information would be needed to assess any differences between the values appraisers generated using the different approaches. Although the available data on appraisal approaches did not include this information, enterprise officials told us that the UMDP initiative will capture data on appraisal approaches and values generated by these approaches, which may help them perform more in-depth analysis of appraisals for the mortgages they purchase. However, given uncertainty regarding the future role of the enterprises in the mortgage market and the proprietary nature of the effort, the degree to which data from the UMDP initiative will benefit the broader market is unclear. FHFA officials told us that UMDP could be a potentially important risk management tool for the enterprises and that they have not made decisions about whether they will make any of the data collected through the program available for wider use. Lenders generally require consumers to pay for costs associated with obtaining appraisals, which can include fees paid to appraisers and appraisal firms for providing the appraisal and fees charged by AMCs that lenders often use to administer the appraisal process. 
Mortgage and appraisal industry participants we spoke with estimated that, for a conventional mortgage, consumers pay an average of $300 to $450 for a typical appraisal with an interior and exterior inspection, depending on where the property is located. Appraisals for properties in high cost-of-living areas and rural areas tend to be more expensive than those in low cost-of-living areas and urban areas, according to mortgage industry participants and available documentation. Some of these differences are evident—for example, in the VA’s appraiser fee schedule, which shows variation in fees by state ranging from a low of $325 in Kentucky to a high of $625 in Alaska. Industry fee information published in February 2010 by a real estate technology company shows similar state-level variation, with median fees ranging from $300 to $600. According to this company’s data, appraisal fees also vary substantially within states, sometimes by more than $200. Other factors that affect appraisal costs include the type of appraisal product (e.g., level of inspection, scope of work) and the size and complexity of the property, according to appraisers, lenders, and AMCs we spoke with. For example, one lender said an appraisal with an exterior-only inspection for a conventional mortgage may cost $100 to $150 less than an appraisal that also has an interior inspection. Others told us that an appraisal for an FHA-insured mortgage, which has additional inspection requirements, might cost $75 more than an appraisal for a conventional mortgage. Complex properties may require specialized experience or training on the part of the appraiser and may require the appraiser to take more time to gather and analyze data to produce a credible appraisal. A complex property may have unique characteristics that are more difficult to value, such as being much larger than nearby properties or being a lakefront or oceanfront property, because there are likely few properties with comparable features that have recently been sold. As a result, appraisal costs are often higher for these properties and would be passed on to the consumer. In addition, the extent to which data on comparable sales are readily available and the number of comparables required by the lender may affect appraisal costs. Appraisers, lenders, and AMCs we spoke with told us that, in general, neither the number of appraisal approaches (i.e., sales comparison, cost, and income) used by an appraiser nor a lender’s use of an AMC affects consumer costs for an appraisal. USPAP requires appraisers to use as many of the three approaches as are applicable for each property. While using multiple approaches requires additional time and effort on the part of the appraiser, appraisers typically do not adjust their fees on this basis, according to appraisers we spoke with. Instead, to the extent they are able to set their fees, they will do so at a level that will cover their total time and effort across all their assignments, including those requiring multiple approaches. Similarly, mortgage industry participants we spoke with told us that the amount a consumer pays for an appraisal is generally not affected by whether the lender uses an AMC or engages an appraiser directly. Rather, they said that AMCs typically charge lenders about the same amount that independent fee appraisers would charge lenders when working with them directly, and lenders generally pass on the entire cost to consumers. 
Appraisers have reported receiving lower fees when working with AMCs compared to when working directly with lenders because AMCs keep a portion of the total fee. Appraisal industry participants told us that the AMC portion is at least 30 percent of the fee the consumer pays for an appraisal. For example, one AMC official told us that, for a $375 appraisal, the appraiser would receive approximately $250, and the AMC would keep $125, $100 of which would cover its costs and $25 of which would be pretax profit. According to lenders and AMCs we spoke with, consumer costs for appraisals increased slightly in 2009, as a result of the enterprises requiring appraisers to complete an additional form, called the market conditions addendum. This form prompts appraisers to report on market conditions and trends in the subject property’s neighborhood, including housing supply, sales price and listing price trends, seller concessions, and foreclosure sales. Lenders and AMCs we spoke with estimated that having appraisers complete the market conditions addendum added between $15 and $45 to the cost of an appraisal. VA also adopted this form and added $50 to the fees on its fee schedule. In general, however, lenders, AMC officials, appraisers, and other industry participants noted that consumer costs for appraisals have remained relatively stable in the past several years and pointed to several factors that could explain this stability. First, a number of those we spoke with said that increased use of technology and greater availability of data electronically has allowed appraisers to complete some of their required tasks more quickly. Second, the supply of appraisers relative to the demand for their services has helped to hold consumer costs steady. Some lender and AMC officials said that there is an oversupply of appraisers in some markets where fewer mortgage loans are being originated, which has put downward pressure on appraisers’ fees. Third, AMCs compete with each other for lenders’ business, which keeps costs relatively stable. A provision in the Act that requires lenders to pay appraisers a “customary and reasonable fee” may affect consumer costs for appraisals, depending on interpretation and implementation of federal rules. The Federal Reserve issued rules in October 2010 outlining two “presumptions of compliance” for lenders and their agents, such as AMCs, to demonstrate they are meeting the Act’s requirements. Compliance with these rules became mandatory on April 1, 2011. Under the rules, lenders and AMCs are presumed to be in compliance with customary and reasonable fee requirements if they pay appraisers an amount reasonably related to recent rates of compensation for comparable appraisal services performed in a given geographical market and make adjustments for the specific circumstances of each assignment (including the type of property, scope of work, and appraiser qualifications). Alternatively, lenders and AMCs are presumed to comply with these rules if they set fees by relying on objective third-party information, such as fee schedules, studies, and surveys prepared by independent third parties, including government agencies, academic institutions, and private research firms. According to the Act, these third- party studies cannot include fees paid to appraisers by AMCs. However, a person may rebut either presumption with evidence that the fee for a given transaction is not customary and reasonable based on other information. 
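Under the first presumption of compliance described above, lenders and AMCs look to recent rates of compensation for comparable appraisal assignments in the relevant geographic market, adjusted for the circumstances of each assignment. The rule does not prescribe a particular calculation; the sketch below simply takes the median of recent fees for a given market and product type as an illustrative benchmark, with all market names and fee amounts invented.

```python
from statistics import median

# Invented recent appraiser fees, keyed by (market, product type).
recent_fees = {
    ("Denver metro", "single-family, interior/exterior"): [350, 375, 400, 380, 360],
    ("Denver metro", "single-family, exterior only"): [250, 275, 260],
}

def benchmark_fee(market, product):
    """Illustrative benchmark: median of recent fees for comparable assignments."""
    return median(recent_fees[(market, product)])

print(benchmark_fee("Denver metro", "single-family, interior/exterior"))  # 375
```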
The effect of this change on consumer costs may depend on the approach lenders and AMCs take in complying. Some lenders and AMCs told us that, under the first presumption of compliance, they believe they can continue to compensate appraisers at the rates they have been paying them for recent assignments, relying in part on internal data from the previous 12 months as evidence that those fees are customary and reasonable. Assuming they were able to meet the conditions for this presumption of compliance, consumer costs likely would not change, according to representatives of these companies. However, other lenders are taking steps to meet the requirement under the second presumption of compliance. Some mortgage industry participants told us that some lenders, including smaller ones, may set appraiser fees at the level outlined in the VA appraiser fee schedule, which uses information from periodic surveys of lenders to set maximum fees that borrowers can be charged in each state. Other lenders and industry groups are having fee studies done in order to comply. Because these studies cannot include the fees AMCs pay to appraisers, some industry participants, including some AMC officials, expect them to demonstrate that appraiser fees should be higher than what AMCs are currently paying. If that is the case, these lenders would require AMCs to increase the fees they pay to appraisers to a rate consistent with the findings of those studies. The expected result would be an increase in appraisal costs for consumers, as well as potential improvements in appraisal quality. However, some lenders are evaluating the possibility of no longer using AMCs and instead managing their own panels of appraisers, which would eliminate the AMC administration fee from the appraisal fee that consumers pay. Some regulatory officials and lenders told us that lenders can still recover the cost of managing the appraisal process from the consumer in other ways—for example, through higher application fees, origination fees, or interest rates. In 1997, FHA instituted a policy requiring lenders to pay reasonable and customary fees to appraisers. Initially, this policy required that lenders charge consumers only the actual amount paid to the appraiser but was changed several months later to allow lenders to have consumers pay costs associated with services provided by AMCs, as well as the fee paid to the appraiser. FHA limited the total costs to consumers to the amount that was customary and reasonable for an appraisal in the market area in which the appraisal was performed. In 2009, FHA released additional guidance on fee requirements, stating that appraisers must be compensated at a rate that is customary and reasonable for an appraisal performed in the market area of the property and that AMC fees must not exceed what is customary and reasonable for the appraisal management services they provide. FHA’s guidance places responsibility with lenders for knowing what is customary and reasonable in the areas in which they lend and advises appraisers not to accept assignments for which they believe the fees are not reasonable. FHA officials told us they did not know whether or how this change had affected consumer costs. RESPA requires that lenders disclose estimated appraisal costs to the consumer along with estimates of other services that are required in order to close the mortgage loan.
These estimates, which are included on a standard good faith estimate form, must be provided within 3 days of receiving the consumer’s application for a mortgage loan, unless the lender turns down the application or the consumer withdraws the application. Appraisals typically fall in the category of third-party settlement services required and selected by the lender. In the estimate provided to the consumer, the lender must identify each third-party settlement service required, along with the estimated price to be paid by the consumer to the provider of each service. Subsequently, at loan closing, the lender must disclose the actual costs for these services on the HUD-1 settlement form. Changes to RESPA that took effect in 2010 require that actual costs paid by consumers for third-party settlement services not exceed estimated costs by more than 10 percent. If actual costs are higher than this threshold, the lender is responsible for making up the difference, providing lenders with a greater incentive to estimate costs accurately. For each service, the lender is to disclose the name of the third-party service provider and the amount the provider was paid. For example, according to HUD guidance, when a lender uses an AMC to engage an appraiser, the lender is required to disclose the name of the AMC and the total amount paid to the AMC (but not how much the AMC paid the appraiser). When a lender engages an appraiser directly, the lender must disclose the name of the appraiser and how much the appraiser was paid. The Act permits, but does not require, lenders to disclose to the consumer separately the fee paid to the appraiser by an AMC and the administration fee charged by the AMC at closing. Some appraisers and federal and state regulatory officials said requiring separate disclosures of AMC fees and appraiser fees would benefit consumers by providing greater transparency. However, other federal officials and lenders questioned the value of separate disclosures for various reasons: the information could be confusing to consumers, would come too late to inform consumer decision making if provided at closing, and involves a small part of total closing costs. Regulations implementing ECOA require lenders to notify consumers of their right to receive the valuation report associated with a mortgage transaction and to provide it upon request. Alternatively, lenders can routinely provide consumers with a copy of the report during the mortgage origination process. The Act amended ECOA to require lenders to provide consumers with a copy of the valuation report no later than 3 days prior to loan closing for first-lien mortgages secured by the consumer’s principal dwelling and for all types of valuations, including appraisals, BPOs, and AVMs. In 2009, the enterprises adopted a similar requirement as part of HVCC for appraisal reports associated with mortgages to be sold to the enterprises. These policy changes enhance disclosures to consumers by guaranteeing they receive information about the value of the property prior to completing their mortgage transaction. Recently issued policies reinforce long-standing requirements and guidance addressing conflicts of interest that may arise when parties have an incentive to unduly influence or pressure appraisers to provide biased values. Conflicts of interest arise when direct or indirect personal interests keep appraisers from exercising their independent professional judgment. These conflicts can arise in several ways.
Loan production staff and mortgage brokers are often compensated on a commission based upon mortgage originations, which may give them an incentive to pressure appraisers to provide values that will allow loans to close. When lenders order appraisals from an AMC they own or are affiliated with, the lender’s loan production staff may be able to influence AMC staff to pressure appraisers, according to some mortgage industry stakeholders. Companies that provide both valuation services and title services for the same transaction may also have a potential conflict of interest because the company stands to profit if the mortgage is approved and the borrower subsequently purchases the company’s title insurance at closing. Real estate agents earn commissions based on a property’s sales price, which may give agents an incentive to influence an appraiser’s opinion of value. Borrowers may also want to influence appraisers to provide a value that will allow their loans to be approved. Some appraisers may acquiesce to these different sources of pressure because they want to satisfy their clients, receive future assignments, or do not want to be responsible for stopping the property transaction from going through. In order to keep appraisers independent and prevent them from being pressured, the federal banking regulators, enterprises, FHA, and other agencies have regulations and policies governing the selection of, communications with, and coercion of appraisers. Examples of recently issued policies that address appraiser independence include HVCC, which took effect in May 2009; the enterprises’ new appraiser independence requirements that replaced HVCC in October 2010; and revised Interagency Appraisal and Evaluation Guidelines from the federal banking regulators, which were issued in December 2010 and apply to federally regulated financial institutions. Additionally, the Act broadly prohibits conflicts of interest in the valuation process for all consumer credit transactions secured by a consumer’s principal dwelling. Provisions of these and other policies address some or all of the following issues:

• Prohibitions against loan production staff involvement in appraiser selection and supervision. Loan production staff are prohibited from selecting, retaining, recommending, or influencing the selection of an appraiser for a specific assignment. The reporting structure for appraisers must also be independent of the loan production function. A version of these requirements has been included in the federal banking regulators’ appraisal regulations since 1990 and in FHA guidance since 1994. Similar prohibitions were included in HVCC for loans sold to the enterprises and remain in effect in the enterprises’ current appraiser independence requirements. For VA-guaranteed loans, VA assigns appraisers on a rotational basis on behalf of lenders, removing loan production staff and mortgage brokers from the process altogether.

• Prohibitions against third parties selecting appraisers. Appraisers should be selected by the lender or its agent rather than by a third party with an interest in the mortgage transaction. The federal banking regulators include this requirement in their appraisal regulations. In addition, the enterprises expressly prohibit borrowers from selecting and retaining appraisers. The enterprises and FHA also prohibit real estate agents and mortgage brokers from selecting appraisers.

• Limits on communications with appraisers.
While certain communications between loan production staff and appraisers are necessary, other communications that may unduly influence appraisers are inappropriate. For example, according to the federal banking regulators’ guidelines, this includes communicating a predetermined, expected, or qualifying estimate of value or a loan amount, or a target LTV ratio, to an appraiser. Similarly, the enterprises and FHA prohibit loan production staff from communicating with appraisers or AMCs about anything that relates to or impacts valuation. All of these requirements and guidelines permit lenders to request that an appraiser (1) consider additional property information, including additional comparable properties; (2) provide further detail, substantiation, or explanation of the value conclusion; or (3) correct errors in the appraisal report. VA permits lenders’ staff to communicate with appraisers about the timeliness of an appraisal report, but only VA-approved appraisal reviewers may discuss valuation matters with the appraiser.

• Prohibitions against coercive behaviors. Coercive behavior is intended to influence appraisers to base property value on factors other than the appraiser’s independent judgment. The federal banking regulators’ guidelines state that no lender or person acting on a lender’s behalf should engage in coercive actions, and the enterprises and FHA expressly prohibit such actions. Examples of coercive actions include withholding timely payment or partial payment for an appraisal report; expressly or implicitly promising future business, promotions, or increased compensation to an appraiser; and implying to an appraiser that his or her current or future retention depends on the valuation estimate.

Although industry-wide data on lenders’ use of AMCs over time are unavailable, appraisal industry participants told us that between 60 and 80 percent of appraisals are currently ordered through AMCs, compared with less than half before HVCC went into effect in 2009. According to these participants, this increased demand for AMCs’ services has resulted in a proliferation of new AMCs across the country. Lenders and other mortgage industry participants identified several factors that have contributed to a greater use of AMCs. First, market conditions, including an increase in the number of mortgages originated during the mid-2000s, put pressure on lenders’ capacity to manage appraiser panels. Second, as lenders expanded the areas in which they originated mortgages, they found identifying appraisers with the appropriate experience and familiarity with the various locations to be increasingly burdensome. They also said it would be difficult to predict where across the country they would need appraisers at any given time. AMCs provided a practical solution to these two issues. According to a number of lenders we spoke with, AMCs can manage the valuation process and costs more efficiently than their internal valuation departments. In particular, they told us that AMCs are better equipped to handle the administrative effort of managing appraiser panels, such as checking licenses, maintaining contact information, placing and following up on appraisal orders, performing initial quality control, and providing national geographic coverage. In several of these cases, the lenders had already switched to using AMCs years before HVCC went into effect.
The third factor that affected some lenders’ use of AMCs was that HVCC required additional layers of separation between loan production staff and appraisers. According to some appraisal industry participants, some lenders may have outsourced appraisal functions to AMCs because they thought using AMCs allowed them to easily demonstrate compliance with the appraiser selection provisions in HVCC. Several appraisal industry participants told us that some lenders incorrectly believed they were required to use AMCs in order to be in compliance with HVCC. Some appraisers, mortgage brokers, and lenders told us that the increased use of AMCs and the policy changes that banned mortgage brokers from selecting appraisers disrupted the business relationships they relied on and changed the ways they operate. Some of these industry participants told us small appraisal firms went out of business as lenders increased their reliance on AMCs. Having lost their lender and mortgage broker clients, some appraisers said they joined AMC panels to be able to make a living as appraisers but found they were asked to perform the same amount of work for less money than they had been making previously. Some appraisers also indicated that some AMCs pressure appraisers to complete appraisal reports within unreasonable time frames or try to guide the appraiser’s value conclusion—for example, by recommending the use of certain comparable sales. Other appraisal industry participants told us that some experienced appraisers decided to perform nonresidential appraisals or left the appraiser profession altogether instead of working for lower fees. In addition, several lenders told us they required mortgage brokers to use only designated AMCs—a change that eliminated the brokers’ ability to communicate with appraisers. Some mortgage industry participants, including mortgage brokers, also said that the lack of communication with appraisers caused delays in receiving appraisals because the brokers had to go through AMCs to correct reports or have questions answered. In addition, mortgage brokers we spoke with told us that it may be difficult to transfer appraisals to another lender if a deal falls through because lenders often do not accept appraisals that were not from their designated AMCs. In these instances, a second appraisal would need to be ordered, but at the borrower’s or mortgage broker’s expense. Although reliance on AMCs has increased, direct federal oversight of AMCs is limited. Federal banking regulators’ guidelines for lenders’ own appraisal functions list standards for appraiser selection, appraisal review, and reviewer qualifications. For example, a lender’s criteria for selecting appraisers should identify appraisers who possess the requisite education, expertise, and experience to competently complete the assignment. In addition, a lender’s appraisal review policies and procedures should, among other things, establish a process for resolving deficiencies in appraisals and set forth documentation standards for the review. Similarly, the guidelines state that a lender should establish qualification criteria for appraisal reviewers that take into consideration education, experience, and competence. The guidelines also require lenders to establish processes to help ensure these standards are met when lenders outsource appraisal functions to third parties, such as AMCs. 
Officials from the federal banking regulators told us they review lenders’ policies and controls for overseeing AMCs, including the due diligence they perform when selecting AMCs, performance expectations outlined in contracts, and processes for assessing appraisal quality. However, they told us they generally do not review an AMC’s operations directly unless they have serious concerns about the AMC, and the lender is unable to address those concerns. Similarly, the enterprises review lenders’ policies and controls but not those of AMCs because lenders are responsible for ensuring that AMCs meet the enterprises’ requirements. Officials from the enterprises said they do not review AMCs directly because they do not have business relationships with AMCs. In light of the growing use of AMCs, a number of states enacted laws beginning in 2009 to register and regulate AMCs operating within their jurisdictions, according to officials from several state appraiser regulatory boards. These officials told us that these laws typically contained several common elements, including requiring AMCs to have processes in place for adding appraisers to their panels, reviewing appraisers’ work, and keeping records of appraisal orders and activities. However, they said that some states have not adopted such laws, and existing state laws provide differing levels of oversight. For example, while a number of states require AMCs to certify that they have the above processes in place, Utah also requires AMCs to provide a written explanation of those processes as a condition of registering. Similarly, while some state laws do not specify requirements for AMC appraisal reviewers, Vermont requires reviews that address technical aspects of the appraisal to be performed by appraisers with credentials equal to or greater than the minimum required to perform the original appraisal assignment. Some appraiser groups and other appraisal industry participants have expressed concern that existing oversight may not provide adequate assurance that AMCs are complying with industry standards and their own policies and procedures, with negative impacts on appraisal quality. Although they did not provide us with data to demonstrate a change in quality, these participants suggested that the practices of some AMCs for selecting appraisers, reviewing appraisal reports, and establishing qualifications for appraisal reviewers—key areas addressed in federal guidelines for lenders’ appraisal functions—may have led to a decline in appraisal quality:

• Selecting appraisers. Appraiser groups said that some AMCs select appraisers based on who will accept the lowest fee and complete the appraisal report the fastest rather than on who is the most qualified, has the appropriate experience, and is familiar with the relevant neighborhood. They said that, with many experienced appraisers departing from the industry, less experienced appraisers, who are often willing to accept lower fees, are left to perform most of the work.

• Reviewing appraisal reports. According to some appraisal industry groups, some AMCs’ appraisal reviews overemphasize how close the appraiser’s value conclusion is to an expected value generated by an AVM, at the expense of other important elements of the appraisal, such as the appropriateness of the comparable sales. One group noted instances in which AMCs told appraisers which comparable sales to use when the appraisers’ original value conclusions were not consistent with AVM-generated values.
• Establishing qualifications for appraisal reviewers. Representatives of an appraisal industry group told us that some AMC reviewers may lack the expertise necessary to identify problems with quality. They noted that in some states appraiser licensing and certification requirements do not address qualifications for appraisal reviewers.

AMC officials we spoke with said that they have processes and standards that address these areas of concern. Several AMC officials told us they have vetting processes to select appraisers for their panels, including minimum requirements for years of appraising experience and education. When selecting appraisers for a specific assignment, these AMCs indicated that they use an automated system that identifies the most qualified appraiser based on criteria such as the requirements for the assignment, the appraiser’s geographic proximity to the subject property, and performance metrics such as timeliness and the quality of appraisers’ work. The AMC officials we spoke with said they allow appraisers to specify how much they will charge for different types of appraisal assignments and, in some cases, provide appraisers with the range of fees their peers on the appraiser panel charge. These officials said they compare fees only when two appraisers are equally qualified for an assignment, in which case they might default to the appraiser with the lower fee. Further, these officials said that when performing quality reviews on appraisals, they run automated checks to identify any problems with completeness and internal consistency. These reviews may also involve comparing the appraiser’s estimated value to a value generated by an AVM. Appraisals flagged for potential problems, such as risk of overvaluation, are manually reviewed by staff reviewers, who often have backgrounds in underwriting or appraising. One AMC official told us that their reviewers also provide coaching for less experienced appraisers to help them improve the quality of their appraisal reports. The enterprises and some lenders we spoke with told us that appraisal quality had improved after HVCC was adopted, although they could not specifically tie the quality improvements they observed to the use of AMCs. Some industry participants noted that other market changes that were occurring at the same time HVCC was implemented could have contributed to an improvement in appraisal quality, such as the enterprises’ requirement in 2009 that appraisers also complete the market conditions addendum form (as previously discussed in connection with its impact on appraisal costs). Nevertheless, the enterprises told us that variances between the values in the appraisal reports and values produced by their proprietary AVMs decreased after HVCC went into effect—in particular, for mortgages from third-party originators, including mortgage brokers. In addition, officials from one lender said that once HVCC went into effect, they required appraisals for mortgages in their broker channel to be ordered through AMCs and, on the basis of similar internal metrics that compare AVM-generated values to appraised values, observed improvements in appraisal quality. Officials from the enterprises told us that once they have obtained data through UMDP and evaluated its quality, they may be able to use the data to assess the appraisal quality of individual AMCs and appraisers.
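The automated quality checks described above lend themselves to a simple illustration. The Python sketch below flags an appraisal for manual review when the appraiser's value conclusion and an AVM-generated value diverge by more than a set percentage; the 15 percent threshold and the property values are assumptions chosen for illustration, since actual AMC and enterprise review rules and thresholds are proprietary.

def flag_for_review(appraised_value, avm_value, threshold=0.15):
    # Compare the appraiser's value conclusion with an AVM-generated value and
    # flag large variances (e.g., possible risk of overvaluation) for a staff reviewer.
    variance = (appraised_value - avm_value) / avm_value
    return {
        "appraised_value": appraised_value,
        "avm_value": avm_value,
        "variance_pct": round(100 * variance, 1),
        "manual_review": abs(variance) > threshold,
    }

print(flag_for_review(265_000, 220_000))  # variance of about 20.5 percent -> flagged
print(flag_for_review(230_000, 220_000))  # variance of about 4.5 percent -> not flagged

A falling average variance on internal metrics of this general kind is what the enterprises and one lender cited as evidence of improved appraisal quality after HVCC took effect.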
While views on the impact of AMCs on appraisal quality differ, Congress recognized the importance of additional AMC oversight in enacting the Act by requiring each state to register and regulate AMCs and placing the supervision of AMCs with state appraiser regulatory boards. In addition, the Act requires the federal banking regulators, along with FHFA and the Bureau of Consumer Financial Protection, to establish minimum standards for states to apply when registering AMCs, including requirements that appraisals coordinated by an AMC comply with USPAP and be conducted independently and free from inappropriate influence and coercion. This rulemaking also provides a potential avenue for reinforcing existing federal requirements for key functions that may impact appraisal quality, such as selecting appraisers, reviewing appraisals, and establishing qualifications for appraisal reviewers. Federal guidelines for lenders address these functions and require that lenders take steps to ensure that AMCs comply with the guidelines when lenders rely on AMCs to perform these functions. However, federal regulators do not directly monitor AMCs’ compliance with the guidelines; direct oversight of AMCs will be instead performed by state regulators, with the Appraisal Subcommittee monitoring state AMC oversight. If state standards do not also address these functions, state oversight of AMCs may not provide adequate assurance that these functions are being properly carried out. Because appraisals provide an estimate of market value at a particular point in time, they are affected by changes in the housing and mortgage markets. In recent years, turmoil in these markets has heightened attention on residential property valuations, and appraisals in particular. The prominent role of appraisals in the mortgage market underscores the importance of efforts to better ensure appraisal quality. HVCC, the Act, and federal banking regulator guidance have sought to address some of the factors that can affect appraisal quality, including appraiser independence and compensation. In addition, the enterprises are undertaking an initiative to collect detailed and standardized appraisal data that could provide them with greater insight into appraisal practices for the mortgages they purchase. Partly in reaction to appraiser independence requirements, lenders have increasingly relied upon AMCs to perform certain functions. Despite the increased use of AMCs, direct federal oversight of AMCs is limited because the focus of regulators is primarily on lenders, and state-level requirements for AMCs are uneven, ranging from no laws to laws with specific standards for registering with the state. Some appraisal industry participants have raised concerns that the management practices of some AMCs may be negatively affecting appraisal quality. Among the areas of concern are AMCs’ practices for key functions, including selecting appraisers for assignments, reviewing completed appraisal reports, and establishing qualifications for appraisal reviewers. The federal banking regulators have emphasized the importance of these functions in guidelines that apply to lenders’ appraisal functions. The Act requires the federal banking regulators and other federal agencies to set minimum state standards for registering AMCs, which provides an opportunity for the regulators to address these areas of concern and promote more consistent oversight of these functions, whether performed by lenders or AMCs. 
Doing so could help to provide greater assurance to lenders, the enterprises, and federal agencies of the quality of the appraisals provided by AMCs. To help ensure more consistent and effective oversight of the appraisal industry, we recommend that the heads of FDIC, the Federal Reserve, FHFA, NCUA, OCC, and the Bureau of Consumer Financial Protection—as part of their joint rulemaking required under the Act—consider including the following areas when developing minimum standards for state registration of AMCs: criteria for selecting appraisers for appraisal orders, review of completed appraisals, and qualifications for appraisal reviewers. We provided a draft of this report to FDIC, the Federal Reserve, NCUA, OCC, and OTS, as well as FHFA, HUD, USDA, and VA, for their review and comment. We received written comments from the Director of Risk Management Supervision, FDIC; the Directors of the Divisions of Banking Supervision and Regulation and Consumer and Community Affairs, Federal Reserve; the Executive Director of NCUA; the Acting Comptroller of the Currency; and the Acting Director of FHFA; these comments are reprinted in appendixes II through VI. We also received technical comments from FDIC, the Federal Reserve, FHFA, HUD, and OCC, which we incorporated where appropriate. OTS, USDA, and VA did not provide comments on the draft report. The Bureau of Consumer Financial Protection did not receive the draft report in time to provide comments. In their written comments, the federal banking regulators (FDIC, the Federal Reserve, NCUA, and OCC) and FHFA agreed with or indicated they will consider our recommendation to address specific areas as part of joint rulemaking to develop minimum standards for state registration of AMCs. In its written response, the Federal Reserve said that it would consider our recommendation in developing rules to establish minimum standards. It also cited various regulations and guidance it and other agencies have issued related to appraiser independence since the 1990s. While agreeing with our recommendation, OCC noted in its written comments that improved oversight of AMCs by states does not diminish federally regulated institutions’ responsibility to ensure that services performed on their behalf by AMCs comply with applicable laws, regulations, and guidelines. Finally, FHFA in its written response agreed that the joint rulemaking process should consider the areas we mention in our recommendation. FHFA also noted that the data in the report did not capture differences between the enterprises’ practices but acknowledged that the report discusses how lenders may and do require appraisals beyond what the enterprises require. We are sending copies of this report to the appropriate congressional committees, the Chairman of FDIC, the Chairman of the Federal Reserve, the Acting Director of FHFA, the Secretary of Housing and Urban Development, the Chairman of NCUA, the Acting Comptroller of the Currency, the Acting Director of OTS, the Secretary of Agriculture, the Secretary of Veterans Affairs, the Bureau of Consumer Financial Protection, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix VII. This report focuses on valuations of single-family residential properties for first-lien purchase and refinance mortgages. We examine (1) the use of different valuation methods and their advantages and disadvantages; (2) factors that affect consumer costs and requirements for disclosing appraisal costs and valuation reports to consumers; and (3) conflict-of-interest and appraiser selection policies, and views on the impact of these policies on industry stakeholders and appraisal quality. We also consider the impact of the Home Valuation Code of Conduct (HVCC) throughout the report. To describe how often different valuation methods are used, we analyzed valuation data from various sources for mortgages originated in calendar years 2006 through 2010. We requested aggregated data on valuations for mortgages originated in these years from Fannie Mae and Freddie Mac (the enterprises), the five largest lenders (as determined by the dollar volume of total mortgage originations in 2010), six of the largest appraisal management companies (AMC) (as identified by industry trade associations), and three private vendors of mortgage and valuation technology. In response to our request, we obtained proprietary data from the enterprises, five lenders (Ally Financial, Inc.; Bank of America, NA; J.P. Morgan Chase Bank, NA; CitiMortgage, Inc.; and Wells Fargo Bank, NA), four AMCs (CoreLogic, Landsafe, LSI, and PCV/Murcor), and one private vendor (FNC, Inc.). Data from each group of entities provide a partial picture of the valuation methods used in purchase and refinance mortgage originations and overlap with each other to a certain degree. The datasets we assembled are unique and therefore difficult to cross-check against other known sources to assess their reliability. However, we were able to corroborate some data elements through interviews, and we used each of the datasets we assembled and other proprietary data we obtained to corroborate the other datasets. As a result, we believe that these data are sufficiently reliable for the purpose of this report, keeping in mind the following limitations. Because some of the entities compiled the requested information differently or were reporting information that is not a part of their normal data collection and retention apparatus, our datasets contain various degrees of inconsistency, missing data, and other issues. The data from the enterprises presented in this report only include mortgages originated using their own automated underwriting system. As a result, the data do not reflect mortgages that (1) lenders originated using manual underwriting; (2) lenders originated using their own, enterprise-approved automated underwriting systems; or (3) were originated using the automated underwriting system of one enterprise but purchased by the other enterprise. Data from the lenders often did not include information on mortgages originated through their broker or correspondent channels. In addition, data from the early part of the 5-year period we examined were limited, in part because (according to officials from some of the lenders) mergers with other financial institutions and data system changes prevented them from accessing these data. For these reasons, we have characterized our results in a manner that minimizes the reliability concerns (e.g., by focusing on 2009 and 2010) and emphasizes the points on which the data are corroborated.
Our interviews with federal agencies, lenders, AMCs, appraisers, and other industry stakeholders provided clarification of data elements and additional perspectives on the use of different valuation methods in mortgage transactions. Given these and other steps we have taken, we believe the data are sufficiently reliable for the purposes used in this study. The enterprises provided us with data on the minimum valuation method they required for mortgages they purchased. Table 1 shows the percentage of total mortgage originations (by dollar volume) that enterprise purchases accounted for in each of the years we examined. As previously noted, the data from the enterprises used in this report cover mortgages that were originated using their automated underwriting systems and therefore represent only a portion of the total mortgages they purchased. Table 2 shows the percentage of the enterprises’ mortgage purchases each year that were originated using their automated underwriting systems, excluding certain refinance mortgages originated under the Home Affordable Refinance Program. The five lenders cited previously provided us with data on the valuations they obtained for mortgages they made. These lenders accounted for about 64 percent of mortgage originations in 2009 (excluding home equity loans) and 66 percent in 2010. As discussed earlier, the lender data did not cover all of their mortgage originations. Table 3 shows the percentage of each lender’s mortgages for which they provided valuation data. The four AMCs cited previously provided us with data on the valuations they provided to lenders. For many appraisals, some AMCs were unable to identify whether the appraisals were for mortgage originations (as opposed to other purposes, such as servicing and portfolio management or removal of mortgage insurance) and, if they were, whether they were for home purchases or refinancing existing mortgages. In addition, two of the six AMCs we spoke with did not provide us with data. As a result, the AMC data we obtained represented a small but undetermined portion of the mortgage market and were of limited use for purposes other than corroborating other datasets. FNC, Inc. is a mortgage technology company that, among other things, provides software platforms for lenders, appraisers, and other participants in the mortgage origination process. It captures appraisal data electronically that pass through its systems and uses the information to build analytical tools for its clients, which include several national lenders, as well as various regional and community lenders. The share of the mortgage market for which FNC captures data has increased over time, reaching about 20 percent in 2010. We interviewed knowledgeable FNC officials about their processes and data controls to assess data reliability. In general, FNC was able to provide us with valuation data for approximately 80 percent of the appraisals it identified as being for purchase or refinance mortgages. These data provide some insight into how often different appraisal approaches are used, though they may not be representative of the mortgage market as a whole. To identify the potential advantages and disadvantages of the different valuation methods, we reviewed relevant research studies and articles that examine the strengths and limitations of the different valuation methods and the potential effects on the reliability of appraisals. 
We also interviewed representatives from the federal banking regulatory agencies (the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, the Federal Deposit Insurance Corporation, and the National Credit Union Administration), federal agencies with mortgage insurance or guarantee programs (the Department of Housing and Urban Development’s Federal Housing Administration, the Department of Veterans Affairs, and the Department of Agriculture), the enterprises, appraisal industry groups, AMCs, mortgage lenders (including the five cited previously), mortgage industry associations (including those representing smaller and rural lenders), as well as other individual industry stakeholders and researchers. To examine the factors that affect appraisal costs, we reviewed federal and lender policies on fees, including fee schedules. We interviewed the aforementioned lenders and AMCs and representatives from mortgage and appraisal industry associations to identify the factors that may affect valuation costs, including any that may have caused changes in consumer costs over time. Because our interviews with individual lenders and AMCs focused on larger companies, the views they expressed may not be representative of these industries as a whole. To examine disclosures to consumers, we (1) reviewed and summarized statutes and policies, such as the Real Estate Settlement Procedures Act, that govern the disclosure of costs and valuation documentation to consumers and (2) interviewed federal officials and lenders to ensure our understanding of these requirements. To assess how HVCC affected appraisal costs and disclosures, we reviewed the relevant provisions in HVCC; analyzed information we obtained to identify any changes in costs that may be attributable to HVCC; and interviewed lenders and appraisers, among other industry stakeholders. To determine how federal policies, including HVCC, have addressed potential conflicts of interest and affected appraiser selection policies, we reviewed statutes, regulations, guidance, and federal banking regulators’ examination procedures covering appraiser independence requirements. We interviewed federal banking regulators, lenders, appraisers, AMCs, state regulatory officials, and other mortgage industry participants to discuss changes in policies and their impact on the appraisal process, industry participants, and appraisal quality. In addition, we interviewed the enterprises, lenders, and AMCs about the policies and procedures they have in place to assess and help ensure appraisal quality. We conducted this performance audit from July 2010 to July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Steve Westley (Assistant Director), Don Brown, Marquita Campbell, Anar Ladhani, John McGrail, Marc Molino, Erika Navarro, Jennifer Schwartz, and Andrew Stavisky made key contributions to this report.
Real estate valuations, which encompass appraisals and other estimation methods, have come under increased scrutiny in the wake of the recent mortgage crisis. The Dodd-Frank Wall Street Reform and Consumer Protection Act (the Act) mandated that GAO study the various valuation methods and the options available for selecting appraisers, as well as the Home Valuation Code of Conduct (HVCC), which established appraiser independence requirements for mortgages sold to Fannie Mae and Freddie Mac (the enterprises). GAO examined (1) the use of different valuation methods, (2) factors affecting consumer costs for appraisals and appraisal disclosure requirements, and (3) conflict-of-interest and appraiser selection policies and views on their impact. To address these objectives, GAO analyzed government and industry data; reviewed academic and industry literature; examined federal policies and regulations, professional standards, and internal policies and procedures of lenders and appraisal management companies (AMC); and interviewed a broad range of industry participants and observers. Data GAO obtained from the enterprises and five of the largest mortgage lenders indicate that appraisals—which provide an estimate of market value at a point in time—are the most commonly used valuation method for first-lien residential mortgage originations, reflecting their perceived advantages relative to other methods. Other methods, such as broker price opinions and automated valuation models, are quicker and less costly but are viewed as less reliable and therefore generally are not used for most purchase and refinance mortgage originations. Although the enterprises and lenders GAO spoke with do not capture data on the prevalence of approaches used to perform appraisals, the sales comparison approach—in which the value is based on recent sales of similar properties—is required by the enterprises and the Federal Housing Administration and is reportedly used in nearly all appraisals. Recent policy changes may affect consumer costs for appraisals, while other policy changes have enhanced disclosures to consumers. Consumer costs for appraisals vary by geographic location, appraisal type, and complexity. However, the impact of recent policy changes on these costs is uncertain. Some appraisers are concerned that the fees they receive from AMCs—firms that manage the appraisal process on behalf of lenders—are too low. A new requirement to pay appraisers a customary and reasonable fee could affect consumer costs and appraisal quality, depending on how new rules are implemented. Other recent policy changes aim to provide lenders with a greater incentive to estimate costs accurately and require lenders to provide consumers with a copy of the valuation report prior to closing. Conflict-of-interest policies, including HVCC, have changed appraiser selection processes and the appraisal industry more broadly, which has raised concerns among some industry participants about the oversight of AMCs. Recently issued policies that reinforce prior requirements and guidance restrict who can select appraisers and prohibit coercing appraisers. In response to market changes and these requirements, some lenders turned to AMCs to select appraisers. Greater use of AMCs has raised questions about oversight of these firms and their impact on appraisal quality.
Federal regulators and the enterprises said they hold lenders responsible for ensuring that AMCs' policies and practices meet their requirements for appraiser selection, appraisal review, and reviewer qualifications but that they generally do not directly examine AMCs' operations. Some industry participants said they are concerned that some AMCs may prioritize low costs and speed over quality and competence. The Act places the supervision of AMCs with state appraiser licensing boards and requires the federal banking regulators, the Federal Housing Finance Agency, and the Bureau of Consumer Financial Protection to establish minimum standards for states to apply in registering AMCs. A number of states began regulating AMCs in 2009, but the regulatory requirements vary. Setting minimum standards that address key functions AMCs perform on behalf of lenders would enhance oversight of appraisal services and provide greater assurance to lenders, the enterprises, and others of the credibility and quality of the appraisals provided by AMCs. GAO recommends that federal banking regulators, the Federal Housing Finance Agency (FHFA), and the Bureau of Consumer Financial Protection consider addressing several key areas, including criteria for selecting appraisers, as part of their joint rulemaking under the Act to set minimum standards for states to apply in registering AMCs. The federal banking regulators and FHFA agreed with or indicated they would consider the recommendation.
In fiscal year 1995, the U.S. Department of Agriculture (USDA) spent about $5.2 billion to provide the nation’s school-age children with nutritious foods and promote healthy eating choices through its National School Lunch Program. State agencies, usually departments of education, are responsible for the statewide administration of the lunch program through the disbursing of federal funds, monitoring of the program, and record keeping. Many of these responsibilities are carried out in cooperation with local school food authorities. Food authorities are responsible for managing school food services for one or more schools or for a school district. Schools have traditionally operated their own food services. However, some important changes in the way they provide meals have taken place since the 1980s. Some food authorities have contracted with private food service management companies (FSMC) to operate their school food services. In addition, some schools are offering brand-name fast foods as a part of the lunch program meal or as separate (a la carte) items. According to USDA, on a typical school day in fiscal year 1996, the lunch program provided about 26 million students with balanced and low-cost or free lunches nationwide. Of these students, about 25 million, or about 96 percent, attended public schools, and about 967,000, or about 4 percent, attended private schools. Within a school district, schools can choose to participate or not participate in the program. During fiscal year 1995, about 94,000 institutions, including about 89,000 schools and about 5,000 residential child care institutions participated in the lunch program, according to USDA. State education agencies usually administer the program through agreements with food authorities. The federal cost to support school lunches in fiscal year 1995 was about $5.2 billion, including about $613 million in federal commodity donations, such as beef patties, flour, and canned vegetables. The lunch program operates in all 50 states, the District of Columbia, and U.S. territories and possessions. Schools participating in the lunch program receive cash reimbursements and commodities from the federal government for each meal served. In turn, they must serve lunches that meet federal nutritional requirements and offer these lunches free or at a reduced price to children from families whose income falls below certain levels. For school year 1995-96, schools were reimbursed $1.795 for each free lunch, $1.395 for each reduced-price lunch, and 17.25 cents for each full-pay lunch. In addition, schools received 14.25 cents worth of commodity foods for each lunch served. These lunch program meal reimbursements and donated commodities help to sustain the food services provided by food authorities. However, in some areas, food authorities may incur meal costs that are below or above the lunch program’s reimbursements because of food, labor, and other food service-related cost variations, thus creating surpluses or deficits in some food service budgets. USDA has developed a “lunch pattern” for five different age and grade categories. (See app. II.) This pattern requires that a school lunch contain five food items chosen from the four basic food groups. The size of the portions varies by these categories; nevertheless each lunch, at a minimum, must contain (1) one serving of a meat or a meat alternate, (2) one serving of a bread or bread alternate, (3) one serving of milk, and (4) two servings of vegetables or fruits. 
Schools must offer all five food items unless, as provided by the lunch program’s regulations, they use the “offer versus serve” option. Under this option, a school must offer all five food items, but a student may decline one or two of them. All high schools must use the “offer versus serve” option, and middle and elementary schools may use it at the discretion of local officials. According to a 1993 report prepared for USDA, 71 percent of elementary schools and 90 percent of middle schools used this option. Participating schools also agree to collect data on the number of meals served and are responsible for other tasks, such as verifying the income of families with students to determine whether the students are eligible for free or reduced-price lunches. According to the American School Food Service Association, school lunch preparation usually occurs at individual or centrally located school kitchens. These facilities are operated by the food authorities or, with their approval, by others, such as FSMCs. In 1970, USDA issued regulations permitting food authorities to contract with FSMCs to operate their school food services. Food authorities may contract with FSMCs for many aspects of their school food service. The commercial organizations that typically contract with food authorities to operate food services include large national companies, such as Marriott, Canteen, and ARAMARK; companies operating regionally or at multiple sites in a state; and companies servicing a single school district. The services provided by FSMCs are likely to include some combination of the following management and operational services:

• Food services, including meal planning, food purchasing, storage, preparation, and packaging and serving the food to students.

• Accounting services and the design of financial controls, budgets, and reporting systems, including those required for state and federal reports.

• The design of facilities, maintenance and replacement of equipment, and cleaning services.

• Staffing and personnel management.

• Support activities, such as marketing and promotion of school meals, and nutrition information and education programs.

USDA’s regulations stipulate that if a food authority contracts with an FSMC, the food authority must remain responsible for the overall operation of its food service to ensure that the program is administered in an accountable manner and that all of the program’s regulations are met. This responsibility requires the food authority to maintain direct involvement in the food service operation, such as monitoring the food service operation through periodic on-site visits. While food authorities have traditionally prepared their own foods for school lunches, many have begun to serve brand-name fast foods in recent years. These foods are ready-to-serve—for example, pizzas, burritos, subs, and sandwiches—and are generally prepared and delivered to schools by fast-food vendors such as Pizza Hut, Domino’s Pizza, Taco Bell, and Subway as well as by local vendors. Unlike FSMCs, these vendors usually do not manage schools’ food service operations. Instead, they provide schools with a food product at a specified time. For example, a pizza vendor may agree to provide a school with fresh, hot pizza for lunch on every other Wednesday. Unless a fast-food vendor operates as an FSMC, USDA does not allow these vendors to sell directly to students at school.
Instead, these vendors typically sell their food products to a school or its FSMC, which, in turn, sells the products to the students. Schools can offer brand-name fast foods as part of a reimbursable lunch, as an a la carte item, or both. In the Healthy Meals for Healthy Americans Act of 1994 (P.L. 103-448), the Congress directed us to study the use of private food establishments and caterers by schools participating in the National School Lunch or School Breakfast Programs. In response to this mandate, and as agreed with the offices of the Senate Committee on Agriculture, Nutrition, and Forestry and the House Committees on Agriculture and on Economic and Educational Opportunities, we (1) determined the extent to which food authorities use FSMCs to operate their food services and the impacts that their use has had on various aspects of the lunch program, such as student participation, school food service employment, the generation of revenues through school meal sales, and a la carte sales of food in schools; (2) described the terms and conditions under which schools that participate in the lunch program use FSMCs; and (3) determined the extent to which schools that participate in the lunch program are provided with fast foods and snack foods in vending machines, described the most frequently used types and brands of fast foods commonly offered, and described their nutritional content. Because our preliminary work demonstrated that developing a nutritional profile of the hundreds of different food products available nationwide to students during school hours would be excessively costly, we discussed this issue with the offices of the cognizant committees. Given the technical complexities of the requirement and the limits on our resources and reporting time frame—our mandate required us to complete our work by September 1, 1996—we agreed with the cognizant committees to limit our work on the third objective to (1) presenting nutritional information for a sample of popular brand-name fast food products and (2) describing the types of vending machine foods commonly available in schools participating in the lunch program. As further agreed, we limited our review to the lunch program—the largest of USDA’s school meals programs. To address the first objective, we contacted each of the 50 states and the District of Columbia to obtain their school year 1994-95 lists of all food authorities, both public and private, and all food authorities using an FSMC. We then mailed questionnaires to 1,462 food authorities that states had identified as having contracts with FSMCs during the 1994-95 school year. In the course of our review, we identified 75 food authorities that were residential child care institutions, did not participate in the lunch program, or did not have contracts with FSMCs. We excluded these food authorities from the universe of 1,462 food authorities, thereby developing a universe of 1,387 food authorities. Eighty-five percent (1,175) of the remaining 1,387 food authorities returned a completed questionnaire. Hence, our survey results for this group represent only the 1,175 survey respondents that participated in the lunch program and had FSMC contracts during school year 1994-95. In addition, we mailed questionnaires to a national random sample of 934 of the food authorities that did not have contracts with FSMCs. Of those, 89 percent (835) of the food authorities returned completed questionnaires. 
However, 70 of these food authorities reported that they either did not participate in the lunch program or did, in fact, use an FSMC during school year 1994-95. These questionnaires were not included in our analysis. We used the responses from the remaining 765 questionnaires to compare food authorities that had FSMC contracts with those that did not. Our survey results represent the views of about 14,801 food authorities that do not have contracts with FSMCs. To address the second objective, we reviewed relevant federal regulations and USDA’s guidance on contracting with FSMCs, collected and analyzed a random sample of 68 food service contracts to identify selected terms and conditions of these contracts, and reviewed relevant federal studies and evaluations of FSMC contracts. The results of our analyses of the 68 contracts can be generalized to about 1,212 of the food authorities participating in the lunch program that had contracts with FSMCs for their school lunch programs during school year 1994-95. With respect to the third objective, we mailed questionnaires to a national random sample of 2,450 public school cafeteria managers to obtain information on the extent of their use of brand-name fast foods and the availability of snack foods in vending machines in public schools. The results of this survey are presented in our report entitled School Lunch Program: Cafeteria Managers’ Views on Food Wasted by Students (GAO/RCED-96-191, July 18, 1996). Of this sample, 1,887 cafeteria managers who participated in the lunch program returned a questionnaire. We summarized the data that the respondents provided us with to determine the extent to which brand-name fast foods were used in the lunch program and the types of snack foods sold to students a la carte from vending machines or by canteens during lunch. This information represents the views of cafeteria managers in about 80 percent of the public schools that participated in the lunch program nationwide. Three of our data collection strategies relied on statistical sampling, including the survey of food authorities not contracting with FSMCs, the selection of contracts between food authorities and FSMCs, and the survey of cafeteria managers. As with all sample surveys, our statistical estimates based on these data collection strategies contain sampling error—the potential error that arises from not collecting data from all food authorities, on all contracts, or from cafeteria managers at all schools. We calculated the amount of sampling error for each estimate at the 95-percent confidence level. This means, for example, that if we repeatedly sampled food authorities from the same universe and performed our analysis again, 95 percent of the samples would yield results within the range specified by our survey estimate plus or minus the sampling error. This range is the 95-percent confidence interval. We conducted our review from June 1995 through July 1996 in accordance with generally accepted government auditing standards. Nationwide, about 8 percent of the food authorities participating in the lunch program used FSMCs in school year 1994-95, according to information from state agencies. This was up from about 4 percent in school year 1987-88, the last year that comparable data were available. Food authorities’ use of FSMCs is generally concentrated in the Northeast and the Midwest. In addition, food authorities using FSMCs had a larger number of schools and students than food authorities not using FSMCs.
The food companies serving these food authorities were most often companies that operate nationwide. Most food authorities reported that they had decided to use FSMCs for financial reasons, such as reducing food service costs and reducing budget deficits. Furthermore, food authorities considering the use of FSMCs reported that budget deficits were one reason for examining such a change. In contrast, food authorities that were not using FSMCs cited their own financial stability as a reason they do not use FSMCs. Food authorities using food service companies generally reported better financial conditions for their food services for the 1995-96 school year than for the year before using FSMCs. Seventy-eight percent reported operating at a surplus or about even with their budgets compared with 27 percent operating at a surplus or about even with their budgets prior to using FSMCs. In addition, food authorities using FSMCs said that both their level of student participation in the lunch program and their a la carte sales had increased. Although these food authorities reported improved financial conditions, their average participation rates in the lunch program were below those of food authorities not using FSMCs. Although some food authorities participating in the lunch program have used FSMCs since the early 1970s, use of FSMCs by food authorities grew significantly during the 1980s and 1990s. Food authorities contracting with FSMCs are concentrated in certain areas of the country and have, on average, larger student populations. According to USDA’s Office of Inspector General and food authorities’ responses to our questionnaires, the percentage of food authorities using FSMCs doubled from school year 1987-88 through 1994-95, increasing from 4 to 8 percent of all food authorities. Food authorities with FSMC contracts reported that they provided meal services to about 7,500, or about 8 percent, of the approximately 89,000 public and private schools participating in the lunch program. Although the use of FSMCs increased nationwide, most food authorities using them were concentrated in the Northeast and the Midwest, according to state information and our survey results. Figure 2.1 shows the areas of concentration. Five states—Arkansas, Delaware, Louisiana, Nevada, and West Virginia—as well as the District of Columbia, had no food authorities using FSMCs in school year 1994-95. Furthermore, as table 2.1 shows, 10 states contained about three-fourths of the food authorities using FSMCs nationwide during school year 1994-95. The table also indicates the variation in the percentage of FSMC use within each of these states. Some of the 1,175 food authorities with FSMC contracts that responded to our questionnaire reported that they had used FSMCs for more than 20 years. However, the majority of these food authorities reported using FSMCs for a much shorter period. Figure 2.2 shows the number of years that these food authorities reported using FSMCs. Our analysis indicates that at the time of our survey, 10 years was the average amount of time that food authorities used FSMCs. According to the survey responses, the average size of the food service budgets of food authorities using and not using FSMCs was not significantly different. The food authorities using FSMCs, on average, had more schools and students in their school districts than food authorities not using FSMCs. 
These food authorities reported an average of 6.4 schools in their districts that participated in the lunch program, compared with an average of 4.7 (3.9 to 5.5) schools in districts not using FSMCs. Furthermore, food authorities using FSMCs reported higher enrollments in their districts—an average of 3,539 students—compared with an estimated average of 2,317 (1,889 to 2,745) students in districts not using FSMCs. We also found that of the food authorities using FSMCs, about 91 percent operate food services in public schools, and about 9 percent operate food services in private schools. While the food authorities using FSMCs were concentrated in certain sections of the nation, FSMCs were generally national companies. As figure 2.3 shows, 57 percent of the food authorities using FSMCs reported that they used food service companies that operate nationwide. Other food authorities used FSMCs that were local (operating within a state or at a single location) or regional (operating in more than one state) companies. Financial issues were frequently cited reasons for choosing, considering, or not choosing to use FSMCs, according to our survey results. About three-fourths of the food authorities that use FSMCs reported that they turned to them for financial reasons; 77 percent cited expectations of reducing food service costs as a major or moderate reason; and 70 percent cited expectations of reducing budget deficits as a major or moderate reason. While these reasons were cited most often as a major or moderate reason, food authorities also reported other considerations, including expectations of reducing administrative burden, increasing revenues, increasing student participation in the lunch program, increasing the nutritional value of the meals, having personnel or staffing concerns, and changing their employer/employee relationship with cafeteria staff. Figure 2.4 shows the frequency with which food authorities rated reasons listed in our questionnaire as either major or moderate. In addition, 2 to 4 percent of the food authorities not using FSMCs were considering their use. For these food authorities, financial concerns were also reasons why they might use FSMCs. Of these food authorities, 61 to 95 percent reported that one reason for considering a change was their belief that the use of FSMCs would reduce food service costs. The food authorities also indicated that reducing administrative burden was a reason for considering the use of FSMCs. Table 2.2 shows the frequency of reasons cited by food authorities for considering the use of FSMCs. In contrast, over half of the food authorities not using FSMCs indicated that they were not using FSMCs because of their own financial stability, among other reasons. From a list of reasons provided in our questionnaire, these food authorities cited the small size of their food service operation and their financial stability as reasons for not contracting with an FSMC. Over one-third of the food authorities indicated that it was the school board’s preference not to use FSMCs. A similar proportion indicated that they did not use FSMCs because of the good local perceptions regarding their operation of the food service. These and other reasons for not using FSMCs and the frequencies with which they were cited by food authorities are shown in figure 2.5. 
Seventy-eight percent of the food authorities using FSMCs reported that after using an FSMC, their food services were operating at about even with their budget or at a surplus—up from 27 percent prior to using an FSMC—in school year 1995-96. In comparison, the budgetary situation for these food authorities was about the same regarding reported budget deficits as that of food authorities not contracting with FSMCs. Food authorities using FSMCs reported that their costs for food, payroll, employee benefits, and administration were lower; student participation in the lunch program increased; and a la carte sales increased. Although the food authorities using FSMCs had improved their prior financial conditions, their average student participation rates were below those of food authorities that did not use FSMCs. After using FSMCs, 32 percent of the food authorities reported that their schools’ food service operated at a surplus; 46 percent reported operating at about even with their budgets; and 19 percent reported operating at a deficit. As figure 2.6 shows, food authorities improved their budget conditions after using FSMCs to the point where they were about the same regarding reported budget deficits as food authorities that were not using FSMCs. The figure also shows that 61 percent of the food authorities using FSMCs reported that prior to using FSMCs their schools’ food service operated at a deficit, while 20 percent reported operating at about even with their budgets. Only 7 percent of the food authorities reported operating their food service at a surplus prior to using an FSMC. As shown in figure 2.7, food authorities that used FSMCs generally reported reductions in various food service costs as a result of using FSMCs. Fifty-eight percent of the food authorities reported reduced food costs, and additional savings were reported in payroll, program administration, employee benefits, and cafeteria/kitchen supplies. Twenty-three percent of the food authorities reported cost reductions in cafeteria/kitchen equipment after using FSMCs. In addition to the budgetary improvements, food authorities reported the following other impacts from using FSMCs: Lunch program participation. Seventy-three percent of the food authorities using FSMCs reported increases in average student participation in the lunch program as a result of using FSMCs; 14 percent reported that it remained about the same; and 2 percent reported decreases. Sales of a la carte items. Seventy-four percent of the food authorities using FSMCs reported increases in the sales of a la carte items in their lunch program; 11 percent reported that their sales remained about the same; and 2 percent reported decreases. Students leaving school grounds. Among the food authorities using FSMCs and having schools that permit students to leave school grounds for lunch, 30 percent reported decreases in the number leaving as a result of using FSMCs; another 38 percent reported that the number remained about the same; and 7 percent reported an increase. (Twenty-five percent did not evaluate the effect of using an FSMC on students leaving school grounds.) In addition, 43 percent of the food authorities using FSMCs reported that most or all of their food service workers were retained by the school district when the food authorities began using an FSMC; 32 percent reported that all or most of their workers lost their jobs with the district but were rehired by the FSMC. 
(Our survey did not collect information on the possible changes in employee pay and benefits.) Thirty-six percent of the food authorities reported that their use of FSMCs resulted in a decrease in the number of school district employees overall. Also, a small percentage of food authorities reported that all or most of their staff retired, resigned, or were terminated by their district and not rehired by the FSMC. Finally, 36 percent of the food authorities using FSMCs reported that the amount of federal commodities they accept increased after using FSMCs; another 39 percent reported that their acceptance had remained constant; and 5 percent reported a decrease. Despite reported improvements in the budgetary situations of food authorities using FSMCs and reported increases in participation in the lunch program, these food authorities’ participation rates in the lunch program were lower than those reported by food authorities not using FSMCs. Our analysis shows that during school year 1995-96, the average participation rate for food authorities using food service companies was 49 percent, compared with 65 to 68 percent for those not using FSMCs. Food authorities’ contracts with FSMCs vary in content and in compliance with the selected federal requirements from the USDA guidance we reviewed. In addition to stating that FSMCs will prepare and serve school meals, the contracts assign responsibility for other meal-related services such as food purchasing and nutrition education to the FSMC in varying degrees. Furthermore, although most food service contracts state that food authorities will pay FSMCs using a cost-plus-a-fixed-fee payment structure, the types and number of fixed fees vary. Finally, about one-half to two-thirds of the FSMC contracts do not contain all provisions required by USDA’s guidance that we reviewed. The required provisions most often not found in the contracts were those intended to ensure that the food authorities maintain control of the school meals programs. While almost all FSMC contracts state that the FSMC is responsible for preparing and serving meals and identify which meals the FSMC will provide, the contracts vary with regard to other related services—such as food purchasing and nutrition education—that they assign to the FSMC. We found that some contracts assign responsibility for related meal services to the FSMC, some to the food authority, and some to both organizations. In addition, while most contracts contain provisions defining responsibilities for managing food service personnel, their treatment of issues affecting the employment of existing personnel varies. Our review indicates that almost all contracts state that the FSMC is responsible for preparing and serving meals. In addition, about 91 (84 to 98) percent of the contracts state that the FSMC will provide lunch, and 69 (58 to 80) percent state that the FSMC will provide breakfast. We also found that contracts specify a la carte service to be provided by the FSMC about as often as they specify breakfast. Table 3.1 shows the percentage of FSMCs’ contracts that provide for specific meal services. The FSMC contracts vary in the assignment of eight other related meal services we reviewed. Some contracts assign responsibility for these related meal services to the FSMC, some to the food authority, and some to both organizations. 
The eight services we examined were (1) purchasing food, (2) counting meals, (3) inventorying and storing food, (4) planning menus, (5) providing for nutrition education, (6) cleaning, (7) paying for utilities, and (8) repairing and maintaining equipment. As table 3.2 shows, it was common for contracts to assign up to three additional meal-related services to the FSMC, while few assigned more than three of these eight services to the FSMC. Table 3.3 shows the percentage of contracts assigning responsibility for various meal services to the FSMC, the food authority, or both. In addition to these eight services, we noted that FSMCs' contracts assign responsibility for other related meal services. Some services typically assigned to the FSMC are (1) catering; (2) providing for laundry and towels, condiments, and eating utensils; (3) representing food authorities at meetings; and (4) evaluating the food service. Some responsibilities typically assigned to the food authority include providing gas and oil for vehicles, telephone service, and garbage removal. Most FSMC contracts define responsibilities for managing food service personnel, but they vary in their treatment of issues affecting the employment of existing personnel. On the basis of our review of FSMCs' contracts, about 93 (86 to 99) percent of the FSMC contracts define responsibility for managing food service personnel in some fashion. More specifically, most of the contracts (82 to 97 percent) state that the FSMC will employ the food service manager. At least half (50 to 73 percent) of the contracts state that the FSMC will employ the food service staff. Other arrangements in the FSMC contracts specify that the food authority employ the staff (3 to 18 percent) and that the food authority and the FSMC each employ some of the food service staff (10 to 28 percent). In addition, our review showed that many of the FSMC contracts (41 to 65 percent) do not mention whether currently employed school food service staff will be retained by the food authority. However, some (9 to 27 percent) contracts state that the existing school staff will retain their jobs. Table 3.4 shows the percentage of FSMCs' contracts containing language regarding the retention of existing school staff. Furthermore, FSMC contracts vary on whether they include provisions against the hiring of current FSMC employees by the food authority or the hiring of current food authority employees by the FSMC. About 50 (38 to 62) percent of the FSMC contracts contain language restricting the food authorities' hiring of FSMC personnel. Conversely, 38 (27 to 50) percent of the FSMC contracts contain restrictions regarding the FSMCs' hiring of food authority personnel. Most FSMC contracts we reviewed have a cost-plus-a-fixed-fee payment structure, but fees vary. In addition, some contracts address other financial arrangements, such as the treatment of rebates and discounts that the FSMC receives from purchasing food for the school meals programs and guarantees for a financial return or against a financial loss to the food authority. Under federal program regulations, FSMCs' contracts may specify payments to the FSMC through either (1) a cost-plus-a-fixed-fee method or (2) a fixed-price or fee payment method. On the basis of our review, about 91 (84 to 98) percent of FSMCs' contracts use the cost-plus-a-fixed-fee payment method. 
According to USDA guidance, under the cost-plus-a-fixed-fee method, the FSMC passes food service operating costs through to the food authority and charges an additional fixed, or flat, fee for management and administrative costs. Typically, the administrative fee represents overhead costs, and the management fee represents the profits. A cost-plus-a-fixed-fee payment structure may include one or more of these fees and may also be quantified as a per-meal fee and/or an annual fee. On the basis of our review, about 40 (28 to 51) percent of the FSMC contracts have only annual fees; 50 (38 to 62) percent have only per-meal fees; and 10 (3 to 18) percent have annual fees and per-meal fees. Table 3.5 shows the most common types of fixed fees and associated average dollar amounts. Although federal regulations allow another payment method—a fixed-price or fee payment structure—few (0.3 to 11 percent) of the FSMC contracts specify this approach. According to USDA's guidance, in a fixed-price or fee contract, charges are based on a unit charge. The unit may be per meal or per time period, typically a year. For example, the FSMC might charge $1.50 per meal, or $50,000 per year. In each instance, the fee charged is expected to cover all operating and administrative costs, and no additional costs are to be charged to the food authority. (A simplified numerical comparison of these two payment methods appears later in this chapter.) Two other types of financial payments—cost-plus-a-percentage-of-cost and cost-plus-a-percentage-of-income—are not permitted under federal regulations (7 C.F.R. 210.16(c)). However, one contract that we reviewed specified a cost-plus-a-percentage-of-income payment structure. We are pursuing this issue with USDA officials. In addition to the payment structure specified in the FSMC contracts, contracts may contain language permitting the food authority and the FSMC to renegotiate payment terms. Such renegotiations could occur if actual experience does not conform to the assumptions upon which the original fee structure was based. On the basis of our review of FSMCs' contracts, about 51 (40 to 63) percent of the FSMC contracts contain provisions allowing for payment adjustments. According to USDA's guidance, as a control over purchasing, the FSMC's contract should state how discounts that the FSMC obtains when purchasing food are to be passed through to the food authority. We found that many contracts do not address rebates and discounts and that some FSMC contracts contain provisions allowing FSMCs to receive some of the rebates and discounts obtained from vendors. As table 3.6 shows, FSMCs' contracts vary in how these rebates and discounts are handled. Contracts that permit the FSMC to retain some of the rebates and discounts also vary in which party receives them. For example, some contracts we reviewed state that only local discounts will be passed back to the food authority; other discounts or rebates, from such sources as regional and national purchasing arrangements, are to be retained by the FSMC. According to USDA's guidance, FSMCs' contracts may contain language that guarantees a financial return or provides for protection against a financial loss to the food authority. On the basis of our review of FSMCs' contracts, about 18 (9 to 27) percent of the contracts contain a guarantee of surplus revenues. The average dollar amount of this guarantee was between $10,198 and $67,419. This type of guarantee was not always carried forward and in some cases was reduced when the contract was renewed. 
Of the 12 contracts we reviewed that initially guaranteed a surplus, 6 have contract renewals. Of those six, three continue the surplus guarantee in the current contract renewal. In two of those cases, the surplus guarantee was reduced when the contract was renewed. In addition, on the basis of our review of FSMCs’ contracts, about 44 (32 to 56) percent of FSMCs’ contracts contain provisions that guarantee against a financial deficit in operating the school meals programs. USDA’s guidance for food authorities’ contracts with FSMCs specifies a number of provisions that must appear in the contracts to ensure that federal requirements are met. State agencies are responsible for reviewing these contracts to ensure that all the required provisions are included. We reviewed FSMCs’ contracts to determine if they contained eight required provisions. We selected two provisions in each of the following four areas: (1) financial control, (2) USDA-donated foods, (3) monitoring and evaluation, and (4) duration and renewal of contracts. We found that about 57 (46 to 69) percent of the FSMC contracts do not contain all eight federally required provisions we reviewed. The required provisions that were most often not in the contracts were those intended to ensure that food authorities maintain control of the school meals programs. Table 3.7 shows the percentage of FSMCs’ contracts that do not contain one, two, three, or more of the eight federally required provisions we reviewed. Under federal requirements, FSMCs’ contracts must include a provision stating that the food authority retains control of the overall financial responsibility for the school meals programs, including the nonprofit school food service account. On the basis of our review of FSMCs’ contracts, about 35 (24 to 47) percent of FSMCs’ contracts do not contain this required provision. In addition, FSMCs’ contracts must include a provision reaffirming the food authority’s responsibility for establishing all prices for meals served under the nonprofit school food service account (e.g., pricing for all reimbursable meals, a la carte service and vending machines, and adult meals). Our review indicates that about 12 (4 to 19) percent of FSMCs’ contracts do not contain this required provision. Table 3.8 shows the percentage of FSMCs’ contracts that do not contain the required provisions we reviewed that address food authorities’ financial control responsibilities. Under federal requirements, all contracts must state that the food authority retain title to USDA-donated foods (such as fruit, vegetables, meat, and poultry). Some of FSMCs’ contracts do not contain this provision. In addition, food authorities are to ensure that these foods are used for the school meals programs. These USDA-donated foods offset the cost to food authorities of providing school meals. Few (3 of 68) of the FSMCs’ contracts we reviewed did not contain this provision. Table 3.9 shows the percentage of FSMCs’ contracts that do not contain the required provisions addressing food authorities’ responsibilities for USDA-donated foods. According to federal requirements, contract provisions must confirm the food authority’s responsibility to monitor the food service operation through periodic on-site visits. According to USDA’s guidance, the purpose of monitoring is to ensure that the FSMC complies with the contract and any other applicable federal, state, and local rules and regulations. 
On the basis of our review of FSMCs' contracts, about 18 (9 to 27) percent of the contracts do not contain this required provision. In addition to spelling out the food authority's monitoring responsibilities, the contract must state that the FSMC's records will be made available upon request to the Comptroller General, USDA, the state agency responsible for overseeing food authorities, and the food authority for audits and other types of evaluations. On the basis of our review, about 10 (3 to 18) percent of FSMCs' contracts do not contain all parts of this requirement. Table 3.10 shows the percentage of FSMCs' contracts that do not contain the required monitoring and evaluation provisions we reviewed. According to federal requirements, a contract must identify a beginning and ending date to ensure that the contract between the food authority and the FSMC is not longer than 1 year in duration. We found only 1 FSMC contract in the 68 we reviewed that did not contain provisions limiting the contract's duration to 1 year or less. In addition, federal requirements stipulate that options for renewing FSMCs' contracts may not exceed four additional 1-year extensions. Almost all of the FSMCs' contracts we reviewed (66 of 68) contain provisions for renewal at the end of 1 year, and only a few (3 of 68) omit the required limit on renewals. USDA's guidance for contracts with FSMCs specifies a number of provisions that must appear in these contracts to ensure that federal requirements are met. Required provisions include a range of terms and conditions addressing the food authority's and the FSMC's responsibilities in such areas as financial controls and payments; monitoring; the quality, extent, and general nature of the food service; controlling USDA-donated foods; and various record-keeping and reporting functions. State agencies, according to USDA's guidance, are responsible for reviewing these contracts to ensure that all the required provisions are included. According to USDA, the contract between a food authority and an FSMC is a major factor in ensuring a meal service that not only meets the best interest of the food authority but also conforms to federal, state, and local requirements. In addition, according to USDA, the contract is the basis for successful and appropriate oversight by the food authority. If food authorities' contracts with FSMCs lack required provisions specified in USDA's guidance, uncertainty may result about the responsibilities of each party and thereby diminish compliance with federal requirements. This uncertainty could occur even if a contract states that the FSMC will adhere to the lunch program's regulations because USDA's guidance is more specific than the regulations and specifies that the contracts must contain certain provisions. For example, the guidance requires the contract to include a provision that the food authority retain control of the school food service account and overall financial responsibility for the school nutrition program. In contrast, the lunch program's regulation (7 C.F.R. 210.16(a)(4)) states that the food authority shall "retain control of the quality, extent, and general nature of food service." In addition, since a contract may provide that it represents the entire agreement between the parties, the failure to require compliance with the guidance in the contract may mean that the FSMC is not bound by the required provisions in the guidance. 
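To make the two permissible payment methods more concrete, the sketch below compares a food authority's total annual payment under a cost-plus-a-fixed-fee arrangement and under a fixed-price (per-meal) arrangement, as those methods are described in USDA's guidance earlier in this chapter. Every meal count, cost, and fee in the sketch is an illustrative assumption, not a figure drawn from any contract we reviewed.

```python
# Illustrative comparison of the two payment methods permitted by
# 7 C.F.R. 210.16(c); every figure below is an assumption for illustration.
meals_served = 150_000            # meal equivalents served during the school year
operating_costs = 210_000.00      # food, labor, and supply costs passed through

# Cost-plus-a-fixed-fee: operating costs pass through to the food authority,
# plus fixed management and administrative fees (here, per-meal and annual).
management_fee_per_meal = 0.12
administrative_fee_annual = 15_000.00
cost_plus_total = (operating_costs
                   + meals_served * management_fee_per_meal
                   + administrative_fee_annual)

# Fixed-price: a single per-unit charge is expected to cover all operating
# and administrative costs; no other costs are charged to the food authority.
price_per_meal = 1.50
fixed_price_total = meals_served * price_per_meal

print(f"Cost-plus-a-fixed-fee total: ${cost_plus_total:,.2f}")
print(f"Fixed-price total:           ${fixed_price_total:,.2f}")
```

As the comparison suggests, the cost-plus method leaves the food authority bearing the risk of higher-than-expected operating costs, while the fixed-price method shifts much of that risk to the FSMC.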
While contracts between food authorities and FSMCs may properly vary in their assignments of responsibilities, they should not vary in their compliance with USDA’s guidance for contracting with FSMCs. If the provisions required by this guidance are not included in the contract, questions may arise over whether the FSMC is subject to these provisions. Consequently, such omissions could result in FSMCs’ noncompliance with the federal requirements for the lunch program. To achieve improved compliance with USDA’s guidance, we recommend that the Secretary of Agriculture direct the Administrator, Food and Consumer Service, to work with appropriate state officials to ensure that FSMCs’ contracts contain the provisions required by USDA’s guidance on contracting with FSMCs. We provided USDA’s Food and Consumer Service with copies of a draft of this report for review and comment. We met with agency officials including the Director of the Grants Management Division. USDA concurred with our recommendation and plans to take action. Planned actions include (1) sending a letter to appropriate state agencies reiterating the importance of including required provisions in FSMCs’ contracts and (2) making USDA’s guidance for contracting with FSMCs more readily available by placing it on the agency’s automated information system and the Internet. The percentage of public schools that participate in the lunch program and offer brand-name fast foods increased substantially from school year 1990-91 through school year 1995-96—from about 2 percent to about 13 percent. These schools offer one to two brand-name fast foods twice a week, on average, and generally offer them as part of a federally reimbursable lunch. Schools offering brand-name fast foods were more likely to be located in suburban areas and use an FSMC. They also have larger student populations on average. Most cafeteria managers at schools offering brand-name fast foods reported benefits from their use. Increased participation in the lunch program was the reason mentioned most often by cafeteria managers for offering these foods, and increased sales was the most frequently reported benefit. Most managers who did not use brand-name fast foods reported that they did not use them because they believed that the food they served was more nutritious. When coupled with other food items prescribed by the federal lunch pattern, brand-name fast foods can be incorporated into a lunch that is eligible for federal reimbursement. While most schools allowed students access to snack foods and/or drinks during lunch, fewer schools provided such items from vending machines. Cafeteria managers in 67 percent of the schools we surveyed reported that students had access to these foods from canteens and a la carte sales; in 20 percent of the schools, students had access to these items from vending machines. The percentage of schools offering brand-name fast foods increased from an estimated 2 percent in school year 1990-91 to about 13 percent in school year 1995-96, according to our analysis of the information the cafeteria managers provided us with. These schools offered one or two of these items two times a week, on average, and usually offered them as part of a federally reimbursable lunch. In addition, increased use of brand-name fast foods varied by several school characteristics, such as a school’s size and location. 
In the 1995-96 school year, an estimated 13 percent of the cafeteria managers in our survey reported using brand-name fast foods—up from about 2 percent in the 1990-91 school year. Figure 4.1 shows the percentage of schools offering brand-name fast foods at lunch since school year 1990-91. Moreover, 1 to 3 percent of the cafeteria managers reported that while their schools were not offering brand-name fast foods at the time of our survey, they were planning to offer them during the 1995-96 school year. Even though more schools were offering brand-name fast foods, the number of items offered and the frequency with which they were offered were somewhat limited. In the schools that offered these items, most cafeteria managers (60 to 72 percent) reported that they offered only one item, while others (24 to 36 percent) reported offering two or more items. In addition, brand-name fast foods were generally not offered every day but on an average of twice a week. Most schools (51 to 63 percent) offered a brand-name fast food once a week or less. About 19 (14 to 24) percent offered an item every day. Most schools offering brand-name fast food items included them as part of a lunch that qualifies for federal reimbursement under the lunch program, the cafeteria managers reported. However, about 24 (18 to 29) percent of the schools serving brand-name fast foods reported that they offered them solely as a la carte items. The three types of brand-name fast foods that schools most frequently offered were pizza, burritos, and subs and other sandwiches, excluding hamburgers. Of those schools offering brand-name fast foods, 80 (74 to 85) percent offered pizza, 21 (16 to 26) percent offered burritos, and 11 (7 to 15) percent offered subs and/or sandwiches. According to cafeteria managers, four fast food vendors provided the bulk of brand-name fast foods for the schools using these items in school year 1995-96. Collectively, about 73 (68 to 79) percent of the schools that offered brand-name fast foods used one or more of these four vendors: 36 (30 to 42) percent of the schools used Pizza Hut; 27 (21 to 32) percent, Domino's Pizza; 22 (17 to 27) percent, Taco Bell; and 6 (3 to 9) percent, Subway. Schools offering brand-name fast foods differed in a number of ways from those that did not. In particular, by school level, middle schools—about 25 (20 to 30) percent—and high schools—23 (18 to 28) percent—were more likely to offer these foods than elementary schools—9 (7 to 11) percent—during school year 1995-96. By location, suburban schools were more likely to offer brand-name fast foods than rural or urban schools. The difference between urban and rural schools was also significant. Approximately 22 (19 to 26) percent of the suburban schools used brand-name fast foods during the 1995-96 school year compared with 15 (12 to 19) percent of the schools in urban areas and 8 (7 to 10) percent of the schools in rural areas. In addition, regardless of school level or location, schools using and not using brand-name fast foods differed in the following areas: Student population. Schools offering brand-name fast foods were more likely to have larger student populations than schools not using these foods—an average of 730 to 885 students compared with 503 to 543 students, respectively. Cafeteria management. Schools offering brand-name fast foods were more likely to be managed by an FSMC than schools not offering these foods—about 18 (13 to 23) percent compared with 10 (8 to 11) percent, respectively. 
Offer versus serve. Elementary and middle schools offering brand-name fast foods were more likely to use the offer versus serve option than elementary and middle schools that did not—about 95 (91 to 98) percent compared with 84 (82 to 86) percent. Multiple entrees. Schools using brand-name fast foods were more likely to offer multiple entrees than schools that did not use these foods—about 83 (78 to 88) percent compared with 57 (54 to 59) percent. The use of brand-name fast foods benefited schools’ lunch service, according to most cafeteria managers we surveyed. They most often cited a desire to increase students’ participation in the lunch program as the reason for using brand-name fast foods, and they most frequently reported increased sales as a benefit. Those cafeteria managers not offering brand-name fast foods most frequently stated that the food currently being served was more nutritious as their reason for not offering those items. As figure 4.2 shows, 75 percent of the cafeteria managers at schools offering brand-name fast foods named increased student participation as the reason for turning to brand-name fast foods. Fifty-five percent said the students asked for brand-name fast foods, and another 46 percent said their food authority or district decided to provide students with brand-name fast foods. As shown in figure 4.3, in terms of benefits, cafeteria managers most often identified three changes following the introduction of brand-name fast foods: (1) 82 percent reported increased school lunch and a la carte sales, (2) 74 percent reported increased student satisfaction with the school lunch, and (3) 71 percent reported greater student participation. However, 6 percent of the managers said that they experienced no change in sales, and 1 percent reported a decrease. Schools’ use of brand-name fast foods appeared to have little effect on the number of schools’ food service workers. Sixty-four percent of the schools reported no change in the number of food service workers, another 5 percent reported a loss, and 10 percent reported a gain. According to cafeteria managers in 55 percent of the schools that did not use brand-name fast foods, their school did not use these foods because managers believed the food currently being served in their cafeteria was more nutritious. Thirty-six percent of the cafeteria managers said that their school did not use brand-name fast foods because these foods were too costly, and 35 percent reported that the food authority or school district prohibited their use. (See fig. 4.4.) Brand-name fast foods served alone do not qualify as a lunch meeting USDA’s nutritional standards and therefore are not eligible for federal reimbursement under the lunch program. However, meals that include brand-name fast foods and other foods prescribed by the federal lunch pattern, as discussed in chapter 1 and appendix II, can be eligible for federal reimbursement. Our analysis of available ingredient information for four fast foods—Pizza Hut’s pepperoni pizza, Domino’s pepperoni pizza, Taco Bell’s bean burrito, and Subway’s Club sandwich—and the lunch program’s requirements showed that these items can be incorporated into a lunch that qualifies for federal reimbursement under the program. Tables 4.1 through 4.4 show the contributions of the ingredients in the four fast food products to USDA’s prescribed lunch pattern requirements for group IV (ages 9 and older/grades 4 through 12). 
The tables also show examples of lunches that include these brand-name fast food items and could qualify for federal reimbursement under the program. Appendix II shows the federal lunch pattern requirements for the five age and grade categories. Appendix III identifies the nutrient content of these foods, as described by the fast food vendors. In most schools, students had access to snack foods from vending machines or other sources, such as school canteens, during the lunch period. About 67 (64 to 69) percent of the cafeteria managers said that their schools sold some type of snack food either a la carte or from a school canteen during lunch. According to the cafeteria managers, the most frequently available items were juice (51 percent); cakes, pastries, and cookies (47 percent); ice cream (44 percent); and fruits (42 percent). Nineteen (17 to 21) percent of the cafeteria managers reported selling some type of snack foods from vending machines during the lunch period. Juice (10 percent), carbonated soft drinks (10 percent), and chips (7 percent) were most frequently cited as being available to students via vending machines. Table 4.5 shows the types of snack foods available to students through vending machines, through school canteens, and a la carte during lunch. This appendix describes the survey methods we used to (1) determine the extent to which food authorities contracted with food service management companies (FSMCs) in school year 1994-95 and the impact of these companies on the school lunch program, (2) describe the terms and conditions in contracts between food authorities and FSMCs, and (3) determine the extent to which brand-name fast foods and vending machines are used in schools participating in the National School Lunch Program and obtain information on the most frequently used types and brands of these foods. We conducted two surveys—one of the states and the District of Columbia and one of food authorities—to determine the extent of FSMCs’ use and their impact on the lunch program. First, we sent two letters to the agencies responsible for administering the lunch program in each of the 50 states and the District of Columbia asking them to provide us with the names and addresses of (1) all food authorities in the state/District, both public and private, and (2) all food authorities with FSMC contracts in school year 1994-95. All states and the District provided us with both lists. To the extent possible, we eliminated camps and residential child care institutions from these two universes. The final universes included 19,248 and 1,462 food authorities, respectively, and included schools that did not participate in the lunch program. Thus, according to state/District agencies’ data, 7.6 percent of the food authorities had FSMC contracts in 1994-95. We then mailed questionnaires to all 1,462 food authorities identified by state/District agencies as having an FSMC contract and sent up to two follow-up mailings to encourage response. During the collection of the data, we identified 39 food authorities that were residential child care facilities, 5 food authorities that did not have contracts with FSMCs in school year 1994-1995, and 31 food authorities that had no schools in their district participating in the lunch program. Eighty-five percent (1,175) of the remaining 1,387 food authorities returned a completed questionnaire. Our survey results for this group represent only the 1,175 survey respondents that participated in the lunch program and had FSMC contracts. 
To compare food authorities that had FSMC contracts with those that did not, we also surveyed the latter group. For this group, we drew a simple random sample of 1,000 food authorities from the universe of 19,248 authorities identified by the 50 states and the District of Columbia. We eliminated from this sample 66 food authorities (6.6 percent of the sample) that were included among the 1,462 food authorities that state/District agencies reported as having contracted with an FSMC in the 1994-95 school year. We mailed questionnaires to the 934 food authorities identified as not having FSMC contracts. Of these, 1.6 percent (15) responded that they were residential child care facilities. The majority, 89.4 percent (835), returned completed questionnaires. Of these, 53 reported that they did not participate in the lunch program, and 17 reported that they had FSMC contracts in school year 1994-95. We did not use data from these 70 questionnaires in our analysis. Therefore, we used the responses from the remaining 765 questionnaires to compare food authorities that had FSMC contracts with those that did not. Our survey results represent an estimated 14,801 food authorities participating in the lunch program that did not contract with FSMCs in school year 1994-95. To describe the terms and conditions contained in contracts between food authorities and FSMCs, we selected a simple random sample of 82 food authorities from the starting universe of 1,462 food authorities identified by the 50 states and the District of Columbia as having a contract with this type of company. We asked these food authorities to provide us with a copy of their current food service contract and related documents as well as their questionnaire response. In the course of our review, we determined that 11 of the 82 food authorities did not belong in the universe of food authorities with FSMC contracts because they were residential child care institutions, did not have a contract with an FSMC, or were not participating in the lunch program. Of the remaining 71 food authorities in our sample, 68 (95.8 percent) provided us with the contract documents we requested. We used a pro forma data collection instrument to code information on selected terms and conditions in the contracts. The results from our analyses of the contracts can be projected to an estimated 1,212 of the food authorities contracting with food service management companies in school year 1994-95. To determine the extent to which schools use brand-name fast foods in the school lunch program and permit the use of vending machines, we surveyed public school cafeteria managers about their lunch program. We selected a simple random sample of 2,450 schools from the 87,100 schools listed in the National Center for Education Statistics' Common Core of Data Public School Universe, 1993-94 (Common Core of Data). Schools outside the 50 states and the District of Columbia were excluded from consideration. We sent a questionnaire to the cafeteria manager at each school and made up to two follow-up mailings to encourage response. Eighty percent (1,967) of those surveyed returned a questionnaire. Of these, 4 percent did not participate in the lunch program. We matched the remaining 1,887 survey responses to information about each school in the Common Core of Data. The results of this survey represent an estimated 65,743 of the 81,911 public schools that participated in the lunch program in the 1993-94 school year. 
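The sample-and-project approach described above can be illustrated with a minimal sketch. The universe and sample sizes mirror the food authority survey, but the identifiers, the random seed, and the resulting counts are hypothetical; the report's published estimates reflect the actual survey responses and the additional exclusions (nonrespondents, residential child care institutions, and food authorities not participating in the lunch program) described above.

```python
import random

random.seed(1995)  # arbitrary seed so the illustration is repeatable

# Hypothetical universe of 19,248 food authorities; 1,462 of them are
# (here, randomly) designated as having FSMC contracts per the state lists.
universe = [f"FA-{i:05d}" for i in range(1, 19_249)]
fsmc_universe = set(random.sample(universe, 1_462))

# Draw a simple random sample of 1,000, then drop those already in the
# FSMC universe, leaving the mail-out group of non-FSMC food authorities.
sample = random.sample(universe, 1_000)
mail_out = [fa for fa in sample if fa not in fsmc_universe]

# When projecting to the universe, each sampled food authority stands for
# universe_size / sample_size authorities.
weight = len(universe) / len(sample)
print(f"{len(mail_out)} questionnaires mailed; each response weighted by {weight:.2f}")
print(f"Projected food authorities without FSMC contracts: {len(mail_out) * weight:,.0f}")
```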
A number of the surveys were completed for the surveyed school's district rather than the individual school. In those cases, we used information from the Common Core of Data to determine the surveyed school's grade level and location. Unless otherwise stated in the survey response, we assumed that districtwide information held for the surveyed school. Three of our five data collection strategies relied on statistical sampling: the survey of food authorities not contracting with food service management companies, the selection of contracts between food authorities and food service management companies, and the survey of cafeteria managers. As with all sample surveys, our statistical estimates that were based on these data collection strategies contain sampling error—the potential error that arises from not collecting data from all food authorities, on all contracts, or from cafeteria managers at all schools. The two data collection strategies not using statistical samples were the state survey concerning the prevalence of FSMC contracts and the survey of all food authorities with FSMC contracts. Those results do not contain sampling error. We calculated the amount of sampling error for each estimate at the 95-percent confidence level. This means, for example, that if we repeatedly sampled food authorities from the same universe and performed our analysis again, 95 percent of the samples would yield results within the range specified by our survey estimate plus or minus the sampling error. This range is the 95-percent confidence interval. In calculating the sampling errors, we did not make a correction for sampling from a finite population. The sampling error must also be taken into consideration when interpreting differences between subgroups of interest, such as food authorities that did and did not contract with FSMCs. For each contrast of subgroups that we reported, we calculated the statistical significance of any observed differences. Statistical significance means that the differences we observed between subgroups are larger than would be expected from the sampling error. When this occurs, some phenomenon other than chance is likely to have caused the difference. Statistical significance is absent when an observed difference between two subgroups, plus or minus the sampling error, results in a confidence interval that contains zero. It should be noted, however, that even in the absence of a statistically significant difference, a difference may exist; rather, the sample size or number of respondents to a question may not have been sufficient to allow us to detect it. We used the chi-square goodness-of-fit statistic to test for differences in percentages between food authorities that did and did not contract with FSMCs, and we used the one-sample t-test for differences in means. We used the chi-square test of association to test for differences in percentages between subgroups of cafeteria managers, such as those located in rural versus suburban areas. We used the paired samples t-test to compare responses on two different questions within a questionnaire.
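As a concrete illustration of these calculations, the sketch below computes the 95-percent sampling error for a single survey proportion and then checks whether the confidence interval around a difference between two subgroups contains zero, the criterion for the absence of statistical significance described above. The proportions and sample sizes are hypothetical; the intervals reported elsewhere in this report come from the actual survey data and, for subgroup comparisons, from the chi-square and t-test procedures just noted.

```python
import math

def margin_95(p: float, n: int) -> float:
    """95-percent sampling error for a proportion estimated from a simple
    random sample (no finite-population correction, as in this report)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Hypothetical subgroup results: share of food authorities reporting a
# budget deficit, with and without FSMC contracts (values are illustrative).
p_fsmc, n_fsmc = 0.19, 1_175
p_other, n_other = 0.21, 765

diff = p_fsmc - p_other
se_diff = math.sqrt(p_fsmc * (1 - p_fsmc) / n_fsmc
                    + p_other * (1 - p_other) / n_other)
low, high = diff - 1.96 * se_diff, diff + 1.96 * se_diff

print(f"FSMC group: {p_fsmc:.0%} +/- {margin_95(p_fsmc, n_fsmc):.1%}")
print(f"Non-FSMC group: {p_other:.0%} +/- {margin_95(p_other, n_other):.1%}")
# If the interval around the difference contains zero, the difference is
# not statistically significant at the 95-percent confidence level.
print(f"Difference: {diff:+.1%} (95% CI: {low:+.1%} to {high:+.1%}); "
      f"statistically significant: {not (low <= 0 <= high)}")
```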
[Appendix II table: the federal lunch pattern's minimum serving requirements for milk, bread or bread alternates, and meat or meat alternates, shown for Group I (ages 1-2/preschool), Group II (ages 3-4/preschool), Group III (ages 5-8/grades K-3), Group IV (ages 9 and older/grades 4-12), and Group V (ages 12 and older/grades 7-12).] A note to the table states that certain alternate items, as listed in the program's guidance, may be used to meet no more than 50 percent of the requirement and must be used in combination with any of the following: lean meat, poultry, fish, cheese, large egg, cooked dry beans or peas, and peanut butter or other nut or seed butters. A combination of peanuts, soy nuts, tree nuts, or seeds can fulfill the meat/meat alternate requirement: 1 ounce of nuts or seeds equals 1 ounce of cooked lean meat, poultry, or fish. [Appendix III table: nutrient content of the sampled brand-name fast foods as reported by the vendors, including calories, calcium, carbohydrates, total fat, saturated fat, cholesterol, sodium, dietary fiber, sugars, protein, vitamin A, vitamin C, iron, moisture, and ash; no information was given for some items.] Major contributors to this report were Thomas E. Slomba, Assistant Director; Peter Bramble, Project Leader; Carolyn Boyce; Andrea Wamstad Brown; Rebecca Johnson; and Carol Herrnstadt Shulman.
Pursuant to a legislative requirement, GAO examined the extent to which schools use private food service companies to operate their lunch programs, focusing on the: (1) terms and conditions of the contracts between schools and food service management companies (FSMC); and (2) percentage of schools offering brand-name fast foods that participate in the National School Lunch Program. GAO found that: (1) the percentage of food authorities participating in the school lunch program and contracting with FSMC has increased from 4 to 8 percent; (2) most food authorities use FSMC to reduce their budget deficit and increase revenue; (3) the advantages of using FSMC include paying lower costs for food, payroll, employee benefits, and administration; (4) schools with food service contracts have fewer students participating in school lunch programs than those schools not using FSMC; (5) more food service workers remain employed as a result of schools' contracting with FSMC; (6) food service contracts vary depending on the type of meal and the federal regulations governing the contracts; (7) most food service contracts require an annual fee, half stipulate a per-meal fee, and some stipulate both fees; (8) about one-half to two-thirds of FSMC contracts do not contain standard contractual provisions to ensure compliance with federal requirements; (9) the provisions most often omitted from the contracts are intended to ensure that the food authority maintains control of the school meals program; (10) the failure to include these provisions creates uncertainty regarding FSMC responsibilities and diminishes the food authority's ability to ensure that FSMC adheres to federal requirements; and (11) although the percentage of schools offering brand-name fast foods has increased, the number of items offered is limited.
Like many organizations, VA faces the possibility of computer system failures at the turn of the century due to incorrect information processing relating to dates. The reason for this is that in many systems, the year 2000 is indistinguishable from 1900, since the year is represented only by "00." This could make veterans who are eligible for benefits and medical care appear ineligible. If this happens, the issuance of benefits and the provision of medical care that veterans rely on could be delayed or interrupted. As we reported last August, VBA had made progress in addressing the recommendations in our May 1997 report and making its information systems Y2K compliant. It reported it had renovated 75 percent of its mission-critical applications as of June 1998. At the same time, VHA reported it had assessed all and renovated the vast majority of its mission-critical information systems. Despite this overall progress, VBA had made only limited progress in renovating two key mission-critical applications--the compensation and pension online application and the Beneficiary Identification and Record Locator Sub-System. And, except for its Insurance Service, VBA had not developed business continuity and contingency plans for its program services--Compensation and Pension (the largest), Education, Loan Guaranty, and Vocational Rehabilitation and Counseling--to ensure that they would continue to operate if Y2K failures occurred. VHA's Y2K program likewise had areas of concern. For example, although VHA's medical facilities had hospital contingency plans, as required by the Joint Commission on Accreditation of Healthcare Organizations, they had not yet completed Y2K business continuity and contingency plans. To address these areas and to reduce the likelihood of delayed or interrupted benefits and health care services, we recommended that VA reassess its Y2K mission-critical efforts for the compensation and pension online application and the Beneficiary Identification and Record Locator Sub-System, as well as other information technology initiatives, such as special projects, to ensure that the Y2K efforts have adequate resources, including contract support, to achieve compliance in time; establish critical deadlines for the preparation of business continuity and contingency plans for each core business process or program service so that mission-critical functions affecting benefits delivery can be carried out even if software applications and commercial-off-the-shelf (COTS) products fail, including a description of resources, staff roles, procedures, and timetables needed for implementation; and ensure rapid development of business continuity and contingency plans for each medical facility so that mission-critical functions affecting patient care can be carried out if software applications, COTS products, and/or facility-related systems and equipment do not function properly, including a description of resources, staff roles, procedures, and timetables needed for implementation. VA has been responsive to our recommendations. For example, VBA reassessed its mission-critical efforts for the compensation and pension online application and the Beneficiary Identification and Record Locator Sub-System, as well as other information technology initiatives. It also reallocated resources to ensure that the Y2K efforts had adequate resources, including contract support, to achieve compliance. 
In addition, VBA completed a draft business continuity and contingency plan in January 1999 for its core business processes, as well as a related planning template for its regional offices. The plan provides a high-level overview of the resources, staff roles, procedures, and timetables for its implementation. It addresses risks, including mitigation actions to reduce the impact of Y2K-induced business failures, and analyzes the effect on each business line of a number of potential Y2K disasters--such as loss of electrical power, loss of communications, loss of data processing capabilities, and failure of internal infrastructure. According to VBA, the plan, which it expects to test this August, is an evolving document, to be revised and updated periodically until January 1, 2000. VBA’s plan makes no reference to contingencies for the failure of three of VBA’s benefit payment systems--Compensation and Pension, Education, and Vocational Rehabilitation and Counseling. However, it is currently developing a payment contingency plan for these systems and expects this to be completed in May 1999. A VBA official told us that the payment contingency plan should have been referenced in VBA’s business continuity and contingency plan and will be in future versions. The current plan also does not contain the designation of an information technology security coordinator and a physical security coordinator--individuals that VBA acknowledges are essential to the agency’s Y2K efforts--with responsibility for ensuring overall security for VBA's network and web site and backing up data storage before, during, and following January 1, 2000. This type of information will be necessary if security-related failures occur. According to VBA, it expects to designate these individuals by August 1999. VHA has also made progress in developing business continuity and contingency plans for its medical facilities. Last month, VHA issued its Patient-Focused Year 2000 Contingency Planning Guidebook to its medical facilities describing actions they can take to minimize Y2K-related disruptions to patient care. The guidebook discusses how the facilities should develop contingency plans for each major hospital function--such as radiology, pharmacy, and laboratory--as well as each major support function--such as telecommunications, facility systems, medical devices, and automated information systems. The guidebook also contains examples of plans, policies, and solutions for problems that a medical facility may face and provides Y2K templates describing the areas a facility should address by specific hospital function. VA provided this guidebook to the medical facilities early last month and expects the facilities to use it to prepare their individual business continuity and contingency plans, set to be completed by April 30. The guidebook stresses that these plans should be tested and suggests that the medical facilities begin testing in June. The guidebook addresses external emergency preparedness as well as internal operations. Specifically, it discusses three functions that medical facilities should perform in order to ensure that potential external hazards are considered and planned for. 
These are (1) performing an assessment of hazard vulnerabilities--that is, the types and kinds of Y2K problems that are anticipated within the community, (2) conducting an inventory of community resources--people, money, clinical space, supplies, and equipment--available to address these hazards, and (3) closing the gap between vulnerabilities and capabilities by putting into place measures that will mitigate potential disruptions in critical services by developing new working relationships with various government agencies, non-VA health care organizations, and vendors of critical supplies. In addition to implementing our recommendations, VA continues to make progress renovating, validating, and implementing its systems. On March 31, 1999, VA reported to the Office of Management and Budget (OMB) that the department has renovated and implemented all of the mission-critical applications supporting its 11 systems areas. As shown in table 1, VBA has six of these areas, and VHA has two. Complete and thorough Y2K testing is essential to providing reasonable assurance that new or modified systems will process dates correctly and will not jeopardize an organization’s ability to perform core business operations. Because the Y2K problem is so pervasive, potentially affecting an organization’s systems software, applications software, databases, hardware, firmware, embedded processors, telecommunications, and interfaces, the requisite testing can be extensive and expensive. Experience is showing that Y2K testing is consuming between 50 and 70 percent of a Y2K project’s time and resources. According to our Y2K guide, to be done effectively, testing should be planned and conducted in a structured and disciplined fashion. Our guide describes a step-by-step framework for managing Y2K testing, which includes the following key processes: Software unit testing to verify that the smallest defined module of software (individual subprograms or procedures) continues to work as intended. Software integration testing to verify that units of software, when combined, continue to work together as intended. Typically, integration testing focuses on ensuring that the interfaces work correctly and that the integrated software meets requirements. System acceptance testing to verify that the complete system--that is, the full complement of application software running on the target hardware and systems software infrastructure--satisfies specific requirements and is acceptable to users. This testing can be run separately or in some combination in an operational environment (actual or simulated) and collectively verifies that the entire system performs as expected. According to VBA and VHA officials, their testing criteria were based on their software development life cycle guidance documents. They said that upon completion of software unit and integration testing, a system is considered Y2K compliant. They said this type of testing had been completed for all of their mission-critical systems. As of March 31, 1999, neither VBA nor VHA had completed systems acceptance testing--which requires that each system be tested, including full forward-date testing, on a compliant platform--for all their mission-critical systems. Specifically, according to VBA officials, the agency had completed systems acceptance testing for half of its mission-critical systems--Insurance, Loan Guaranty, and Vocational Rehabilitation and Counseling.
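As a concrete illustration of the lowest level of the testing framework described above, the sketch below shows what a software unit test of a single date-handling routine might look like, written here using Python's unittest module. The routine and its test cases are hypothetical; they are meant only to show the kind of forward-date and leap-day assertions that unit testing exercises, with integration, acceptance, and end-to-end testing applying the same checks to progressively larger combinations of software.

```python
import unittest
from datetime import date

def days_until_annual_review(last_review, today):
    """Hypothetical benefits routine: days remaining in a one-year
    review cycle, computed with four-digit-year dates."""
    return 365 - (today - last_review).days

class DateRoutineUnitTest(unittest.TestCase):
    def test_crosses_century_boundary(self):
        # A review dated late 1999 and evaluated in early 2000 must not
        # produce a negative or century-sized result.
        self.assertEqual(
            days_until_annual_review(date(1999, 12, 1), date(2000, 1, 10)), 325)

    def test_leap_day_2000(self):
        # 2000 is a leap year, so February 29, 2000 must be a valid date.
        self.assertEqual((date(2000, 3, 1) - date(2000, 2, 29)).days, 1)

if __name__ == "__main__":
    unittest.main()
```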
According to VBA’s March 1999 draft test plan, systems acceptance testing of the Compensation and Pension and most of the Education systems was to start in mid-April 1999. According to a VBA official, one of the reasons for the late systems testing was that the IBM platform at its Hines, Illinois, data center was not made Year 2000 compliant until the compiler was upgraded in February 1999. According to VBA, the Compensation and Pension and most of the Education systems will be future-date tested throughout April. VHA also plans to begin system acceptance testing of its mission-critical systems this month and complete it this June. According to VHA officials, they could not perform this type of testing before March of this year because VHA did not have a separate Y2K-compliant test environment to isolate the testing from the hospital systems in use. In addition to testing of individual systems, end-to-end testing of multiple systems is also critical. End-to-end testing, as defined in our test guide, verifies that a defined set of interrelated systems, which collectively support an organizational core business area or function, continues to work as intended in an operational environment, either actual or simulated. For example, in order to successfully process a compensation benefit payment to a veteran, VBA’s Compensation and Pension System must work correctly with its Beneficiary Identification and Records Locator Sub-System, Treasury’s Financial Management System, the Federal Reserve System, and financial institution systems. VBA and VHA plan to conduct end-to-end testing between now and this July. VBA is defining end-to-end testing as verification that core mission-critical business functions, including benefit payments and vendor and payroll payments, process correctly. The interfaces between VBA’s benefits system and Treasury’s Financial Management System are to be tested in May. VBA also plans to test transactions that interface with VHA systems, such as information related to veteran eligibility. VHA is defining end-to-end testing as verification that core mission-critical business functions, including patient-care transactions and vendor and payroll payments, process correctly. Once these tests are completed, VBA and VHA plan to conduct a “business process simulation” during the July 4, 1999, weekend. This simulation of day-to-day work at VA is to include users at the VBA regional offices and VHA test laboratories, who will simulate various transactions and process them through a set of interrelated systems necessary to complete a core business function. VBA expects to pretest the business process simulation during May. VA’s facility systems are essential to the continued delivery of health care services. For example, heating, ventilating, and air conditioning equipment is used by hospitals to ensure that contaminated air is confined to a specified area such as an isolation room or patient ward. If computer systems used to maintain these systems were to fail, any resulting climate fluctuations could affect patient safety. Despite their importance, VHA has not yet completed its assessment of facility systems. As of February 28, 1999, VHA medical facilities reported that they had assessed 55 percent of their facility systems.
According to VHA’s Director of Safety and Technical Programs, the remaining 45 percent have not been fully assessed primarily because (1) facility systems tend to be a combination of unique elements that have to be separately assessed for compliance--a time-consuming process--and (2) VHA is still awaiting compliance status information from facility system manufacturers. VHA has not established milestones for completing its assessment and implementation of compliant facility systems. To help ensure that sufficient time remains to complete these activities, we recommend that VHA consider setting such deadlines. In the event that facility-related systems and equipment do not function properly due to Y2K problems, VHA medical facilities will need to ensure that they have business continuity and contingency plans addressing how mission-critical functions affecting patient care will be carried out. According to VHA’s Director of Safety and Technical Programs, most of its facility systems have some kind of manual override or reset that will allow them to continue functioning after a Y2K problem. He agreed, however, with the importance of developing contingency plans that fully document continued delivery of essential services in the event of a facility system failure. VHA medical facilities expect to have individual business continuity and contingency plans completed by April 30. On April 14, 1999, VA informed us that its February 28, 1999, report contained an error. The corrected numbers for facility systems at the end of February were 91 percent assessed and 9 percent not assessed. The question of whether medical devices such as magnetic resonance imaging (MRI) systems, x-ray machines, pacemakers, and cardiac monitoring equipment can be counted on to work reliably on and after January 1, 2000, is also critical to VHA. To the extent that biomedical equipment uses embedded computer chips, it is vulnerable to the Y2K problem. Such vulnerability carries with it possible safety risks. This could range from the more benign--such as incorrect formatting of a printout--to the most serious--such as incorrect operation of equipment with the potential to adversely affect the patient. The degree of risk depends in large part on the role the equipment plays in a patient’s care. Last September we testified before this Subcommittee that VHA was making progress in assessing its biomedical equipment, but that it did not know the full extent of the Y2K problem with this equipment because it had not received compliance information from 398 manufacturers (26.7 percent). According to VHA, as of March 16, 1999, the number of nonresponsive manufacturers had been reduced to 126 (8.5 percent). As shown in table 2, about 19 percent of the manufacturers in VHA’s database of suppliers had at least one biomedical equipment item that was either noncompliant or conditionally compliant. To identify specific biomedical equipment in the inventories of VHA’s medical facilities that still require Y2K compliance status information from manufacturers, VHA’s Chief Network Officer sent a letter to the directors of VHA's 22 Veterans Integrated Service Networks (VISN). This letter requested that they (1) review VHA’s list of manufacturers that have yet to respond and compare it with a list of manufacturers from whom their medical facilities still require compliance information and (2) indicate the equipment item that the facility owns for each manufacturer. 
According to VHA’s Y2K project director, as of mid-March--with 135 of 147 medical sites reporting--47 biomedical equipment items involving 35 manufacturers were identified as still requiring compliance status information. The project director told us that VHA medical facilities have been instructed to replace or eliminate equipment in their inventories for which they do not know the compliance status by June 30. According to VHA’s February 1999 status report on medical devices, medical facilities estimated that the total cost of renovations will be about $41 million. We have previously reported that most manufacturers citing noncompliant products listed incorrect display of date and/or time as the Y2K problem. According to VA, these cases do not present a risk to patient safety because health care providers, such as physicians and nurses, can work around the problem. Of more serious concern are situations in which devices depend on date calculations--the results of which can be incorrect. One manufacturer cited the example of a product used for planning delivery of radiation treatment using a radioactive isotope as the source. An error in calculating the strength of the radiation source on the day of treatment could result in a dose that is too high or too low, which could have an adverse effect on the patient. Other examples of equipment presenting a risk to patient safety identified by manufacturers to FDA include hemodialysis delivery systems; therapeutic apheresis systems; alpha-fetoprotein kits for neural tube defects; various types of medical imaging equipment; and systems that store, track, and recall images in chronological order. To track the compliance status of its biomedical equipment, VHA uses a monthly status report on medical devices based on information provided by the VISNs. According to the February 1999 report, approximately 426,000 of 531,000 medical devices in VHA medical facilities are compliant. Of the remaining devices, 86,452 were identified as conditionally compliant or were not assessed for Y2K compliance because the manufacturers certified that the equipment contained no software or embedded chips, and 19,073 were reported as being noncompliant. Of the noncompliant devices identified, 15,621 are to be repaired, 1,582 are to be replaced, 757 are to be used as is, 255 are to be retired, and 858 are still awaiting a decision on the remedy. According to VHA’s Chief Biomedical Engineer, most of the noncompliant devices identified incorrectly displayed date/time. As we reported last September, FDA was also trying to determine the Y2K compliance status of biomedical equipment. Its goal is to provide a comprehensive, centralized source of information on the Y2K compliance status of biomedical equipment used in the United States and make this information publicly available on a web site. At the time, however, FDA had a disappointing response rate from manufacturers to its letter requesting compliance information. And, while FDA made this information available to the public, it was not detailed enough to be useful. Specifically, FDA’s list of compliant equipment lacked information on particular make and model. To provide more detailed information on the compliance status of biomedical equipment, as well as to integrate more detailed compliance information gathered by VHA, we recommended that VA and the Department of Health and Human Services (HHS) jointly develop a single data clearinghouse that provides such information to all users.
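The device counts cited from the February 1999 report can be reconciled arithmetically; the short sketch below simply tallies the figures given above to confirm that the remedy categories account for all reported noncompliant devices and that the three status groups approximate the reported total (the first two figures are rounded in the report).

```python
# Figures cited from VHA's February 1999 medical-device status report.
total_devices = 531_000          # approximate total in VHA medical facilities
compliant = 426_000              # approximate
conditional_or_not_assessed = 86_452
noncompliant = 19_073

remedies = {"repair": 15_621, "replace": 1_582, "use as is": 757,
            "retire": 255, "remedy undecided": 858}

# The remedy categories sum exactly to the noncompliant total.
assert sum(remedies.values()) == noncompliant

# The three status groups approximate the reported total; the small
# difference reflects rounding of the first two figures.
accounted = compliant + conditional_or_not_assessed + noncompliant
print(accounted, abs(accounted - total_devices))  # 531525, within about 0.1%
```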
We said development of the clearinghouse should involve representatives from the health care industry, such as the Department of Defense and the Health Industry Manufacturers Association. We recommended that the clearinghouse contain such information as (1) the compliance status of all biomedical equipment by make and model and (2) the identity of manufacturers that are no longer in business. We also recommended that VHA and FDA determine what actions should be taken regarding biomedical equipment manufacturers that have not provided compliance information. In response to our recommendation, FDA--in conjunction with VHA--has established the Federal Year 2000 Biomedical Equipment Clearinghouse. With the assistance of VHA, the Department of Defense, and the Health Industry Manufacturers Association, FDA has made progress in obtaining compliance-status information from manufacturers. For example, according to FDA, as of April 5, 1999, 4,251 biomedical equipment manufacturers had submitted data to the clearinghouse. As shown in figure 1, about 54 percent of the manufacturers reported having products that do not employ a date, while about 16 percent reported having date-related problems such as incorrect display of date/time. FDA is still awaiting responses from 399 manufacturers. FDA has also expanded the information in the clearinghouse. For example, users can now find information on manufacturers that have merged with or have been bought out by other firms. In collaboration with the National Patient Safety Partnership, FDA is in the process of obtaining more detailed information from manufacturers on noncompliant products, such as make and model and descriptions of the impact of the Y2K problem on products left uncorrected. We reported last September that VHA and FDA relied on manufacturers to validate, test, and certify that equipment is Y2K compliant. We also reported that there was no assurance that the manufacturers adequately addressed the Y2K problem for noncompliant equipment because FDA did not require medical device manufacturers to submit test results to it certifying compliance. Accordingly, we recommended that VA and HHS take prudent steps to jointly review manufacturers’ compliance test results for critical care/life support biomedical equipment. We were especially concerned that VA and FDA review test results for equipment previously determined to be noncompliant but now deemed by manufacturers to be compliant, or equipment for which concerns about compliance remain. We also recommended that VA and HHS determine what legislative, regulatory, or other changes were necessary to obtain assurances that the manufacturers’ equipment was compliant, including performing independent verification and validation of the manufacturers’ certifications. At the time, VA stated that it had no legislative or regulatory authority to implement the recommendation to review test results from manufacturers. In its response, HHS stated that it did not concur with our recommendation to review test results supporting medical device equipment manufacturers’ certifications that their equipment is compliant. It believed that the submission of appropriate certifications of compliance was sufficient to ensure that the certifying manufacturers are in compliance. HHS also stated that it did not have the resources to undertake such a review, yet we are not aware of HHS’ requesting resources from the Congress for this purpose.
More recently, VHA’s Chief Biomedical Engineer told us that VHA medical facilities are not requesting test results for critical care/life support biomedical equipment; they also are not currently reviewing the test results available on manufacturers’ web sites. He said that VHA’s priority is determining the compliance status of its biomedical equipment inventory and replacing noncompliant equipment. The director of FDA’s Division of Electronics and Computer Science likewise said FDA sees no need to question manufacturers’ certifications. In contrast to VHA’s and FDA’s positions, some hospitals in the private sector believe that testing biomedical equipment is necessary to prove that they have exercised due diligence in the protection of patient health and safety. Officials at three hospitals told us that their biomedical engineers established their own test programs for biomedical equipment, and in many cases contacted the manufacturers for their test protocols. Several of these engineers informed us that their testing identified some noncompliant equipment that the manufacturers had certified as compliant. According to these engineers, to date, the equipment found to be noncompliant all had display problems and was not critical care/life support equipment. We were told that equipment found to be incorrectly certified as compliant included a cardiac catheterization unit, a pulse oximeter, medical imaging equipment, and ultrasound equipment. VHA, FDA, and the Emergency Care Research Institute continue to believe that manufacturers are best qualified to analyze embedded systems or software to determine Y2K compliance. They further believe that manufacturers are the ones with full access to all design and operating parameters contained in the internal software or embedded chips in the equipment. VHA believes that such testing can potentially cause irreparable damage to expensive health care equipment, causing it to lock up or otherwise cease functioning. Further, a number of manufacturers also have recommended that users not conduct verification and validation testing. We continue to believe that rather than relying solely on manufacturers’ certifications, organizations such as VHA or FDA can provide users of medical devices with a greater level of confidence that the devices are Y2K compliant through independent reviews of manufacturers’ compliance test results. The question of whether to independently verify and validate biomedical equipment that manufacturers have certified as compliant is one that must be addressed jointly by medical facilities’ clinical staff, biomedical engineers, and corporate management. The overriding criterion should be ensuring patient health and safety. Another critical component of VA’s ability to deliver health care at the turn of the century is ensuring that the automated systems supporting VHA’s medical facility pharmacies and its consolidated mail outpatient pharmacies (CMOP) are Y2K compliant. VHA reported that in 1998, it filled about 72 million prescriptions for 3.4 million veterans, at an estimated cost of about $2 billion. About half of the prescriptions were filled by the over 200 pharmacies located in VA’s medical centers, clinics, and nursing homes.
These pharmacies rely on the pharmaceutical applications in the Veterans Health Information Systems Architecture (VISTA) for (1) drug distribution and inventory management, (2) dispensing of drugs to inpatients and outpatients, (3) patient medication information, and (4) an electronic connection between the pharmacies and the CMOPs. Y2K failures in these applications could impair the pharmacies’ ability to fill prescriptions. The remaining 50 percent of VHA’s prescriptions are filled by seven CMOPs located throughout the United States. These facilities are supported by automated systems provided by one of two contractors--SI/Baker, Inc. and Siemens ElectroCom. For example, the CMOP electronically receives a prescription for a veteran through the medical center. The prescription is downloaded to highly automated dispensing equipment to be filled. The filled prescription is then validated by a pharmacist who compares the medication against a computerized image of the prescribed medication. Afterward, the prescription is packaged and an automatically generated mailing label is applied for delivery to the veteran. Finally, the medical center is electronically notified that the prescription has been filled. Because of the reliance on automation, the CMOPs’ ability to fill prescriptions could be delayed or interrupted if a Y2K failure occurred. VHA has determined that the automated systems supporting its CMOPs are not Y2K compliant. Specifically, neither of the systems provided by these contractors is Y2K compliant. According to the Y2K coordinator for the SI/Baker facilities, failure to make the SI/Baker systems Y2K compliant may delay the filling of outpatient prescriptions. The SI/Baker systems are used by three of VHA’s CMOPs--Hines, Illinois; Charleston, South Carolina; and Murfreesboro, Tennessee; they handle about 58 percent of all prescriptions filled by CMOPs. In contrast to the SI/Baker systems, according to a contractor hired by the CMOPs that use these systems, failure to make the Siemens ElectroCom systems Y2K compliant may result in delays in processing management reports for prescriptions filled, but not the actual filling of prescriptions. Although the CMOPs plan to replace their noncompliant systems with compliant ones, these systems are not scheduled to be implemented until mid- to late-1999. As shown in table 3, the earliest estimated completion date for implementing a compliant system is June 30, 1999, while the latest is December 1, 1999. This leaves little time to address any unexpected implementation problems. Given the late schedule for implementing compliant systems, it is crucial that the CMOPs develop business continuity and contingency plans to ensure that veterans will continue to receive their medications if these systems are not implemented in time or fail to operate properly. As of March 31, VA had not completed a business continuity and contingency plan for the CMOPs. The Y2K coordinator for the Siemens ElectroCom system has been tasked with developing this plan, which is to be completed by the end of this month. Further, VA did not include the CMOP systems in its quarterly reports of mission-critical systems to OMB. According to VHA’s Y2K project director, VHA considered the CMOP systems to be COTS products and, therefore, did not report them as mission-critical systems.
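To make the dependence on automation concrete, the sketch below models the CMOP prescription flow described above as a sequence of processing steps and shows how a single date-related failure at the automated dispensing step would halt a prescription before it reaches the veteran. The step names and record fields are simplified stand-ins, not taken from the SI/Baker or Siemens ElectroCom systems.

```python
# Illustrative model of the CMOP prescription flow; steps and fields
# are hypothetical simplifications of the process described above.

def receive_from_medical_center(rx):
    rx["status"] = "received electronically"
    return rx

def dispense_automatically(rx, date_field_ok=True):
    # Stands in for the highly automated dispensing equipment; a
    # rejected "00" date field would stop the prescription here.
    if not date_field_ok:
        raise RuntimeError("dispensing equipment rejected date field '00'")
    rx["status"] = "filled"
    return rx

def pharmacist_validates(rx):
    rx["status"] = "validated against prescription image"
    return rx

def package_label_and_mail(rx):
    rx["status"] = "mailed to veteran"
    return rx

def notify_medical_center(rx):
    rx["status"] = "medical center notified"
    return rx

rx = {"veteran": "TEST", "medication": "example"}
for step in (receive_from_medical_center, dispense_automatically,
             pharmacist_validates, package_label_and_mail,
             notify_medical_center):
    rx = step(rx)
print(rx["status"])  # reaches "medical center notified" when every step succeeds

# A single Y2K failure in the automated dispensing step halts the flow.
try:
    dispense_automatically({"veteran": "TEST"}, date_field_ok=False)
except RuntimeError as err:
    print("prescription delayed:", err)
```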
Given the criticality of these systems to VHA’s ability to fill prescriptions at the turn of the century, we believe VA should reassess this decision, reporting CMOPs as mission-critical to VA top management and OMB to help ensure that necessary attention is paid to and action is taken on them. VA, like other users of pharmaceutical and medical-surgical products, needs to know whether it will have a sufficient supply of these items for its customers. Therefore, it has taken a leadership role in the federal government in determining whether manufacturers supplying these products to VHA are Y2K-ready. This information is essential to VHA’s medical facilities and CMOPs because of their “just-in-time” inventory policy. Accordingly, they must know whether their manufacturers’ processes, which are highly automated, are at risk, as well as whether the rest of the supply chain will function properly. To determine the Y2K readiness of their suppliers, on January 8, 1999, VA’s National Acquisition Center (NAC) sent a survey to 384 pharmaceutical firms and 459 medical-surgical firms with which it does business. The survey contained questions on the firms’ overall Y2K status and inquired about actions taken to assess, inventory, and plan for any perceived impact that the century turnover would have on their ability to operate at normal levels. In addition, the firms were asked to provide status information on progress made to become Y2K compliant and a reliable estimated date when compliance will be achieved for business processes such as (1) ordering and receipt of raw materials, (2) mixing and processing product, (3) completing final product processing, (4) packaging and labeling product, and (5) distributing finished product to distributors/wholesalers and end customers. According to NAC officials, of the 455 firms that responded to the survey as of March 31, 1999, about 55 percent completed all or part of the survey. The remainder provided general information on their Y2K readiness status or literature on their efforts. As shown in table 4, more than half of the pharmaceutical firms surveyed responded (52 percent), with just less than one-third (32 percent) of those respondents reporting that they are compliant. Among the pharmaceutical firms that had not responded as of March 31, however, were two of VA’s five largest suppliers. The three large pharmaceutical suppliers that did respond provided general information on their Y2K readiness status, rather than answering the survey, and estimated that they will be compliant by June 30, 1999. Table 4 also shows that 54 percent of the medical-surgical firms surveyed responded, with about two-thirds of them (166) reporting that they are Y2K compliant. All five of VA’s largest medical-surgical suppliers have responded. Specifically, two reported being compliant, two reported they would be compliant by June 30, 1999, and the remaining supplier did not report an expected compliance date. On March 17, 1999, NAC sent a second letter to its pharmaceutical and medical-surgical firms, informing them of VA’s plans to make Y2K readiness information previously provided to VA available to the public through a web site (www.va.gov/oa&mm/nac/y2k). VA made the survey results available on its web site on April 13, 1999. The letter also requested that manufacturers that had not previously responded provide information on their readiness. NAC’s Executive Director said that he would personally contact any major VA supplier that does not respond.
On a broader level, VHA has taken a leadership role in obtaining and sharing information on the Y2K readiness of the pharmaceutical industry. Specifically, VHA chairs the Year 2000 Pharmaceuticals Acquisitions and Distributions Subcommittee, which reports to the Chair of the President’s Council on Year 2000 Conversion. The purpose of this subcommittee is to bring together federal and pharmaceutical representatives to address issues concerning supply and distribution as they relate to the year 2000. The subcommittee consists of FDA, federal health care providers, and industry trade associations such as the Pharmaceutical Research and Manufacturers of America (PhRMA), the National Association of Chain Drug Stores, and the National Wholesale Druggists’ Association. Several of these trade associations have surveyed their members on their Y2K readiness and made the results available to the public. However, the information is not manufacturer-specific or as detailed as VHA's survey results. FDA has oversight and regulatory responsibility for pharmaceutical and biological products to ensure that they are safe and effective for public use. Because of its concern about the Y2K impact on manufacturers of these products, FDA has taken several actions to raise the Y2K awareness of the pharmaceutical and biological products industries. In addition, it is considering conducting a survey to determine the industry’s Y2K readiness. One of FDA’s actions to raise industry awareness was the January 1998 issuance of industry guidance by the Center for Biologics Evaluation and Research (CBER) on the Y2K impact of computer systems and software applications used in the manufacture of blood products. In addition, as shown in table 5, FDA has issued several letters to pharmaceutical and biological trade associations and sole-source drug manufacturers. Further, on February 11, 1999, FDA’s director of emergency and investigation operations sent a memorandum on FDA’s interim inspection policy for the Y2K problem to the directors of FDA’s investigations branch. The policy emphasizes FDA’s Y2K awareness efforts for manufacturers. It states that FDA inspectors are to (1) inform the firm of FDA’s Y2K web page (URL http://www.fda.gov/cdrh/yr2000/year2000.html), (2) provide the firm with copies of the appropriate FDA Y2K awareness letter, (3) explain that Y2K problems could potentially affect aspects of the firm’s operations, including some areas not regulated by FDA, and that FDA anticipates that firms will take prudent steps to ensure that they are not adversely affected by Y2K, and (4) provide firms with a copy of FDA’s compliance policy guide “Year 2000 (Y2K) Computer Problems.” In addition, FDA and PhRMA jointly held a government/industry forum on the Y2K preparedness of the pharmaceutical and biotech industries on February 22, 1999. The objectives of this forum were to (1) share information on Y2K programs conducted by health care providers, pharmaceutical companies, FDA, and other federal agencies, (2) provide a vehicle for networking, and (3) raise awareness. On March 29, 1999, FDA revised its February 11, 1999, interim inspection policy. The revision states that field inspectors are now to inquire about manufacturers’ efforts to ensure that their computer-controlled or date-sensitive manufacturing processes and distribution systems are Y2K compliant.
Inspectors are to include this information in their reports, along with a determination of activities that firms have completed or started to ensure that they will be Y2K compliant. Further, FDA inspectors may review documentation in cases in which firms have made changes to their computerized production or manufacturing control systems to address Y2K problems. The purpose of this review is to ensure that the changes were made in accordance with the firms’ procedures and applicable regulations. If inspectors determine that a firm has not taken steps to ensure Y2K compliance, they are to notify their district managers and the responsible FDA center. FDA’s interim policy describes steps inspectors are to take in reviewing manufacturers’ Y2K compliance. However, FDA stated that the primary focus of its inspections for the remainder of 1999 will be to ensure that products sold in the United States are safe and effective for public use and comply with federal statutes and regulations, including “good manufacturing practice” (GMP). FDA officials explained that the agency does not have sufficient resources to perform both regulatory oversight of the manufacturers and in-depth evaluations of firms’ Y2K compliance activities. Nevertheless, according to the March 29, 1999, memorandum, field inspectors are to note any concerns they may have with a firm’s Y2K readiness in the administrative remarks section of their inspection reports. These reports are to be reviewed by FDA district managers. If the Y2K concern appears to present a serious problem to a firm’s ability to produce safe, effective medication, the district manager can discuss this issue with FDA’s Office of Regulatory Affairs and determine a course of action. However, FDA officials have stressed that the agency cannot take any regulatory action toward the firm until a Y2K-related problem affects a pharmaceutical or biological product. Like VHA, FDA is interested in the impact of Y2K readiness of pharmaceutical and biological products on the availability of products for health care facilities and individual patients. FDA’s Acting Deputy Commissioner for Policy informed us on March 24, 1999, that the agency is considering surveying pharmaceutical and biological products manufacturers, distributors, product repackagers, and others in the drug dispensing chain, on their Y2K readiness and contingency planning. In anticipation of a possible survey, the agency has published a notice in the March 22, 1999, Federal Register regarding this matter. The Acting Deputy Commissioner said that potential survey questions on contingency planning would include steps the manufacturers are taking to ensure an adequate supply of bulk manufacturing materials from overseas suppliers. This is a key issue because, as we reported in March 1998, according to FDA, as much as 80 percent of the bulk pharmaceutical chemicals used by U.S. manufacturers to produce prescription drugs is imported. In summary, VBA and VHA continue to make progress in preparing their mission-critical systems for the year 2000. However, key actions remain to be taken in the areas of mission-critical systems testing, VHA facility systems compliance, and CMOP systems compliance. We also reiterate the need for VHA and FDA to take prudent steps to ensure that the test results of critical care/life support biomedical equipment are obtained and reviewed. Finally, VHA needs information on the Y2K readiness of specific pharmaceutical and medical-surgical manufacturers.
Until this information is obtained and publicized, VHA medical facilities and veterans will remain in doubt as to whether an adequate supply of pharmaceutical and biological products will be available. FDA and the pharmaceutical and biological trade associations can play key roles in helping VHA obtain this information and publicize the results in a single data clearinghouse. In carrying out this assignment, we reviewed and analyzed VA's Y2K documents and plans, comparing them against our guidance on Y2K activities. We also reviewed and analyzed FDA documentation relating to its Y2K efforts on biomedical devices and pharmaceutical manufacturers. In addition, we visited selected VHA medical centers, VA data centers, and VHA consolidated mail outpatient pharmacies to discuss their Y2K activities, and interviewed VA and FDA officials on those activities. We also interviewed officials of the Emergency Care Research Institute regarding their statements on biomedical equipment testing. Finally, we interviewed selected private hospital officials about their Y2K actions and pharmaceutical trade associations on their Y2K readiness surveys of pharmaceutical manufacturers. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time.
Pursuant to a congressional request, GAO discussed the Department of Veterans Affairs (VA) year 2000 readiness, focusing on: (1) VA's ability to deliver benefits and health care services through the turn of the century; and (2) the readiness of automated systems that support such delivery, the compliance status of biomedical equipment used in patient care, and the year 2000 readiness of the pharmaceutical and medical-surgical manufacturers upon which VA relies. GAO noted that: (1) VA continues to make progress in its year 2000 readiness; (2) however, key actions remain to be performed; (3) for example, the Veterans Benefits Administration and Veterans Health Administration (VHA) have not yet completed testing of their mission-critical systems to ensure that these systems can reliably accept future dates--such as January 1, 2000; (4) also, VHA has not completed assessments for its facility systems, which can be essential to ensuring continuing health care; (5) in addition, neither VA nor the Food and Drug Administration has implemented GAO's prior recommendation to review the test results for biomedical equipment used in critical care/life support environments; (6) further, VHA's pharmaceutical operations are at risk because the automated systems supporting its consolidated mail outpatient pharmacies are not year 2000 compliant; (7) VHA does not know whether its medical facilities will have a sufficient supply of pharmaceutical and medical-surgical products on hand, because it does not have complete information on the year 2000 readiness of these manufacturers; and (8) it is critical that these concerns be addressed if VA is to continue to reliably deliver benefits and health care.
In 1986, the Congress called for the establishment of a joint service special operations capability under a single command. In April 1987, the Secretary of Defense established the Special Operations Command with the mission to provide trained and combat-ready special operations forces to DOD’s geographic combatant commands. Section 167(e) of Title 10, U.S. Code directs that the Commander of the Special Operations Command be responsible for and have the authority to conduct all affairs of such command related to special operations activities. Under this section, the Commander is also responsible for and has the authority to conduct certain functions relating to special operations activities whether or not they relate to the Special Operations Command, including: preparing and submitting to the Secretary of Defense program recommendations and budget proposals for special operations forces and for other forces assigned to the Special Operations Command; exercising authority, direction, and control over the expenditure of funds; training assigned forces; and monitoring the promotions, assignments, retention, training, and professional military education of special operations forces officers. In addition, Section 167 directs the Special Operations Command to be responsible for the following activities as they relate to special operations: (1) direct action, (2) strategic reconnaissance, (3) unconventional warfare, (4) foreign internal defense, (5) civil affairs, (6) psychological operations, (7) counterterrorism, (8) humanitarian assistance, (9) theater search and rescue, and (10) other activities such as may be specified by the President or the Secretary of Defense. Appendix II defines these activities assigned to the Special Operations Command. DOD has also assigned additional activities to the Special Operations Command. Over the past 3 years, DOD has expanded the role of the Special Operations Command to include responsibility for planning and leading the department’s efforts in the Global War on Terrorism. In addition to training, organizing, equipping, and deploying combat-ready special operations forces to the geographic combatant commanders, the Command has the mission to lead, plan, synchronize, and, as directed, execute global operations against terrorist networks. The specific responsibilities assigned to the Special Operations Command include: integrating DOD strategy, plans, intelligence priorities, and operations against terrorist networks designated by the Secretary of Defense; planning campaigns against designated terrorist networks; prioritizing and synchronizing theater security cooperation activities, deployments, and capabilities that support campaigns against designated terrorist networks in coordination with the geographic combatant commanders; exercising command and control of operations in support of selected campaigns, as directed; and providing military representation to U.S. national and international agencies for matters related to U.S. and multinational campaigns against designated terrorist networks, as directed by the Secretary of Defense. In addition, the National Military Strategic Plan for the War on Terrorism establishes the approach DOD will take in fulfilling its role within the larger national strategy for combating terrorism. The strategy provides guidance on the department’s military objectives and their relative priority in the allocation of resources. 
In addition, this strategy implements the designation of the Special Operations Command as the supported combatant command for planning, synchronizing, and, as directed, executing global operations against terrorist networks. The Special Operations Command has received considerable increases in funding to meet its expanded responsibilities in the Global War on Terrorism. Specifically, funding for the Command has increased from more than $3.8 billion in fiscal year 2001 to more than $6.4 billion in fiscal year 2005. In addition, the Command received more than $5 billion in supplemental funds from fiscal year 2001 through fiscal year 2005. During this time, funding for military personnel costs for the Special Operations Command increased by more than $800 million, representing a 53 percent increase. DOD plans further increases in funding for the Command. The President’s fiscal year 2007 budget request for the Special Operations Command is $8 billion, and the department plans additional increases for the Command through fiscal year 2011. The Special Operations Command is composed of special operations forces from each of the military services. In fiscal year 2005, military personnel authorizations for special operations forces totaled more than 30,000 for the Army, 11,501 for the Air Force, 6,255 for the Navy, and 79 for the Marine Corps. Roughly one-third of special operations forces military personnel were in DOD’s reserve components, including the Army, Navy, and Air Force Reserve, and the Army and Air National Guard. Figure 1 provides a summary of DOD’s special operations forces military authorizations in the active component and reserve component. Special operations forces are organized into several types of units. For example, Army special operations forces are organized into Special Forces, Rangers, Aviation, Civil Affairs, Psychological Operations, and support units. Air Force special operations forces are organized into fixed and rotary wing aviation squadrons, special tactics squadrons, a combat aviation advisor squadron, and an unmanned aerial vehicle squadron. Naval Special Warfare forces include SEAL Teams, SEAL Delivery Vehicle Teams, and Special Boat Teams. When fully operational, Marine Corps special operations forces will include foreign military training units and marine special operations companies. Table 1 provides an overview and description of DOD’s special operations forces. Special operations forces personnel possess highly specialized skill sets including cultural and regional awareness. Duty in special operations is undertaken on a voluntary basis, and many personnel volunteering for special operations, particularly those in Army Special Forces and Air Force flight crews, have already served for some time in the military before becoming qualified for special operations forces. In order to become qualified, military personnel must complete a rigorous assessment, selection, and initial training process that, on average, takes between 12 and 24 months. This difficult training regimen causes high attrition; often more than 70 percent of those who start special operations training do not finish. In general, servicemembers who are unable to complete the special operations training return to their previously held specialty or are retrained into another specialty, depending on the needs of their military service. The Special Operations Command’s Army, Air Force, and Navy service components have schools to train and develop special operations forces. For example: The U.S.
Army Special Operations Command, located at Ft. Bragg, North Carolina, operates the John F. Kennedy Special Warfare Center and School. This school assesses, selects, and trains Special Forces soldiers, and trains civil affairs and psychological operations soldiers. In addition, the school provides advanced special operations training courses. The Air Force Special Operations Command, located at Hurlburt Field, Florida, has several subordinate training squadrons that provide initial and advanced training for Air Force rotary and fixed wing special operations pilots, special tactics personnel, combat aviation advisors, and unmanned aerial vehicle personnel. The Naval Special Warfare Command, located on the Naval Amphibious Base Coronado, California, operates the Naval Special Warfare Center. This school trains SEAL candidates through the Basic Underwater Demolition SEAL course and the SEAL Qualification Course, and trains special warfare combatant crewmen through the Special Warfare Combatant Crewmen course. In addition, the school provides advanced special operations training courses. The Special Operations Command has not yet fully determined all of the personnel requirements needed to meet its expanded mission. While the Command has determined the number of special operations forces personnel who are needed to increase the number of its warfighter units, it has not completed analyses to determine (a) the number of headquarters staff needed to train and equip these additional warfighters or (b) the number of headquarters staff needed to plan and synchronize global actions against terrorist networks—a new mission for the Command. Although the Command’s analyses for these determinations were in progress at the time of our review, DOD has nonetheless planned to increase the number of positions for the Command’s headquarters, and has requested related funds beginning in fiscal year 2007. Several recent DOD studies have concluded that additional special operations forces warfighters are needed in order for the Special Operations Command to achieve the national military objectives in the Global War on Terrorism. A December 2002 report conducted by the Office of the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict found that efforts should be made to expand the size of special operations forces and institute a more sustainable rotational base of forces, while realigning the force to meet current and future challenges. Furthermore, the February 2006 Quadrennial Defense Review Report stated that one of the key programmatic decisions the department proposes to launch in fiscal year 2007 is to increase special operations forces to defeat terrorist networks. The Special Operations Command has determined the number of special operations forces personnel needed to meet increases in its warfighter units. To determine the requirements for special operations forces warfighter units, the Command uses its Joint Mission Analysis process. Based on planning scenarios provided by DOD that special operations forces will be needed to support, the Command determines the minimum number of warfighters necessary to achieve its military objectives with the least amount of risk to mission success. This level of special operations forces is the baseline force used to measure risk, and is the starting point for developing a more attainable force based on fiscal constraints. 
Beginning in fiscal year 2002, DOD increased the number of positions for the Special Operations Command to augment the increase in the number of its warfighter units. Specifically, from fiscal year 2001 through fiscal year 2005, DOD increased the number of military positions for special operations forces by more than 5,000 positions, or about 12 percent. With these increases in military positions, the Special Operations Command has also increased the number of special operations forces units, including Army Civil Affairs and Psychological Operations units. DOD plans to further increase the number of military positions for the Command through fiscal year 2011, and the Command plans to increase other special operations forces units such as Army Special Forces, Navy SEALs, and Air Force unmanned aerial vehicle and intelligence squadrons. The increase in military positions will also support the establishment of a Marine Corps component to the Special Operations Command, which was approved in October 2005. Table 2 provides examples of increases in the number of active duty special operations forces warfighter units from fiscal year 2001 through fiscal year 2011. DOD’s budget request for fiscal year 2007 includes increases in the number of personnel for the Special Operations Command’s headquarters, even though the Command had not completed studies for headquarters’ personnel requirements in two key areas. First, the Commander of the Special Operations Command is responsible for training assigned special operations forces, and developing and acquiring special operations-peculiar equipment. Accordingly, the Command believes that it has a commensurate need for additional headquarters staff to perform these responsibilities to support the increased number of warfighters necessary to win the Global War on Terrorism. Second, DOD’s decision to expand the mission of the Special Operations Command calls for the Command to be responsible for planning and synchronizing global actions against terrorist networks. The Command further believes that it needs additional headquarters personnel to fulfill this responsibility. The Special Operations Command determines personnel requirements for its headquarters by conducting formal personnel studies. These studies are directed and approved by the Special Operations Command’s leadership. The study teams conduct a variety of analyses to determine personnel requirements and interview individuals within the reviewed organization to determine the tasks they perform and the level of effort necessary to fulfill the workload requirements. The studies are used to validate the personnel requirements and support data-based decisions for allocating additional resources during the Special Operations Command’s planning, programming, and budgeting processes. The Command is currently conducting studies to determine the number of military and civilian personnel who are needed at its headquarters to meet the Command’s expanded responsibilities. Although these studies were in progress at the time of our review, DOD has already made the decision to increase the number of military and civilian positions for the Command’s headquarters, beginning with its fiscal year 2007 budget request. According to currently approved plans, DOD will increase the number of military and civilian positions for the Special Operations Command headquarters by more than 75 percent between fiscal years 2007 and 2011.
These increases include more than 700 additional positions for the Command’s Center for Special Operations, which combines the intelligence, operations, and planning functions at the headquarters to plan and direct the Global War on Terrorism. However, because the Command’s internal analyses of personnel requirements were ongoing at the time of our review, the intended increase is not based on a comprehensive analysis of those requirements. Our prior work has shown that strategic workforce planning addresses two critical needs for an organization. First, strategic workforce planning aligns an organization’s human capital program with its current and emerging mission and programmatic goals. Second, such planning develops long-term strategies for acquiring, developing, and retaining the staff needed to achieve programmatic goals. A key principle in strategic workforce planning calls for determining the critical skills and competencies that will be needed to achieve current and future programmatic results. However, until the Special Operations Command fully completes its analyses of the personnel requirements needed to carry out its Title 10 responsibilities and its expanded mission, it cannot tell the Secretary of Defense and the Congress whether currently planned growth in the number of personnel for the Command’s headquarters will meet, exceed, or fall short of the requirements needed to address the Command’s expanded mission. The military services and the Special Operations Command have made progress since fiscal year 2000 in recruiting, training, and retaining special operations forces personnel; however, the military services and the Special Operations Command must overcome persistently low personnel inventory levels and, in some cases, insufficient numbers of newly trained special operations forces personnel to meet DOD’s plan to increase the number of special operations forces personnel through fiscal year 2011. In addition, the Special Operations Command does not have complete information from its service components on human capital challenges, including low personnel inventory levels and training limitations, and planned corrective actions, which it needs to evaluate the success of its service components’ human capital approaches. The military services and the Special Operations Command have taken measures to recruit and train greater numbers of special operations forces personnel. In addition, DOD has implemented a set of initiatives intended to retain greater numbers of experienced special operations forces personnel. The Army and Navy have increased the recruiting goals for several of their special operations forces occupational specialties. These goals are set by the military services to determine the number of accessions, or new recruits, who will enter training each year. From fiscal year 2000 to fiscal year 2005, the Army increased the recruiting goal for active duty enlisted Special Forces soldiers by 72 percent, or 1,300 recruits. Similarly, the Navy increased its annual goal for enlisted SEAL recruits from 900 in fiscal year 2004 to 1,100 in fiscal year 2005. In addition, the Navy established an annual goal for enlisted special warfare combatant crewman recruits for the first time in fiscal year 2005. To meet these recruiting goals, the military services have offered enlistment bonuses to attract a sufficient number of new recruits.
Collectively, the military services paid more than $28 million in these bonuses during fiscal year 2005 to enlist servicemembers in their special operations forces occupational specialties. Beginning in fiscal year 2003, the Army offered these bonuses to its initial accession Special Forces recruits, and in fiscal year 2005 it paid up to $20,000 per soldier. Similarly, in fiscal year 2005, the Air Force offered enlistment bonuses of up to $10,000 to recruits in the combat controller and pararescue occupational specialties. In fiscal year 2005, the Navy paid enlistment bonuses for enlisted SEAL and special warfare combatant crewman recruits up to a maximum of $15,000. The Army met or exceeded its recruiting goals for active duty enlisted Special Forces soldiers in 5 out of the 6 years between fiscal years 2000 and 2005. From fiscal year 2000 through fiscal year 2005, the Air Force increased the number of enlisted airmen recruits for the combat controller and pararescue occupational specialties by about 400 percent and 60 percent, respectively. In fiscal year 2005, the Navy exceeded its recruiting goal for enlisted special warfare combatant crewmen. However, while the Navy met its recruiting goal for enlisted SEALs for fiscal year 2004, it met only 80 percent of that goal in fiscal year 2005. The Special Operations Command and the service components have taken several actions to train greater numbers of special operations forces recruits. For instance, the Command and the service components have increased the number of instructors at several special operations forces schools to produce a larger number of newly trained personnel, with additional increases in the number of instructors planned through fiscal year 2011. The U.S. Army Special Operations Command, for example, hired 45 additional civilian instructors in fiscal year 2004 as part of its Institutional Training Expansion program, and plans to add more than 300 additional civilian instructors through fiscal year 2011. Similarly, beginning in fiscal year 2006, the Naval Special Warfare Command plans to add 145 military and civilian instructors through fiscal year 2008. The Special Operations Command’s service components have also expanded the capacity of some schools to train more students and have reorganized some of their curricula so that their recruits move through the training programs more efficiently. Beginning in fiscal year 2006, the U.S. Army Special Operations Command increased the frequency of a phase of its Special Forces qualification training that is focused on core battle skills. The U.S. Army Special Operations Command plans to increase the frequency of this phase from starting four courses per year to starting a new course approximately every 2 weeks. This increase in frequency will expand the capacity of the training course from 1,800 student spaces to about 2,300 per year. The Air Force Special Operations Command established a training program in fiscal year 2001 to provide advanced skills training for combat controllers. In addition, the training program was intended to provide standardized training for special operations pararescue personnel, special operations combat weathermen, and special tactics officers. Since its inception, the program has increased the graduation rate of combat controllers and has provided special operations pararescue airmen, combat weathermen, and special tactics officers with advanced special operations training.
In fiscal year 2005, the Naval Special Warfare Command reorganized the training course for SEALs with the intent of reducing student attrition. Specifically, the Naval Special Warfare Command eliminated the class administered during the winter months, which historically had the highest attrition, while increasing the class sizes for the remaining classes. In addition, the Naval Special Warfare Command has begun providing focused training for those students who have completed the most physically challenging portion of the training but who require additional practice in specific skills, rather than requiring those students to repeat the training from the beginning. In some cases, the Special Operations Command and the service components have increased the number of newly trained special operations forces personnel. From fiscal year 2000 through fiscal year 2005, for example, the school that trains new Special Forces soldiers increased the number of active duty enlisted graduates by 138 percent, or 458 additional Special Forces soldiers. DOD has also taken action to retain experienced special operations forces personnel in order to meet the planned growth in these forces. According to the Special Operations Command, it cannot accomplish planned growth solely by adding new special operations forces personnel. Rather, the growth must be accomplished by balancing an increase in the number of new personnel with the retention of experienced special operations forces servicemembers. In 2004, DOD authorized a set of financial incentives to retain experienced special operations forces personnel. These incentives include reenlistment bonuses of up to $150,000 for personnel with 19 or more years of experience in several special operations forces occupational specialties who reenlist for an additional 6 years. The military services spent more than $41 million in fiscal year 2005 to retain 688 special operations forces servicemembers with this reenlistment bonus, according to data provided by the Office of the Secretary of Defense for Personnel and Readiness. Additionally, DOD authorized increases in special pays for warfighters assigned to the Special Operations Command and for some special operations forces personnel who remain on active duty with more than 25 years of experience, as well as bonuses for new Special Forces and Naval Special Warfare warrant officers. While the military services and the Special Operations Command have taken steps to increase the number of newly trained special operations forces personnel and to retain their experienced operators, the military services and the Special Operations Command face several human capital challenges in fully meeting planned growth in special operations forces. These challenges include persistently low personnel inventory levels for many special operations forces occupational specialties and insufficient numbers of new graduates in some cases to meet current authorized personnel levels or planned growth targets. We reported in November 2005 that DOD faced significant challenges in recruiting and retaining servicemembers, and that the military services were unable to meet authorized personnel levels for certain occupational specialties, including several special operations forces occupational specialties. At that time, we reported that several of these specialties in the Army, Air Force, Navy, and Marine Corps were underfilled for 5 out of the previous 6 fiscal years.
Such occupational specialties included active duty enlisted Army Special Forces assistant operations and intelligence sergeants and Special Forces medical sergeants, enlisted Navy SEALs and special warfare combatant crewmen, and enlisted Air Force combat controllers and pararescue personnel. According to DOD officials, the special operations forces occupational specialties were underfilled for several reasons, including extensive training or qualification requirements and recent increases in the number of authorized personnel positions. Our analysis of the personnel inventory levels for the special operations forces active component occupational specialties identified by the Special Operations Command’s Directive 600-7 shows that hundreds of authorized positions for special operations forces personnel within each of the Command’s service components have been persistently unfilled. As shown in table 3, from fiscal year 2000 through fiscal year 2005, 74 percent to 87 percent of the active component occupational specialties in this directive were underfilled each year, by amounts ranging from less than 5 percent to more than 86 percent. In fiscal year 2005, more than 50 percent of these specialties were underfilled by at least 10 percent. For example, personnel authorizations were underfilled by 58 percent for active duty enlisted Special Forces assistant operations and intelligence sergeants, by 27 percent for active duty enlisted pararescue airmen, and by 14 percent for active duty enlisted SEALs. Given the military services’ inability to fill current and past positions in their special operations forces specialties, it may be increasingly difficult to meet DOD’s plan to increase the number of special operations forces through fiscal year 2011. During our review, the Special Operations Command’s service components provided data indicating that, in several cases, the measures the military services and the Special Operations Command are taking to recruit and train greater numbers of special operations forces personnel may enable them to meet the increases in the numbers of authorized positions. However, the data also show that some of the special operations forces specialties that are currently underfilled are likely to remain so after additional authorizations have been added. For example, Navy officials told us that although additional authorizations for enlisted SEALs will be added by fiscal year 2008, the Navy will not be able to fill all of these positions until 2011 at the earliest. Similarly, the Air Force projects that the additional active duty enlisted combat controller positions that have been added in fiscal year 2006 will remain underfilled through at least fiscal year 2008. Not only do current low personnel inventory levels suggest that the military services and the Special Operations Command will be challenged to meet planned growth goals, but officials told us that low personnel levels in certain occupational specialties have created challenges at the unit level as well. For example, officials from the U.S. Army Special Operations Command told us that low personnel inventories of Special Forces warrant officers and medical sergeants have resulted in their having fewer numbers of these personnel per unit, which has limited the manner in which some Special Forces units have deployed on the battlefield.
Similarly, the low personnel inventory levels in the Air Force combat controller and pararescue occupational specialties have resulted in the Air Force’s special tactics squadrons being underfilled. According to Air Force officials, the low personnel inventory levels in these units have increased the frequency of personnel deployments, which has had an impact on the amount of time available to conduct training and has adversely affected retention. One reason that personnel inventory levels have been low in several special operations forces occupational specialties is that the schools that train new special operations forces personnel have not, in some cases, graduated a sufficient number of these personnel to meet authorized personnel levels. Furthermore, the number of newly trained personnel in several special operations forces specialties has been insufficient to meet planned growth targets. For example: The U.S. Army Special Operations Command is not graduating enough new pilots for the 160th Special Operations Aviation Regiment to meet future growth targets. In fiscal year 2005, the Command graduated only 58 percent of the MH-47 Chinook helicopter pilots and 47 percent of the MH-60 Blackhawk helicopter pilots that the Army determined were needed to meet planned growth for this unit. According to Army officials, the capacity of the school that trains new pilots has been insufficient to meet the requirements for future personnel levels. Officials stated that the Special Operations Command has provided additional funding beginning in fiscal year 2006 for the school to hire a greater number of instructors, which will increase the capacity of the school to train these pilots. The Air Force has not produced a sufficient number of active duty enlisted special tactics personnel, such as combat controllers and pararescue personnel. For example, from fiscal year 2000 through fiscal year 2005, the Air Force trained only 53 percent of the active duty enlisted combat controllers and 40 percent of the active duty enlisted pararescue airmen needed to meet authorized personnel levels. Air Force officials stated that several constraints have limited the number of students who could attend the schools that train these personnel. Officials explained that the Air Force has taken steps to increase the number of personnel that will graduate from its special tactics training programs. For example, in August 2005, the Air Force began construction on a new classroom and aquatic facility to train greater numbers of combat controllers, and it recently opened a new combat dive course to meet both combat controller and pararescue training requirements. Such measures are intended to reduce the constraints on the ability of the Air Force to train new special tactics personnel. From fiscal year 2000 through fiscal year 2005, the Naval Special Warfare Command did not produce an adequate number of enlisted SEALs to sustain authorized personnel levels. While the Naval Special Warfare Command needed to graduate 200 new enlisted SEALs each year to meet authorized personnel levels, only about 150 new enlisted personnel graduated each year during this period. In addition, Navy officials stated that to meet the planned growth for SEALs, the Naval Special Warfare Command must produce 250 enlisted SEALs annually. According to Navy officials, the Navy has recruited an insufficient number of enlisted candidates who could successfully pass the physical test to qualify for SEAL training.
As a result, the Navy has not filled the SEAL school to capacity each year, and this in turn has resulted in insufficient numbers of graduates to fill the requirements for enlisted SEALs. According to officials, the Navy began to implement several measures in January 2006 that, in part, are intended to increase the quantity and quality of enlisted recruits entering SEAL training, thereby improving the chances that more of these recruits will successfully graduate from the training. The Special Operations Command does not have complete information, including measurable performance objectives and goals, to evaluate the progress that the Command’s service components have made in meeting the human capital challenges that could impede the Command’s ability to achieve planned growth. The Special Operations Command has an established program through which it monitors the status of its personnel. The goal of the program is to ensure there are sufficient numbers of special operations forces personnel to meet current and future mission requirements. The implementing directive requires the special operations component commanders to provide the Special Operations Command with annual reports that contain data on several topics related to the human capital management of special operations forces, including personnel inventory levels, accession plans, reenlistments and loss management programs, and military education opportunities for special operations forces officers. Command officials told us they use these reports to monitor the status of special operations forces. Our analysis of the service components’ annual reports for fiscal years 2000 through 2005 shows that the reports provide some of the information required by the directive, such as information on personnel inventory levels and professional military education opportunities. However, the reports have not addressed several key requirements in the directive that would shed light on the service components’ progress in meeting the planned growth targets. For example, the service components are required to provide accession plans for several of the special operations occupational specialties, including Army Special Forces, Navy SEALs, and Air Force special tactics personnel. The accession plans should provide detailed information on the number of new accessions for initial training and projections for the following year. Our review of the annual reports shows that since fiscal year 2003, none of the service components’ submissions has contained this information. Additionally, the directive requires the service components to provide detailed analyses to support each category discussed in the annual report, including trends developed over recent years and predictions for the future. Further, the annual reports should fully discuss any concerns by describing the concern in context, identifying past actions taken to resolve it, and presenting recommendations to address it in the future. However, our analysis of the components’ annual submissions shows that the reports have often failed to provide detailed analyses of their human capital challenges and the corrective actions that should be taken to address these challenges. For instance: The U.S. Army Special Operations Command’s annual report for fiscal year 2005 did not identify a 79 percent personnel fill rate for the Special Forces medical sergeant occupational specialty as a challenge.
However, officials with whom we spoke indicated that insufficient numbers of these personnel have limited both the operational capabilities of some deployed Special Forces units and the ability to provide medical life-support to personnel in these units. In other cases, the U.S. Army Special Operations Command’s annual reports identified challenges but did not propose corrective actions. For example, the report for fiscal year 2005 states a concern that, because the 160th Special Operations Aviation Regiment had insufficient training resources, it produced only 50 percent of the requirement for MH-47 Chinook helicopter pilots. However, the report did not discuss in detail what actions should be taken to address this challenge. Since its fiscal year 2000 annual report, the Air Force Special Operations Command has identified a concern that the experience level of its rated pilots has been decreasing. As a result, there has been an insufficient number of aircraft commanders and instructor pilots within several of the special operations squadrons. However, the Air Force Special Operations Command’s annual reports do not contain any information to substantiate the specific decrease in the number of experienced pilots in its special operations forces units. Moreover, the reports do not specify how the actions taken to address the issue have affected the level of experience of pilots, or what further actions are needed to address this challenge. In addition, although the combat controller and pararescue occupational specialties have been underfilled since at least fiscal year 2000, the Air Force’s annual reports have not provided detailed information on the specific actions that should be taken to overcome the challenges of low personnel inventory levels in these specialties. The Naval Special Warfare Command’s annual reports have consistently identified a critical challenge regarding the insufficient number of new enlisted Navy SEALs who have graduated from the school each year. Further, the reports provide some information on the actions taken in the previous fiscal year to address this concern. However, the annual reports have not included detailed information on the Naval Special Warfare Command’s accession plans, or the effects that recruit shortfalls have had on personnel inventory levels, which are specifically required by the directive. Furthermore, the service components’ annual reports lack performance objectives and goals that link key personnel data with future growth plans and assessments of personnel needs. Our prior work has shown that high-performing organizations use relevant and reliable data to determine performance objectives and goals that enable them to evaluate the success of their human capital approaches. These organizations identify current and future human capital needs, including the appropriate number of employees, the key competencies and skills mix for mission accomplishment, and the appropriate deployment of staff across the organization, and then create strategies for identifying and filling gaps. However, our analysis of the Command’s Directive 600-7 shows that the requirements for the annual reports do not include instructions for the service components to develop performance objectives, goals, and measures of progress for achieving planned growth. As an example, the Command requires the service components to provide personnel reenlistment data within these reports.
Specifically, the Command requires information and analysis on the number of eligible special operations forces personnel who chose to reenlist and comparative information on the number of personnel reenlistments in each military service. However, the service components’ annual reports do not clearly link the number of experienced warfighters who have been retained with the number who are needed to meet planned growth. This is particularly important because the parent military services have not set goals for the reenlistments of their special operations forces personnel in a way that is clearly linked with the planned growth in these forces. Instead, each of the active component military services tracks retention according to years of service and whether a servicemember is on a first, second, or subsequent enlistment. Moreover, the Special Operations Command has not established specific performance objectives or goals for the special operations forces retention initiative that DOD authorized in December 2004. As a result, it is difficult to assess the progress DOD has made with this initiative in retaining a sufficient number of experienced personnel to meet planned growth—a key rationale for the initiative. Many of the special operations forces servicemembers who were eligible for the bonuses offered as part of this initiative did reenlist, as shown by information provided to us. However, Special Operations Command officials were unable to provide specific goals to measure the effectiveness of the retention initiatives because they lacked clear performance objectives that are linked to comprehensive analyses of personnel needs. Special Operations Command officials stated that the Command had not fully enforced the reporting requirements in its directive because the directive is outdated and some of the information required in the annual reports is less relevant, given the Command’s expanded role in the Global War on Terrorism. However, the Command most recently updated this directive in April 2003, and at that time, it maintained the annual reporting requirements. In addition, officials stated that data and information on the status of special operations forces personnel are available to the Special Operations Command through other processes, including monthly and quarterly readiness reports, monthly personnel status summaries, and annual conferences hosted by the Command to discuss personnel issues. The Defense Manpower Data Center also provides the Command with analyses on the trends in the continuation rates of special operations forces personnel. While these processes may provide information on the status of special operations forces, they do not provide detailed analyses and discussions of concerns and corrective actions that are required by the Command’s directive. In addition, the annual reports are a means by which the Command has provided information to stakeholders within the department—including the Office of the Secretary of Defense and the military services—on the status of special operations forces. Without complete information on human capital challenges, the Special Operations Command will be unable to determine whether the service components’ human capital management approaches, including their recruiting, training, and retention strategies, will be effective in meeting the planned growth targets.
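The kind of linkage missing from the components’ reports can be illustrated with a simple projection. The sketch below is purely illustrative: the specialty, the numbers, and the field names are hypothetical and do not come from Special Operations Command data. It simply shows how authorized positions at the growth target, current inventory, projected training output, and projected losses could be combined into a single gap-to-growth measure of the sort the annual reports do not provide.

```python
# Illustrative sketch only: hypothetical figures showing how personnel data
# could be linked to a planned-growth target. None of these numbers or names
# come from Special Operations Command reporting.
from dataclasses import dataclass


@dataclass
class SpecialtyStatus:
    name: str
    authorized_at_target: int   # planned authorizations at the growth target year
    current_inventory: int      # personnel currently serving in the specialty
    annual_graduates: int       # projected new graduates per year
    annual_losses: int          # projected separations and retirements per year

    def projected_inventory(self, years: int) -> int:
        """Project inventory forward assuming constant annual gains and losses."""
        return self.current_inventory + years * (self.annual_graduates - self.annual_losses)

    def gap_to_growth(self, years: int) -> int:
        """Positive result means a shortfall against the planned-growth authorization."""
        return self.authorized_at_target - self.projected_inventory(years)


# Hypothetical specialty that is roughly 14 percent underfilled today.
example = SpecialtyStatus("enlisted operator (hypothetical)", 2500, 1890, 150, 120)

for years in (1, 3, 6):
    print(f"Year {years}: projected inventory {example.projected_inventory(years)}, "
          f"gap to growth target {example.gap_to_growth(years)}")
```

Even a projection this simple would make explicit whether current accession and retention rates can close the gap by the target year, which is the performance linkage the directive's reporting requirements do not currently call for.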
Since fiscal year 2000, special operations forces have experienced a substantial increase in the deployment of personnel for operations and a simultaneous decrease in the deployment of personnel for training. To its credit, the Special Operations Command has taken action to manage the challenge of increased deployments by establishing a policy intended to maintain the readiness, retention, and training of special operations forces personnel. However, the Command’s service components have not yet consistently or fully implemented this policy. The Special Operations Command Directive 525-1 establishes the Command’s policy to collect and monitor information on the deployments of special operations forces personnel. Accordingly, the Command gathers deployment information on a weekly basis from the service components and the geographic combatant commands. These reports include information on the number of special operations forces personnel and special operations forces units that are deployed around the world. In addition, the components report the type of the deployment, such as deployments for operations or for training. From these weekly updates, the Special Operations Command develops a comprehensive deployed forces report, which is presented to the Commander of the Special Operations Command and included in updates for the Chairman of the Joint Chiefs of Staff. Our review of Special Operations Command data shows that since fiscal year 2000, deployments of special operations forces personnel have substantially increased. Specifically, as shown in figure 2, the average weekly number of deployed special operations forces personnel was 64 percent, or about 3,100 personnel, greater in fiscal year 2005 than in fiscal year 2000. Our analysis also shows that the vast majority of recent deployments outside of the United States were to the Central Command area of responsibility, which accounted for 85 percent of deployed special operations forces in fiscal year 2005. Significantly, more than 99 percent of these deployments supported ongoing combat operations. In contrast, in fiscal year 2000, only 20 percent of special operations forces deployments were to the Central Command. As shown in figure 3, the percentage of special operations forces personnel deployed to the European Command, the Pacific Command, and the Southern Command decreased between fiscal year 2000 and fiscal year 2005. While special operations forces have experienced a substantial increase in deployments for operations, there has been a simultaneous decrease in deployments for training. As shown in table 4, from fiscal year 2000 through fiscal year 2005, the percentage of special operations forces personnel deployed for operations increased, while the percentage of personnel deployed for training decreased. The decrease in deployments for training appears to have had at least two effects. From fiscal year 2000 through fiscal year 2005, for example, the amount of time for which special operations forces deployed for training to maintain proficiency in battle skills decreased by 50 percent. Officials with the Army, Navy, and Air Force service components told us that since many of their units have been deployed to the Central Command area of responsibility, they have had fewer opportunities to conduct proficiency training for all mission tasks. 
As a result, special operations forces units are focusing their training on the tasks that are required for operations in the Central Command and are assuming some risk by not training for other mission tasks. For example, officials with the U.S. Army Special Operations Command told us that specialized training such as military free fall and underwater combat operations has been reduced to a minimum, since these skills are not required to support ongoing operations. Similarly, officials with the Air Force Special Operations Command stated that increased deployments for operations had affected the ability of its air crews and special tactics squadrons to achieve all required mission-essential training. However, officials stated that this has not degraded overall readiness, because not all of these training tasks are currently being performed in the Central Command. In addition, officials stated that if mission priorities were to shift away from the Central Command and different missions needed to be performed, not all of its special operations forces personnel would be required to have achieved those training tasks in order for a mission to be successfully carried out. Additionally, although our analysis shows that special operations forces deployed less frequently for skills proficiency training from fiscal year 2000 through fiscal year 2005, we were told that the amount of training that special operations forces accomplished may not have been greatly affected. In particular, we were told that Army special operations forces units do not necessarily have to deploy in order to accomplish training that can be done at their home station. In addition, because many special operations forces units are deploying for combat operations, they have ample opportunities to maintain proficiency in essential skills. Officials with the U.S. Army Special Operations Command explained that special operations forces no longer train to fight because they are training as they fight. However, not all special operations forces can accomplish training tasks at their home station. According to Naval Special Warfare Command officials, Naval Special Warfare units do not have adequate home station training ranges and are required to deploy in order to achieve most training tasks. Yet, from fiscal year 2000 to fiscal year 2005, the amount of time that Naval Special Warfare personnel deployed for skills proficiency training decreased by more than 30 percent. Special operations forces have also deployed less frequently to train with foreign military forces overseas. As we have previously reported, this type of training is important because it enables special operations forces to practice mission skills such as providing military instruction in a foreign language and maintaining language proficiency and familiarity with local geography and cultures, which are essential in the foreign internal defense and unconventional warfare missions. These deployments of special operations forces to train with the armed forces and other security forces of friendly foreign countries are commonly referred to as joint combined exchange training. Between fiscal year 2000 and fiscal year 2005, however, the amount of time that special operations forces personnel deployed for joint combined exchange training decreased by 53 percent. Our analysis of DOD data reported to the Congress also shows that the participation of special operations forces in joint combined exchange training events has decreased since fiscal year 2000.
As shown in figure 4, from fiscal year 2000 through fiscal year 2005, the number of these events that special operations forces completed decreased by about 50 percent. Further analysis shows that the number of events conducted in most of the geographic combatant command areas of responsibility decreased from fiscal year 2000 through fiscal year 2005. Specifically, joint combined exchange training events conducted in the European Command decreased by about 75 percent, while events conducted in the Southern Command and Pacific Command also decreased during this time. Conversely, the number of such training events conducted in the Central Command increased from 7 exercises in fiscal year 2000 to 14 exercises in fiscal year 2005. The increase in the amount of time that special operations forces have deployed to support operations in the Central Command has, to some extent, resulted in an increase in the number of cancelled joint combined exchange training events. Officials with the Special Operations Command, European Command, Pacific Command, and Southern Command with whom we spoke stated that joint combined exchange training can be cancelled for various reasons, including the availability of funding for the training, the availability of host nation forces, or the operations tempo of U.S. special operations forces. Officials stated, however, that due to the increased requirement for special operations forces deployments to support operations in the Central Command, there has been a corresponding increase in the number of cancelled joint combined exchange training events. Our analysis shows that from fiscal year 2000 through fiscal year 2005, the percentage of cancelled training events due to the operations tempo of special operations forces increased from 0 percent to more than 60 percent. While the primary purpose of joint combined exchange training is to train U.S. forces, this training can also have an ancillary benefit in that it can be used by the geographic combatant commanders and ambassadors to fulfill regional and country engagement objectives. For instance, the geographic combatant commands use joint combined exchange training to help achieve foreign engagement objectives in their designated areas of responsibility. DOD documents regarding the department’s strategy for the Global War on Terrorism identify combined training, such as joint combined exchange training, as an important element to strengthen partner nations’ counterterrorism capabilities. However, with continuing support being required for operations in the Central Command’s area of responsibility, there have been fewer special operations forces available to execute these types of training activities. The Special Operations Command has taken action to manage the challenge of increased personnel deployments. Monitoring the status of personnel deployments has been an area of congressional and DOD concern. The management of personnel tempo is important to the quality of life and retention of military personnel. Section 991 of Title 10 of the U.S. Code states that the deployment (or potential deployment) of a member of the Armed Forces shall be managed. Moreover, DOD has recognized that failure to effectively manage personnel tempo can result in the continued loss of trained personnel, a consequent loss of readiness capability, and an increased recruiting challenge. In addition, we have previously reported that high personnel tempo for special operations forces can affect readiness, retention, and morale. 
In August 2005, the Special Operations Command established a policy intended to maintain the readiness, retention, and training of active duty special operations forces personnel. The policy requires the Command’s active duty personnel to spend at least as much time at their home station as they spend deployed for operations and training. The policy also requires that the Special Operations Command’s service components develop internal tracking mechanisms to ensure that their active duty special operations forces personnel remain within the policy’s deployment requirements. However, the Command’s service components have not consistently or fully implemented the deployment policy. One challenge is that the policy’s guidelines are not clear. Officials with the Command’s service components noted a lack of clear guidance regarding how the components should implement the deployment guidelines, and consequently they were implementing the policy differently from one another. For example, the policy does not identify the length of time for which the components must ensure that personnel remain within the deployment guidelines. In addition, it does not state whether a servicemember must remain at a home station immediately following one deployment for an equal amount of time before the next deployment. Because of the lack of clear guidance, the Special Operations Command’s service components have had to interpret the intent of the policy’s requirements to ensure that their personnel remain in compliance. A second challenge is the difficulty of achieving full implementation. Officials with the Naval Special Warfare Command stated that they have been unable to comply with the deployment guidelines because personnel lack adequate home station training ranges. Specifically, Naval Special Warfare personnel must deploy for both unit training and operations. This combination of deployments has resulted in personnel exceeding the amount of deployed time the policy allows. Naval Special Warfare Command officials indicated that they were working with the Special Operations Command and the Navy to implement the deployment policy. According to Navy officials, the Navy plans to provide the Naval Special Warfare Command with additional funds to improve the home station ranges used to train the SEAL force, which is anticipated to reduce the current pace of operations tempo due to deployments for training. Until then, however, because these personnel must deploy for most unit training, they remain unable to comply with the policy’s requirement. Determining whether special operations forces are meeting the intent of the policy requires the service components to maintain internal tracking systems with complete, valid, and reliable data on their personnel deployments. However, officials with the Command’s Army and Navy components expressed concerns regarding the reliability of the information they use to track the individual deployments of their personnel. While we did not independently validate the reliability of the data on personnel deployments, an official with the U.S. Army Special Operations Command stated that the Army did not have a high level of confidence in the data recorded by its units in the Army’s system on personnel deployments. Officials told us that they are developing a separate internal management tool in order to fully comply with the deployment policy; however, that tool will not be ready until July 2006.
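The ambiguity described above can be made concrete. The following sketch uses entirely hypothetical deployment records and is not the Command’s policy logic; it simply shows how two plausible readings of the equal-time requirement, a cumulative home-to-deployed comparison over a reporting window versus an equal dwell period immediately after each deployment, can reach different conclusions for the same servicemember.

```python
# Illustrative sketch only: hypothetical deployment records showing how two
# readings of a 1:1 home-station-to-deployed requirement can disagree.
from datetime import date

# (start, end) dates for one hypothetical servicemember's deployments in a year.
deployments = [
    (date(2005, 1, 10), date(2005, 4, 10)),   # 90 days deployed
    (date(2005, 5, 1), date(2005, 8, 1)),     # 92 days deployed, after only 21 days at home
]
window_start, window_end = date(2005, 1, 1), date(2005, 12, 31)

deployed_days = sum((end - start).days for start, end in deployments)
home_days = (window_end - window_start).days - deployed_days

# Reading 1: cumulative comparison over the reporting window.
cumulative_ok = home_days >= deployed_days

# Reading 2: an equal dwell period must elapse after each deployment
# before the next deployment begins.
immediate_ok = all(
    (deployments[i + 1][0] - deployments[i][1]).days
    >= (deployments[i][1] - deployments[i][0]).days
    for i in range(len(deployments) - 1)
)

print(f"Cumulative reading: {'within' if cumulative_ok else 'outside'} guidelines "
      f"({home_days} home days vs. {deployed_days} deployed days)")
print(f"Immediate-dwell reading: {'within' if immediate_ok else 'outside'} guidelines")
```

Under the first reading this hypothetical servicemember is within the guidelines; under the second, the back-to-back deployments put the servicemember outside them. Until the policy specifies which reading applies and over what period, the components’ tracking systems can each report compliance while describing very different deployment patterns.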
Naval Special Warfare Command officials told us that comprehensive reporting of personnel tempo information was suspended after the onset of the Global War on Terrorism because the Naval Special Warfare Command could not meet the Navy’s personnel tempo standards, given the increased pace of deployments in support of ongoing operations. As a result, the Naval Special Warfare Command does not have comprehensive and reliable data on Naval Special Warfare personnel deployments. Officials stated that the Naval Special Warfare Command was in the process of reestablishing personnel tempo reporting with a goal of full reporting for all units by the end of April 2006. Without consistent and reliable data, the Special Operations Command does not have the information it needs to effectively manage the personnel deployments of special operations forces, which affects the Command’s ability to maintain the readiness, retention, and training of special operations forces personnel. The decision by DOD to expand the responsibilities of the Special Operations Command in the Global War on Terrorism has created new challenges in determining personnel requirements and in acquiring, training, and equipping a greater number of warfighters to support ongoing military operations. The Congress and DOD have provided resources to enable the Command to augment its personnel. Given the Command’s expanded mission, however, it is critical that the Command complete its analyses of personnel requirements and fully determine the number of personnel, with the right knowledge and skill sets, needed for the Command to meet its new role. Without this information, the Command cannot provide reasonable assurance to the Secretary of Defense and the Congress as to whether the currently planned growth in the number of personnel for the Command will meet, exceed, or fall short of the requirements necessary to carry out its expanded mission. The military services and the Special Operations Command have faced human capital challenges in recruiting, training, and retaining a sufficient number of these forces, and many of these challenges continue. In large part, these challenges are attributable to the rigorous selection and training processes set for these personnel. Nonetheless, we believe the Command would be better able to address these challenges if it had a clearer understanding, linked with appropriate goals and measures, of the progress its service components have made toward planned growth. Furthermore, the Command is attempting to meet its growth goals at a time of heightened personnel deployments. However, the Command is managing these deployments without reliable data. Such information would further enable the Command to meet the full range of its missions while maintaining the readiness, retention, and training of its personnel. We recommend that the Secretary of Defense direct the Commander, U.S. Special Operations Command, to 1. establish specific milestones for completing the Command’s ongoing analyses of personnel requirements and, once completed, make any needed adjustments to the current plans for personnel increases for the Command’s headquarters and related future funding requests; 2. revise the Command’s directive for its program to monitor the status of special operations forces to include performance objectives, goals, and measures of progress for achieving planned growth, and enforce all of the directive’s reporting requirements; and 3.
clarify the methodology that the Command’s service components should use for enforcing the deployment policy, and take steps to ensure that the service components have tracking systems in place that utilize reliable data to meet the requirements of the policy. In written comments on a draft of this report, DOD concurred with one recommendation and partially concurred with our two remaining recommendations. DOD’s comments are included in appendix III. DOD also provided technical comments, which we incorporated into the report, as appropriate. DOD partially concurred with our recommendation to require the Special Operations Command to establish specific milestones for completing its ongoing analyses of personnel requirements and, once completed, make any needed adjustments to the current plans for personnel increases for the Command’s headquarters in related future funding requests. DOD stated that the personnel requirements for the Command’s headquarters are being determined by an extensive study scheduled for completion in March 2007. DOD stated that it will monitor the progress and validate the results of this study, which we believe to be important steps. However, as we noted in this report, DOD has already requested funding to substantially increase the number of military and civilian positions at the Command’s headquarters beginning in fiscal year 2007, without the benefit of the results from the Command’s study of personnel needs. As a result, we would expect DOD to re-evaluate its funding needs upon completion of the Command’s study, and adjust its requests accordingly. DOD concurred with our recommendation to require the Special Operations Command to revise the Command’s directive for its program to monitor the status of special operations forces, to include performance objectives, goals, and measures of progress for achieving planned growth, and enforce all of the directive’s reporting requirements. DOD stated that the Special Operations Command is updating the directive for its program to monitor the status of special operations forces, and that the department and the Command are continuously developing new tools and metrics to more accurately measure the actual health of special operations forces. DOD further stated that it is difficult to compare personnel data across the services because each of the Command’s service components presents data using the metrics of its parent service, adding that it is highly desirable to have each component format its service-derived data in a common database. While we recognize the military services have different metrics, the intent of our recommendation is that the Special Operations Command develop a set of reporting metrics that would give the Command the data it needs to monitor progress in meeting growth goals. Finally, DOD partially concurred with our recommendation to require the Special Operations Command to clarify the methodology that its service components use for enforcing the Command’s deployment policy, and take steps to ensure that the service components have tracking systems in place that utilize reliable data to meet the requirements of the policy. DOD stated that the Special Operations Command leadership and all of its service components have implemented the Command’s deployment policy, which is in compliance with the department’s force deployment rules for Operations Iraqi Freedom and Enduring Freedom. 
In addition, DOD stated that the department will work toward developing a multi-service database and metrics to standardize deployment and other metrics across the joint community to overcome the challenge associated with the fact that each service uses different metrics for calculating deployment time. While we recognize the use of different metrics presents a challenge, our point, as we state in this report, is that the Command’s policy is unclear concerning the length of time for which the components must ensure that personnel remain within the deployment guidelines, and whether a servicemember must remain at a home station immediately following one deployment for an equal amount of time prior to a subsequent deployment. As a result, the Command’s service components have interpreted the intent of the policy’s requirements inconsistently. We continue to believe that additional clarification to the Command’s deployment policy is warranted to assist its service components in ensuring that special operations forces personnel remain in compliance with this policy. We also believe that the planned actions to standardize deployment and other metrics should include establishing procedures for recording reliable and relevant data on personnel deployments since, as we reported, officials with two of the Special Operations Command’s service components did not have confidence in the reliability of the information that was used to track the individual deployments of their special operations forces personnel. Such data are an important tool to enable the Command to maintain the readiness, retention, and training of special operations forces personnel. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to the Secretary of Defense, the Secretary of the Army, the Secretary of the Air Force, the Secretary of the Navy, the Commandant of the Marine Corps, and the Commander, United States Special Operations Command. We will make copies available to others upon request. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To assess the extent to which the Special Operations Command (Command) has identified all of the personnel requirements needed to meet its expanded mission, we identified the Joint Mission Analysis process and the Command’s formal manpower studies as the primary processes in which the Command develops its force structure and personnel requirements. To assess the plans to increase the number of special operations forces units and personnel requirements for the Command’s headquarters, we conducted site visits and interviewed officials involved with determining personnel requirements with the Special Operations Command, and the Army, Navy, and Air Force service components. We also met with Marine Corps officials to discuss plans for growth in Marine Corps special operations forces. We analyzed the plans for growth in these personnel through fiscal year 2011. 
We reviewed Department of Defense (DOD) documents identifying the increases in the Special Operations Command’s military authorizations and funding since fiscal year 2000 and its plans for personnel growth through fiscal year 2011. We reviewed past reports prepared by GAO that discuss effective strategies for workforce planning. However, we were unable to determine whether all of the Special Operations Command’s personnel requirements had been identified because, at the time of our review, the Command had not completed all of its analyses of the personnel requirements needed for its expanded mission responsibilities. To assess the progress the military services and the Special Operations Command have made since fiscal year 2000 in increasing the number of special operations forces personnel, we discussed the processes used by the military services and DOD to recruit, train, and retain these forces with officials from the Office of the Secretary of Defense, the Special Operations Command, and the military services. We focused on these processes for the active components of the military services. To determine what challenges the military services and the Special Operations Command face in meeting future growth, we analyzed personnel inventory levels for special operations forces in the active component military services for fiscal years 2000 through 2005. We collected and analyzed data to determine whether the schools that train new special operations personnel are producing enough newly trained personnel to meet current authorized personnel levels or planned growth targets. We reviewed relevant Special Operations Command directives and analyzed annual reports prepared by the service components to determine the extent to which the information in these reports met reporting requirements. To assess the effect of increased special operations forces deployments, we analyzed deployment data from the Special Operations Command for fiscal years 2000 through 2005. We analyzed the trends in deployments for operations, training, and administrative activities and the trends in deployments by geographic region. We discussed the impact of decreased deployments for training and increased deployments for operations with officials from the military services and the Special Operations Command. We reviewed the Special Operations Command’s policy to manage special operations forces personnel deployments and conducted interviews with component command officials to determine their ability to implement and fully comply with this policy. We reviewed available data for inconsistencies. Our assessments of data reliability revealed some concerns, which are discussed in this report. Specifically, some of the personnel inventory data provided by the military service headquarters were incomplete. To overcome this challenge, we gathered additional information from the Special Operations Command’s service components. In addition, we interviewed officials with the service headquarters and the Special Operations Command’s service components who were knowledgeable about the data to discuss the validity of the information provided to us. We concluded that the data were sufficiently reliable for the purposes of our objectives. We conducted our review from April 2005 through June 2006 in accordance with generally accepted government auditing standards. We interviewed officials and obtained documentation at the following locations:
U.S. Army Headquarters, Washington, D.C.
U.S. Army Reserve Command, Ft. McPherson, Georgia
U.S. Army Special Operations Command, Fort Bragg, North Carolina
Chief of Naval Operations, Arlington, Virginia
Naval Recruiting Command, Millington, Tennessee
Naval Special Warfare Command, Coronado, California
U.S. Marine Corps Headquarters, Washington, D.C.
U.S. Air Force Headquarters, Washington, D.C.
Air Education and Training Command, Randolph Air Force Base, Texas
Air Force Special Operations Command, Hurlburt Field, Florida
Office of the Secretary of Defense (Comptroller), Washington, D.C.
Office of the Secretary of Defense (Personnel and Readiness), Washington, D.C.
Office of the Secretary of Defense (Special Operations and Low Intensity Conflict), Washington, D.C.
Section 167(j) of Title 10, U.S. Code, lists 10 activities over which the Special Operations Command exercises authority insofar as they relate to special operations. Table 5 defines these activities. In addition to the contact named above, David Moser, Assistant Director; John Pendleton, Assistant Director; Colin Chambers, Jeremy Manion, Stephanie Moriarty, Joseph Rutecki, Christopher Turner, Matthew Ullengren, Cheryl Weissman, and Gerald Winterlin also made key contributions to this report.
Since the onset of the Global War on Terrorism, the Department of Defense (DOD) has taken steps to expand the role of the United States Special Operations Command (Command) and its forces. In response, the Command has transformed its headquarters to coordinate counterterrorism activities, and DOD has increased funding and the number of special operations forces positions. Given the expanded mission, it is critical that the Command have personnel with the right knowledge and skill sets. GAO was asked to assess: (1) whether the Command has determined all of the personnel requirements needed to meet its expanded role; (2) the progress and challenges in meeting growth goals; and (3) any effect of deployments on the Command's ability to provide trained forces, and the progress made in managing deployments. GAO performed its work at the Special Operations Command and its service components, analyzed personnel data against requirements, and examined policies and directives. Although DOD plans to significantly increase the number of special operations forces personnel, the Special Operations Command has not yet fully determined all of the personnel requirements needed to meet its expanded mission. While it has determined the number of personnel needed to increase its number of warfighter units, it has not completed analyses to determine (a) how many headquarters staff are needed to train and equip these additional warfighters or (b) how many headquarters staff are needed to plan and synchronize global actions against terrorist networks--a new mission for the Command. DOD plans to begin increasing the number of headquarters positions and has requested funds for these positions in its fiscal year 2007 budget request. Until these analyses are completed, the Special Operations Command cannot provide assurances to the Secretary of Defense and the Congress as to whether currently planned growth in the number of personnel for the Command's headquarters will meet, exceed, or fall short of the requirements needed to address the Command's expanded mission. The military services and the Special Operations Command have made progress since fiscal year 2000 in recruiting, training, and retaining special operations forces personnel, but they must overcome persistently low personnel inventory levels and, in certain specialties, insufficient numbers of newly trained personnel to meet DOD's plan to increase the number of special operations forces. In addition, GAO's review of the service components' annual reports required by the Special Operations Command shows that the reports have not provided the information needed to determine whether the components have enough personnel to meet current and future requirements. Without such information, the Command will be unable to determine whether the service components' human capital management approaches, including recruiting, training, and retention strategies, will be effective in meeting the planned growth targets. Since fiscal year 2000, the number of special operations forces personnel deployed for operations has greatly increased, and the number deployed for training has simultaneously decreased. The Special Operations Command has taken action to manage the challenge of increased deployments; in August 2005, it began requiring active duty personnel to remain at least an equal amount of time at home as deployed. But the Command's service components have not consistently or fully implemented this policy.
This is because the policy lacks clear guidance on the length of time for which the components must ensure that personnel remain within the deployment policy guidelines. In addition, officials with the Command's Army and Navy service components expressed concerns regarding the reliability of the information they use to track the deployments of their personnel. Without consistent and reliable data, the Special Operations Command does not have the information it needs to effectively manage the personnel deployments of special operations forces, which affects its ability to maintain the readiness, retention, and training of these personnel.
VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and memorials. According to VA, its employees maintain the largest integrated health care system in the nation for approximately 6 million patients, provide compensation and benefits for about 4 million veterans and beneficiaries, and maintain about 3 million gravesites at 164 properties. The use of information technology (IT) is crucial to the department’s ability to provide these benefits and services, but without adequate protections, VA’s systems and information are vulnerable to those with malicious intent who wish to exploit the information. The evolving array of cyber-based threats can jeopardize the confidentiality, integrity, and availability of federal information systems and the information they contain. These threats can be unintentional or intentional. Unintentional threats can be caused by natural disasters; defective equipment; or the actions of careless, inattentive, or untrained employees that inadvertently disrupt systems. Intentional threats include both targeted and untargeted attacks from a variety of sources. These include disgruntled employees, criminal groups, hackers, and foreign nations engaged in espionage and information warfare. Such threat sources vary in terms of the types and capabilities of the actors, their willingness to act, and their motives. These threat sources make use of various techniques to compromise information or adversely affect computers, software, networks, an organization’s operations, an industry, or the Internet itself. Such techniques include, among others, denial-of-service attacks and malicious software codes or programs. The unique nature of cyber-based attacks can vastly enhance their reach and impact, resulting in the loss of sensitive information and damage to economic and national security, the loss of privacy, identity theft, and the compromise of proprietary information or intellectual property. The increasing number of incidents reported by federal agencies has further underscored the need to manage and bolster the security of the government’s information systems. The number of incidents affecting VA’s information, computer systems, and networks has generally risen over the last several years. Specifically, in fiscal year 2007, the department reported 4,834 information security incidents to US-CERT; in fiscal year 2013, it reported 11,382 incidents. These included incidents related to unauthorized access; denial-of-service attacks; installation of malicious code; improper usage of computing resources; and scans, probes, and attempted access, among others. Figure 1 shows the overall increase in the total number of incidents VA reported to US-CERT for fiscal years 2007 through 2013. In addition, reports of incidents affecting VA’s systems and information highlight the serious impact that inadequate information security can have on, among other things, the confidentiality, integrity, and availability of veterans’ personal information. For example: According to a VA official, in January 2014 a software defect in VA’s eBenefits system improperly allowed users to view the personal information of other veterans. According to this official, this defect potentially allowed almost 5,400 users to view data of over 1,300 veterans and/or their dependents.
In May 2010, it was reported that VA officials had notified lawmakers of breaches involving the personal data of thousands of veterans, which had resulted from the theft of an unencrypted laptop computer from a VA contractor and a separate incident at a VA facility. To help protect against threats to federal systems, the Federal Information Security Management Act of 2002 (FISMA) sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. The framework creates a cycle of risk management activities necessary for an effective security program. In order to ensure the implementation of this framework, FISMA assigns specific responsibilities to agencies, the Office of Management and Budget (OMB), the National Institute of Standards and Technology (NIST), and agency inspectors general. Specifically, each agency is required to develop, document, and implement an agency-wide information security program and to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. For its part, OMB is required to develop and oversee the implementation of policies, principles, standards, and guidelines on information security in federal agencies. It is also responsible for reviewing, at least annually, and approving or disapproving agency information security programs. NIST’s responsibilities include the development of security standards and guidance. Finally, inspectors general are required to evaluate annually the information security program and practices of their agency and submit the results to OMB. Further, Congress enacted the Veterans Benefits, Health Care, and Information Technology Act of 2006 after a serious loss of data earlier that year revealed weaknesses in VA’s handling of personal information. Under the act, VA’s chief information officer is responsible for establishing, maintaining, and monitoring department-wide information security policies, procedures, control techniques, training, and inspection requirements as elements of the department’s information security program. The act also reinforced the need for VA to establish and carry out the responsibilities outlined in FISMA, and included provisions to further protect veterans and service members from the misuse of their sensitive personal information and to inform Congress regarding security incidents involving the loss of that information. Information security remains a long-standing challenge for the department. Specifically, VA has consistently had weaknesses in major information security control areas. For fiscal years 2007 through 2013, deficiencies were reported in each of the five major categories of information security controls as defined in our Federal Information System Controls Audit Manual. Access controls ensure that only authorized individuals can read, alter, or delete data. Configuration management controls provide assurance that only authorized software programs are implemented. Segregation of duties reduces the risk that one individual can independently perform inappropriate actions without detection. Contingency planning includes continuity of operations, which provides for the prevention of significant disruptions of computer-dependent operations.
Security management includes an agency-wide information security program to provide the framework for ensuring that risks are understood and that effective controls are selected and properly implemented. In fiscal year 2013, for the 12th year in a row, VA’s independent auditor reported that inadequate information system controls over financial systems constituted a material weakness. Specifically, the auditor noted that while VA had made improvements in some aspects of its security program, it continued to have control deficiencies in security management, access controls, configuration management, and contingency planning. In particular, the auditor identified significant technical weaknesses in databases, servers, and network devices that support transmitting financial and sensitive information between VA’s medical centers, regional offices, and data centers. According to the auditor, this was the result of an inconsistent application of vendor patches that could jeopardize the data integrity and confidentiality of VA’s financial and sensitive information. In addition, the VA OIG reported in 2013 that development of an effective information security program and system security controls continued to be a major management challenge for the department. The OIG noted that VA had taken steps to, for example, establish a program for continuous monitoring and implement standardized security controls across the enterprise. However, the OIG continued to identify weaknesses in the department’s security controls and noted that improvements were needed in key controls to prevent unauthorized access, alteration, or destruction of major applications and general support systems. These more recent findings are consistent with the challenges VA has historically faced in implementing an effective information security program. In a number of products issued beginning in 1998, we have identified wide-ranging, often recurring deficiencies in the department’s information security controls. These weaknesses existed, in part, because VA had not fully implemented key components of a comprehensive information security program. The persistence of similar weaknesses more than 16 years later indicates the need for stronger, more focused management attention and action to ensure that VA fully implements a robust security program. In addition, we have recently reported on issues regarding the protection of personally identifiable information (PII) at federal agencies, including VA. In December 2013, we issued a report (GAO-14-34) on our review of agency practices in responding to data breaches involving PII, in which we determined the extent to which selected agencies had developed and implemented policies and procedures for responding to such breaches. Regarding VA, we found that the department had addressed relevant management and operational practices in its data breach response policies and procedures. In addition, it had implemented its policies and procedures by preparing breach reports and performing risk assessments for cases of data breach. However, VA had not documented the rationale for all its risk determinations, documented the number of individuals affected by breaches, consistently notified individuals affected by high-risk breaches, consistently offered credit monitoring to affected individuals, or consistently documented lessons learned from PII breaches. Accordingly, we recommended that VA take specific steps to address these weaknesses. VA agreed with some, but not all, of these recommendations.
We maintained that all our recommendations were warranted. In January 2014, we also reported (GAO-14-44) on agencies’ implementation of the Computer Matching and Privacy Protection Act of 1988, which amended the Privacy Act to regulate the computerized matching of personal information for purposes of determining eligibility for federal benefits programs. Under these amendments, agencies are required to establish formal agreements with other agencies to share data for computer matching, conduct cost-benefit analyses of such agreements, and establish data integrity boards to review and report on agency computer matching activities. Specifically regarding VA, we found that the department generally established computer matching agreements for its matching activities and conducted cost-benefit analyses of proposed matching programs. However, the completeness of these analyses varied in that they did not always include key costs and benefits needed to determine the value of a computer matching program. We noted that VA’s guidance for developing cost-benefit analyses did not call for including key elements. We recommended that VA revise its guidance on cost-benefit analyses and ensure that its data integrity board review the analyses to make sure they include cost savings information. VA concurred and described steps it would take to implement our recommendations. The Subcommittee is considering draft legislation that is intended to improve VA’s information security. The draft bill addresses governance of the department’s information security program and security controls for the department’s information systems. It requires the Secretary of Veterans Affairs to improve the transparency and coordination of the information security program and to ensure the security of the department’s critical network infrastructure, computers and servers, operating systems, and web applications, as well as its Veterans Health Information Systems and Technology Architecture system, from vulnerabilities that could affect the confidentiality of veterans’ sensitive personal information. For each of these elements of VA’s computing environment, the draft bill identifies specific security-related actions and activities that VA is required to perform. Many of the actions and activities specified in the proposed legislation are sound information security practices and consistent with federal guidelines, if implemented on a risk-based basis. FISMA requires agencies to implement policies and procedures that are based on risk assessments, cost-effectively reduce information security risks to an acceptable level, and ensure that information security is addressed throughout the life cycle of each agency information system. The provisions in the draft bill may prompt VA to refocus its efforts on actions that are necessary to improve the security of its information systems and information. In a dynamic environment where innovations in technology and business practices supplant the status quo, control activities that are appropriate today may not be appropriate in the future. Emphasizing that specific security-related actions should be taken based on risk could help ensure that VA is better able to meet the objectives outlined in the draft bill. Doing this would allow for the natural evolution of security practices as circumstances warrant and may also prevent the department from focusing exclusively on performing the specified actions in the draft bill to the detriment of performing other essential security activities.
In summary, VA’s history of long-standing challenges in implementing an effective information security program has continued, with the department exhibiting weaknesses in all major categories of security controls in fiscal year 2013. These challenges have been further highlighted by recent determinations that weaknesses in information security have contributed to a material weakness in VA’s internal controls over financial reporting and continue to constitute a major management challenge for the department. While the draft legislation being considered by the Subcommittee may prod VA into taking needed corrective actions, emphasizing that these should be taken based on risk can provide the flexibility needed to respond to an ever-changing technology and business environment. Chairman Coffman, Ranking Member Kirkpatrick, and Members of the Subcommittee, this concludes my statement today. I would be happy to answer any questions you may have. If you have any questions concerning this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov or Nabajyoti Barkakati at (202) 512-4499 or barkakatin@gao.gov. Other individuals who made key contributions to this statement include Jeffrey L. Knott and Anjalique Lawrence (assistant directors), Jennifer R. Franks, Lee McCracken, and Tyler Mountjoy. Computer Matching Act: OMB and Selected Agencies Need to Ensure Consistent Implementation. GAO-14-44. Washington, D.C.: January 13, 2014. Information Security: Agency Responses to Breaches of Personally Identifiable Information Need to Be More Consistent. GAO-14-34. Washington, D.C.: December 9, 2013. Federal Information Security: Mixed Progress in Implementing Program Components; Improved Metrics Needed to Measure Effectiveness. GAO-13-776. Washington, D.C.: September 26, 2013. Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination. GAO-12-8. Washington, D.C.: November 29, 2011. Information Technology: Department of Veterans Affairs Faces Ongoing Management Challenges. GAO-11-663T. Washington, D.C.: May 11, 2011. Information Security: Federal Agencies Have Taken Steps to Secure Wireless Networks, but Further Actions Can Mitigate Risk. GAO-11-43. Washington, D.C.: November 30, 2010. Information Security: Veterans Affairs Needs to Resolve Long-Standing Weaknesses. GAO-10-727T. Washington, D.C.: May 19, 2010. Information Security: Federal Guidance Needed to Address Control Issues with Implementing Cloud Computing. GAO-10-513. Washington, D.C.: May 27, 2010. Information Security: Agencies Need to Implement Federal Desktop Core Configuration Requirements. GAO-10-202. Washington, D.C.: March 12, 2010. Veterans: Department of Veterans Affairs’ Implementation of Information Security Education Assistance Program. GAO-10-170R. Washington, D.C.: December 18, 2009. Department of Veterans Affairs: Improvements Needed in Corrective Action Plans to Remediate Financial Reporting Material Weaknesses. GAO-10-65. Washington, D.C.: November 16, 2009. Information Security: Protecting Personally Identifiable Information. GAO-08-343. Washington, D.C.: January 25, 2008. Information Security: Sustained Management Commitment and Oversight Are Vital to Resolving Long-Standing Weaknesses at the Department of Veterans Affairs. GAO-07-1019. Washington, D.C.: September 7, 2007. Privacy: Lessons Learned about Data Breach Notification. GAO-07-657. Washington, D.C.: April 30, 2007. Information Security: Veterans Affairs Needs to Address Long-Standing Weaknesses. GAO-07-532T. Washington, D.C.: February 28, 2007.
Veterans Affairs: Leadership Needed to Address Information Security Weaknesses and Privacy Issues. GAO-06-866T. Washington, D.C.: June 14, 2006. Veterans Affairs: The Critical Role of the Chief Information Officer Position in Effective Information Technology Management. GAO-05-1017T. Washington, D.C.: September 14, 2005. Information Security: Weaknesses Persist at Federal Agencies Despite Progress Made in Implementing Related Statutory Requirements. GAO-05-552. Washington, D.C.: July 15, 2005. Veterans Affairs: Sustained Management Attention Is Key to Achieving Information Technology Results. GAO-02-703. Washington, D.C.: June 12, 2002. VA Information Technology: Progress Made, but Continued Management Attention Is Key to Achieving Results. GAO-02-369T. Washington, D.C.: March 13, 2002. VA Information Technology: Important Initiatives Begun, Yet Serious Vulnerabilities Persist. GAO-01-550T. Washington, D.C.: April 4, 2001. VA Information Technology: Progress Continues Although Vulnerabilities Remain. T-AIMD-00-321. Washington, D.C.: September 21, 2000. VA Information Systems: Computer Security Weaknesses Persist at the Veterans Health Administration. AIMD-00-232. Washington, D.C.: September 8, 2000. Information Security: Serious and Widespread Weaknesses Persist at Federal Agencies. AIMD-00-295. Washington, D.C.: September 6, 2000. Information Technology: VA Actions Needed to Implement Critical Reforms. AIMD-00-226. Washington, D.C.: August 16, 2000. Information Systems: The Status of Computer Security at the Department of Veterans Affairs. AIMD-00-5. Washington, D.C.: October 4, 1999. VA Information Systems: The Austin Automation Center Has Made Progress in Improving Information System Controls. AIMD-99-161. Washington, D.C.: June 8, 1999. Major Management Challenges and Program Risks: Department of Veterans Affairs. OCG-99-15. Washington, D.C.: January 1, 1999. Information Systems: VA Computer Control Weaknesses Increase Risk of Fraud, Misuse, and Improper Disclosure. AIMD-98-175. Washington, D.C.: September 23, 1998.
The use of information technology is crucial to VA's ability to carry out its mission of ensuring that veterans receive medical care, benefits, social support, and memorials. However, without adequate security protections, VA's systems and information are vulnerable to exploitation by an array of cyber-based threats, potentially resulting in, among other things, the compromise of veterans' personal information. GAO has identified information security as a government-wide high-risk area since 1997. The number of information security incidents reported by VA has more than doubled over the last several years, further highlighting the importance of securing the department's systems and the information that resides on them. GAO was asked to provide a statement discussing the challenges VA has experienced in effectively implementing information security, as well as to comment on a recently proposed bill aimed at improving the department's efforts to secure its systems and information. In preparing this statement, GAO relied on previously published work as well as a review of recent VA inspector general and other reports related to the department's security program. GAO also analyzed the draft legislation in light of existing federal requirements and best practices for information security. The Department of Veterans Affairs (VA) continues to face long-standing challenges in effectively implementing its information security program. Specifically, from fiscal year 2007 through 2013, VA has consistently had weaknesses in key information security control areas. In addition, in fiscal year 2013, the department's independent auditor reported, for the 12th year in a row, that weaknesses in information system controls over financial systems constituted a material weakness. Further, the department's inspector general has identified development of an effective information security program and system security controls as a major management challenge for VA. These findings are consistent with challenges GAO has identified in VA's implementation of its security program going back to the late 1990s. More recently, GAO has reported and made recommendations on issues regarding the protection of personally identifiable information at federal agencies, including VA. These were related to developing and implementing policies and procedures for responding to data breaches, and implementing protections when engaging in computerized matching of data for the purposes of determining individuals' eligibility for federal benefits. Draft legislation being considered by the Subcommittee addresses the governance of VA's information security program and security controls for the department's systems. It would require the Secretary of VA to improve transparency and coordination of the department's security program and ensure the security of its critical network infrastructure, computers and servers, operating systems, and web applications, as well as its core veterans health information system. Toward this end, the draft legislation prescribes specific security-related actions. Many of the actions and activities specified in the bill are sound information security practices and consistent with federal guidelines. If implemented on a risk-based basis, they could prompt VA to refocus its efforts on steps needed to improve the security of its systems and information.
At the same time, the constantly changing nature of technology and business practices introduces the risk that control activities that are appropriate in the department's current environment may not be appropriate in the future. In light of this, emphasizing that actions should be taken on the basis of risk may provide the flexibility needed for security practices to evolve as changing circumstances warrant and help VA meet the security objectives in the draft legislation.
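As a rough check on the incident trend cited above, the two totals reported to US-CERT (4,834 incidents in fiscal year 2007 and 11,382 in fiscal year 2013) can be compared directly. The short calculation below is a minimal sketch using only those two reported figures; it simply confirms the "more than doubled" characterization and adds nothing beyond that arithmetic.

# Quick arithmetic check of the incident trend cited in the statement.
# The two totals are the figures reported above; the rest is illustrative calculation.
incidents_fy2007 = 4_834
incidents_fy2013 = 11_382

growth_ratio = incidents_fy2013 / incidents_fy2007        # roughly 2.35x
percent_increase = (growth_ratio - 1) * 100                # roughly 135 percent

print(f"FY2007 -> FY2013 growth: {growth_ratio:.2f}x ({percent_increase:.0f}% increase)")
print("More than doubled:", incidents_fy2013 > 2 * incidents_fy2007)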
To protect its critical assets, DOD has established several protection measures for weapon systems. These measures include information assurance to protect information and information systems, software protection to prevent the unauthorized distribution and exploitation of critical software, and anti-tamper techniques to help delay exploitation of technologies through means such as reverse engineering when U.S. weapons are exported or lost on the battlefield. Examples of anti-tamper techniques include software encryption, which scrambles software instructions to make them unintelligible without first being reprocessed through a deciphering technique, and hardware protective coatings designed to make it difficult to extract or dissect components without damaging them. In 1999, the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L) issued a policy memorandum for implementing anti-tamper protection in acquisition programs. In the following year, AT&L issued a policy memorandum stating that technologies should be routinely assessed during the acquisition process to determine if they are critical and if anti-tamper techniques are needed to protect these technologies. In 2001, an AT&L policy memorandum designated the Air Force as the Anti-Tamper Executive Agent. The executive agent’s office, which currently has four staff, is responsible for implementing DOD’s anti-tamper policy and managing anti-tamper technology development through the Air Force Research Laboratory. The executive agent also holds periodic information sessions to educate the acquisition community about anti-tamper policy, initiatives, and technology developments. To coordinate activities, military services and defense agencies, such as the Missile Defense Agency, have an anti-tamper point of contact. Program managers are responsible for ensuring anti-tamper protection is incorporated on any weapon system with critical technologies that need protection. Since it is not feasible to protect every technology, program managers are to conduct an assessment to determine if anti-tamper protection is needed. When assessing if anti-tamper protection is needed, program managers make several key decisions regarding the identification of critical technologies, assessment of threats and vulnerabilities, and determination of anti-tamper techniques or solutions. The process begins with determining whether their system’s critical program information includes any critical technologies. If it is determined that the system has no critical technologies, program managers are to document the decision and request concurrence from either the office within their component that is designated with anti-tamper responsibilities or the Anti-Tamper Executive Agent. For systems that are determined to have critical technologies, the next key steps are to identify potential threats and vulnerabilities and select anti-tamper techniques to protect those technologies. Techniques are ultimately verified and validated by a team composed of representatives from the DOD components. The program manager documents decisions in an annex of the program protection plan. In 2004, we reported that program managers had difficulty in carrying out DOD’s anti-tamper policy on individual weapons, such as identifying critical technologies, and experienced cost increases or schedule delays when applying anti-tamper techniques—particularly when the techniques are not fully developed or when the systems are already in design or production.
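To make the software-encryption technique described above more concrete, the sketch below shows, in deliberately simplified form, the general idea of scrambling a code image so that it is unintelligible until it is deciphered before use. This is a hypothetical toy illustration only: the byte values and key are invented, simple XOR scrambling is used purely for readability, and the sketch does not represent any actual DOD, service, or program anti-tamper implementation.

# Toy illustration of "software encryption" as an anti-tamper concept: scramble a
# code image with a key so it is unintelligible until deciphered. Values and key
# are invented; real anti-tamper techniques are far stronger than this sketch.
def scramble(code_bytes: bytes, key: bytes) -> bytes:
    """XOR each byte of the code image with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(code_bytes))

def decipher(scrambled: bytes, key: bytes) -> bytes:
    """XOR is symmetric, so applying the same key recovers the original bytes."""
    return scramble(scrambled, key)

if __name__ == "__main__":
    original = b"\x55\x48\x89\xe5\x90\xc3"   # stand-in for a few machine instructions
    key = b"\xa7\x3c"                        # hypothetical protection key
    protected = scramble(original, key)
    assert protected != original             # unintelligible without the key
    assert decipher(protected, key) == original
    print("scrambled:", protected.hex(), "| recovered:", decipher(protected, key).hex())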
We made several recommendations, including increasing oversight over the identification of critical technologies across programs, improving tools and resources for program managers in identifying critical technologies, ensuring early identification of anti-tamper costs and solutions, monitoring the development of generic anti-tamper solutions and evaluating their effectiveness, and developing a business case to determine whether the current organizational structure and resources are adequate. DOD concurred or partially concurred with these recommendations. DOD has taken some steps to implement our recommendations, including identifying available anti-tamper technical resources and developing a searchable spreadsheet of critical technologies, incorporating information in the Defense Acquisition Guidebook on the need for early identification of anti-tamper solutions in a weapon system, and sponsoring a study on anti-tamper techniques and their general effectiveness. While DOD has taken these steps to address parts of the recommendations, all remain open. DOD has recently taken several actions aimed at raising awareness about its anti-tamper policy and assisting program managers in implementing anti-tamper protection on a weapon system. Despite these actions, DOD still lacks departmentwide direction to implement its anti-tamper policy. Without such direction, DOD components are left to develop their own initiatives to assist program managers in implementing anti-tamper protection. While individual efforts are important, such as a database to track critical program information DOD-wide, their effectiveness may be limited because they have yet to be accepted and adopted across all DOD components. Since our 2004 report, DOD, through the Anti-Tamper Executive Agent, has developed some resources aimed at assisting program managers as they go through the anti-tamper decision process. DOD’s resources range from providing general information about the anti-tamper policy to research on anti-tamper solutions. Specifically, DOD has developed a guidebook that includes a checklist to assist program managers in identifying security, management, and technical responsibilities when incorporating anti-tamper protection on a weapon system; developed a searchable spreadsheet to assist program managers in identifying critical technologies; developed a Web site for program managers to provide general anti-tamper information, policy resources, conference briefings, implementation resources, and current events; coordinated with Defense Acquisition University to design and launch an online learning module on anti-tamper protection; funded Sandia National Laboratories to study anti-tamper techniques and their general effectiveness; and sponsored research to develop generic anti-tamper techniques through Small Business Innovation Research, a research program that funds early-stage research and development projects at small technology companies. DOD has also updated two acquisition documents with general anti-tamper information. The first document—DOD Instruction 5000.2, Operation of a Defense Acquisition System—currently states that one of the purposes of the System Development and Demonstration phase of a weapon system is to ensure affordability and protection of critical program information by implementing appropriate solutions such as anti-tamper protection.
The second document—the Defense Acquisition Guidebook—has been updated to include some basic information on the importance of implementing anti-tamper protection early in the development of a weapon system and describes program managers’ overall responsibilities for implementing the anti-tamper policy. While DOD has issued broad policy memorandums that reflect the department’s desire for routinely assessing weapon systems to determine if anti-tamper protection is needed, the department has not fully incorporated the anti-tamper policy into its formal acquisition guidance. Specifically, DOD Instruction 5000.2 mentions anti-tamper protection, but the department has not provided direction for implementation of anti-tamper in a formal directive or instruction. Currently, the department is coordinating comments on a draft instruction (DOD Instruction 5200.39) on protection of critical program information that includes anti-tamper implementation. However, in commenting on the draft instruction, several DOD components have raised concerns about when and how to define critical program information that warrants protection, which have contributed to long delays in finalizing the instruction. In addition, the department has not provided specific guidance for program managers on how to implement anti-tamper protection in a DOD manual because DOD officials said this process cannot begin until the instruction is finalized. The date for finalizing the instruction has not yet been determined. Officials from the executive agent’s office stated that departmentwide direction would give credence to the anti-tamper policy in practice. Anti-tamper points of contact told us that the policy memorandums are not sufficient to ensure that program managers are implementing anti-tamper protection on weapon systems when necessary. One service anti-tamper point of contact stated that program managers might disregard the policy memorandums because they are high-level and broad. Another service anti-tamper point of contact said that implementation is ultimately left up to the individual program manager. While a program manager’s decision should be approved by the milestone decision authority and documented in the program protection plan, some service and program officials said that programs are not always asked about anti-tamper protection during the review. Lacking departmentwide direction for the anti-tamper policy, DOD components have been left to develop their own initiatives to assist program managers in anti-tamper implementation. However, the usefulness of these initiatives depends on the extent to which other components participate in these efforts. For example, the Missile Defense Agency developed a risk assessment model to help program managers identify how much anti-tamper is needed to protect critical technologies. Specifically, the model helps program managers assess the criticality of the technology relative to the risk of exploitation. However, when the Missile Defense Agency sought comments on the initiative, the executive agent and services indicated that it was too lengthy and complex to use. The executive agent, in coordination with anti-tamper points of contact from the Missile Defense Agency and services, has taken over this effort, and it is still in development. The Navy is also implementing an initiative: a database intended to capture the information that programs across DOD components have identified as critical.
Many officials we spoke with pointed to this database as a potential tool to improve identification of critical program information across DOD components. To date, the Navy and the Army are submitting information for the database, but the Missile Defense Agency and Air Force are not. The Missile Defense Agency anti-tamper point of contact stated that its information is classified at a level above what the database can support and its program managers will not submit information for the database unless DOD requires submissions by all DOD components. However, the Missile Defense Agency does have access to the database and uses it as a cross-check to determine if it is identifying similar critical program information. The Air Force has been briefed on the initiative but does not yet have consent from all of the commands to participate. Without full participation across all DOD components, the usefulness of this database as a tool to identify critical technologies that may need anti-tamper protection will be limited. To determine whether anti-tamper protection is needed, program managers must identify which technologies are deemed critical, determine the potential threats and vulnerabilities to these technologies, and identify sufficient anti-tamper solutions to protect the technologies. Such decisions involve a certain level of subjectivity. However, program managers lack the information or tools needed to make informed assessments at these key decision points. As a result, some technologies that need protection may not be identified or may not have sufficient protection. Determining technologies that are critical is largely left to the discretion of the program managers. While DOD has some resources available to program managers to help identify critical technologies, they may be of limited use. For example, the executive agent’s searchable spreadsheet of critical technologies may not be comprehensive because it relies on DOD’s Militarily Critical Technologies List, which we reported in 2006 was largely out of date. Also, some program offices have used a series of questions established in a 1994 DOD manual on acquisition systems protection to help guide their discussions on what is critical. However, these questions are broad and subject to interpretation, and can result in different conclusions, depending on who is involved in the decision-making process. In addition, identifying what is critical varies by DOD component and sometimes by program office. For example, one Air Force program office tried various approaches, including teams of subject matter experts, over 2 years to identify its list of critical program information. In contrast, the Army took the initiative to establish a research center to assist program managers in identifying critical program information, but Army officials stated that the approach used by the center has led to an underestimation of critical program information and critical technologies in programs. At the same time, there has been limited coordination across programs on technologies that have been identified as critical—creating a stovepiped process—which could result in one technology being protected under one program and not protected under another. While informal coordination can occur, programs did not have a formal mechanism for coordinating with other programs, including those within their service.
For example, officials from one program office stated they had little interaction with programs within their service or other services to ensure protection of similar technologies. A program under one joint program executive office had not coordinated with other programs to identify similar technologies as critical. In addition, according to an Army official, contractors who have worked on programs across services have questioned why one service is applying anti-tamper solutions to a technology that another service has not identified as critical. Finally, one program office we spoke with identified critical program information on its system but indicated that a similar system in another service had not identified any critical program information and, therefore, had no plans to implement anti-tamper protection. Despite the risk that some technologies that need protection may not be identified or may not be protected across programs, no formal mechanism exists within DOD to provide a horizontal view of what is critical. However, any effort to do so could be undermined by the programs’ and services’ different definitions and interpretations of “critical program information” and “critical technologies.” The Anti-Tamper Executive Agent defines critical program information as capturing all critical technologies. In contrast, the Army’s interpretation is that critical program information only includes critical technologies that are state-of-the-art. For the Navy, critical program information includes software, while hardware is part of what the Navy defines as critical technologies. One program that is part of a joint program office identified critical program information as including company proprietary information. As a result, tracking critical program information may not provide a horizontal view of all technologies that services and programs have identified as needing anti-tamper protection. Once a program office identifies critical technologies, the next step in the anti-tamper decision process is to identify threats to those technologies. DOD’s Program Manager’s Guidebook and Checklist for Anti-tamper states that multiple threat assessments should be requested from either the service intelligence organization or counterintelligence organization. One program office we visited stated that it has requested and received multiple threat assessments from the intelligence community, which have sometimes contradicted one another, leaving the program office to decipher the information and determine the threat. According to an anti-tamper point of contact, other programs have received contradictory information—typically relating to foreign countries’ capabilities to reverse engineer. The potential impact of contradictory intelligence reports is twofold: If the threat is deemed to be low but is actually high, the technology is susceptible to reverse engineering; conversely, if the threat is deemed to be high and is actually low, the anti-tamper solution is more robust than needed. To assist with the process of identifying threats, program offices may request threat assessments from a group within the Defense Intelligence Agency. However, this group was not able to complete assessments for approximately 6 months during 2006. While the group has resumed completing assessments, an agency official stated that it is not able to produce as many assessments as before due to limited resources.
The Defense Intelligence Agency does not turn down program offices that may request assessments, but does have to put them in a queue and provide them with previous assessments, if they exist, until it can complete a full assessment for the program office. One program office indicated that it took 6 to 9 months for the agency to complete its assessment. Program managers also lack the tools needed to identify the optimal anti-tamper solutions for those critical technologies that are vulnerable to threats. Most notably, program managers lack a risk model to assess the relative strengths of different anti-tamper solutions and a tool to help estimate their costs. According to National Security Agency officials, who are available to provide support to program managers considering or implementing anti-tamper protection, program managers and contractors sometimes have difficulty determining appropriate solutions. Four of five programs we spoke with that had experience in this area of the anti-tamper decision process had difficulty identifying how much anti-tamper protection was enough to protect a critical technology. For example, one program official told us that an anti-tamper solution developed for one of the program’s critical technologies may not be sufficient to prevent reverse engineering. Another program office stated that it is difficult to choose between competing contractors without knowing how to determine the appropriate level of anti-tamper protection needed. An anti-tamper point of contact said that program managers need a tool to help them assess the criticality of a technology versus the types of threats to that technology. Implementing a suboptimal anti-tamper solution can have cost and performance implications for the program. Specifically, if the solution provides less anti-tamper protection than is needed, the program may have to retrofit additional anti-tamper protection to allow for a more robust solution. Not only can such retrofitting add to a program’s costs, it can compromise performance. Given limited resources and tools for determining anti-tamper solutions, some program office officials told us that, to satisfy anti-tamper requirements, they relied on other protection measures. For example, officials in one program office stated that anti-tamper protection and information assurance were interchangeable and indicated that following the National Security Agency’s information assurance requirements—which number in the hundreds—should be sufficient as an anti-tamper solution for this system. This same program was not aware of anti-tamper resources and did not coordinate with an anti-tamper validation and verification team on its solutions. Also, an official from another program office indicated that anti-tamper protection and information assurance are similarly defined. While DOD and service officials agreed that some information assurance and anti-tamper measures may overlap, fulfilling information assurance requirements does not guarantee a sufficient anti-tamper solution. In establishing various policies to protect its critical assets, DOD saw anti-tamper as a key way to preserve U.S. investment in critical technologies while operating in an environment of coalition warfare and a globalized industry. Program managers are ultimately responsible for implementing DOD’s anti-tamper policy. However, a lack of direction, information, and tools from DOD to implement its policy has created significant challenges for program managers.
Further, this policy can compete with the demands of meeting program cost and schedule objectives, particularly when the optimal anti-tamper solution is identified late in the schedule. Until DOD establishes a formal directive or instruction for implementing its policy departmentwide and equips program managers with adequate implementation tools, program managers will continue to face difficulties in identifying critical technologies and implementing anti-tamper protection. As DOD examines its policies for protecting critical assets, we are recommending that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in coordination with the Anti-Tamper Executive Agent and the Under Secretary of Defense for Intelligence, to issue departmentwide direction for application of its anti-tamper policy that prescribes how to carry out the policy and establishes definitions for critical program information and critical technologies. To help ensure the effectiveness of anti-tamper implementation, we also recommend that the Secretary of Defense direct the Anti-Tamper Executive Agent to identify and provide additional tools to assist program managers in the anti-tamper decision process. In written comments on a draft of this report, DOD concurred with our recommendation that the Secretary of Defense direct the Anti-Tamper Executive Agent to identify additional tools to assist program managers in the anti-tamper decision process. DOD stated that the Anti-Tamper Executive Agent is drafting Anti-Tamper Standard Guidelines to facilitate proper implementation of anti-tamper protection across the department. DOD did not concur with our recommendation that the Secretary of Defense direct the Under Secretary of Defense (AT&L) in coordination with the Anti-Tamper Executive Agent and the Under Secretary of Defense (Intelligence) to issue departmentwide direction for application of its anti-tamper policy that prescribes how to carry out the policy and establishes definitions for critical program information and critical technologies. DOD stated that the Under Secretary of Defense (Intelligence) has primary responsibility for DOD Directive 5200.39, a security and counterintelligence support directive to acquisition programs, and its successor, DOD Instruction 5200.39 regarding protection of critical program information. The Under Secretary of Defense (Intelligence) is currently coordinating an update to this directive. Once it is issued, the department plans to update DOD 5200.1-M, which provides the execution standards and guidelines to meet the DOD Instruction 5200.39 policy. While DOD has issued broad policy memorandums beginning in 1999 that reflect the department’s desire for routinely assessing weapon systems to determine if anti-tamper protection is needed, the department has not fully incorporated anti-tamper policy into its formal acquisition guidance. As we have reported, service officials indicated collectively that these policy memorandums are high-level, broad, and leave implementation ultimately up to the individual program manager. DOD did not indicate when the update of DOD Directive 5200.39 might be complete and guidance on anti-tamper implementation issued.
We continue to believe that such direction is currently needed and that the Under Secretary of Defense for Acquisition, Technology, and Logistics, who issued the policy memorandums and is responsible for anti-tamper policy, should be involved in developing and providing the appropriate direction whether it be the update to DOD Directive 5200.39 or another vehicle. That direction should include how to implement the anti-tamper policy and how critical program information and critical technologies are defined. We continue to believe that the direction, which has been lacking since the policy was initiated in 1999, should not be further delayed. If DOD continues to experience delays in updating DOD Directive 5200.39, it should consider interim measures to meet the immediate need for anti-tamper direction. DOD’s letter is reprinted in appendix II. We are sending copies of this report to interested congressional committees, as well as the Secretary of Defense; the Director, Office of Management and Budget; and the Assistant to the President for National Security Affairs. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or calvaresibarra@gao.gov if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Others making key contributions to this report are listed in appendix III. To identify actions the Department of Defense (DOD) has taken to implement its anti-tamper policy since 2004, we reviewed DOD policies and guidance governing anti-tamper protection on weapon systems and obtained documents on various initiatives. We interviewed officials from the Anti-Tamper Executive Agent, military services, and other DOD components such as the Missile Defense Agency; Acquisition, Technology and Logistics; Defense Intelligence Agency; National Security Agency; and the Air Force Research Laboratory about initiatives or actions taken regarding anti-tamper. Through these interviews and documents, we also determined the status of our 2004 anti-tamper report recommendations. We interviewed DOD officials from Networks and Information Integration, Science and Technology, and Counterintelligence to discuss anti-tamper protection and how it relates to other program protection measures. To determine how program managers implemented DOD’s anti-tamper policy, we interviewed officials from 14 program offices. We are not identifying the names of the programs due to classification concerns. We conducted structured interviews with 7 of the 14 program offices to discuss and obtain documents about their experiences with implementing the anti-tamper decision process and identify any challenges they faced. We selected 6 of these programs from a list of weapon systems identified in Anti-Tamper Executive Agent, services, and component documents as considering and/or implementing anti-tamper protection and a seventh program considering anti-tamper that we identified during the course of our fieldwork. Systems we selected represented a cross section of acquisition programs and various types of systems in different phases of development. For the remaining programs, we interviewed 7 not identified by the Anti-Tamper Executive Agent or the services as considering and/or implementing anti-tamper to obtain their viewpoints on DOD’s anti-tamper policy and implementation. 
We selected these programs by identifying lists of DOD acquisition programs and comparing them to the Anti-Tamper Executive Agent’s, services’, and components’ lists of programs considering and/or implementing anti-tamper. We did not evaluate whether programs had implemented sufficient anti-tamper protection. In addition to the contact named above, Anne-Marie Lasowski (Assistant Director), Gregory Harmon, Molly Whipple, Karen Sloan, John C. Martin, and Alyssa Weir made major contributions to this report. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. Export Controls: Challenges Exist in Enforcement of an Inherently Complex System. GAO-07-265. Washington, D.C.: December 20, 2006. Defense Technologies: DOD’s Critical Technologies Lists Rarely Inform Export Control and Other Policy Decisions. GAO-06-793. Washington, D.C.: July 28, 2006. President’s Justification of the High Performance Computer Control Threshold Does Not Fully Address National Defense Authorization Act of 1998 Requirements. GAO-06-754R. Washington, D.C.: June 30, 2006. Export Controls: Improvements to Commerce’s Dual-Use System Needed to Ensure Protection of U.S. Interests in the Post-9/11 Environment. GAO-06-638. Washington, D.C.: June 26, 2006. Defense Trade: Enhancements to the Implementation of Exon-Florio Could Strengthen the Law’s Effectiveness. GAO-05-686. Washington, D.C.: September 28, 2005. Industrial Security: DOD Cannot Ensure Its Oversight of Contractors under Foreign Influence Is Sufficient. GAO-05-681. Washington, D.C.: July 15, 2005. Defense Trade: Arms Export Control Vulnerabilities and Inefficiencies in the Post-9/11 Security Environment. GAO-05-468R. Washington, D.C.: April 7, 2005. Defense Trade: Arms Export Control System in the Post-9/11 Environment. GAO-05-234. Washington, D.C.: February 16, 2005. Defense Acquisitions: DOD Needs to Better Support Program Managers’ Implementation of Anti-Tamper Protection. GAO-04-302. Washington, D.C.: March 31, 2004. Defense Trade: Better Information Needed to Support Decisions Affecting Proposed Weapons Transfers. GAO-03-694. Washington, D.C.: July 11, 2003.
The Department of Defense (DOD) invests billions of dollars in sophisticated weapon systems and technologies. These may be at risk of exploitation when exported, stolen, or lost during combat or routine missions. In an effort to minimize this risk, DOD developed an anti-tamper policy in 1999, calling for DOD components to implement anti-tamper techniques for critical technologies. In March 2004, GAO reported that program managers had difficulties implementing this policy, including identifying critical technologies. This follow-up report (1) describes recent actions DOD has taken to implement its anti-tamper policy and (2) identifies challenges facing program managers. GAO reviewed documentation on actions DOD has taken since 2004 to implement its anti-tamper policy, and interviewed officials from the Anti-Tamper Executive Agent's Office, the military services, other DOD components, and a cross-section of program offices. Since 2004, DOD has taken several actions to raise awareness about anti-tamper protection and develop resources that provide program managers with general information on its anti-tamper policy. These actions include developing a Web site with anti-tamper information and events, establishing an online learning module on anti-tamper protection, and sponsoring research on generic anti-tamper techniques. However, DOD lacks departmentwide direction for implementation of its anti-tamper policy. Without such direction, individual DOD components are left on their own to develop initiatives. For example, the Navy is developing a database that is intended to provide a horizontal view of what DOD components have identified as critical program information. While many officials we spoke with pointed to this database as a potential tool for identifying critical technologies that may need anti-tamper protection, the database is currently incomplete. Specifically, the Missile Defense Agency is not providing information because its information is classified at a level above what the database can support. Also, the Air Force is not currently providing information because not all commands have provided consent to participate. At the same time, program managers face challenges implementing DOD's anti-tamper policy--due largely to a lack of information or tools needed to make informed assessments at key decision points. First, program managers have limited information for defining what is critical or insight into what technologies other programs have deemed critical to ensure similar protection across programs. Determining whether technologies are critical is largely left to the discretion of the individual program manager, resulting in an uncoordinated and stovepiped process. Therefore, the same technology can be identified as critical in one program office but not another. Second, program managers have not always had sufficient or consistent information from the intelligence community to identify threats and vulnerabilities to technologies that have been identified as critical. The potential impact of inconsistent threat assessments is twofold: If the threat is deemed to be low but is actually high, the technology is susceptible to tampering; conversely, if the threat is deemed to be high and is actually low, an anti-tamper solution is more robust than needed.
Finally, program managers have had difficulty selecting sufficient anti-tamper solutions--in part because they lack information and tools, such as risk and cost-estimating models, to determine how much anti-tamper protection is needed. As a result, program managers may select a suboptimal solution. Given these combined challenges, there is an increased risk that some technologies that need protection may not be identified or may not have sufficient protection.
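To illustrate the kind of decision aid the report says program managers lack, the sketch below shows a simple criticality-versus-threat scoring scheme of the sort a risk model might provide. It is a hypothetical illustration only: the scales, thresholds, and example technologies are invented for this sketch and do not represent DOD's, the Missile Defense Agency's, or any service's actual model.

# Hypothetical sketch of a criticality-versus-threat scoring aid of the kind the
# report says program managers lack. Scales, thresholds, and examples are invented
# for illustration and do not reflect any actual DOD or service risk model.
from dataclasses import dataclass

@dataclass
class TechnologyAssessment:
    name: str
    criticality: int   # 1 (low) to 5 (mission-critical), judged by the program office
    threat: int        # 1 (low) to 5 (high likelihood of exploitation or reverse engineering)

def protection_level(assessment: TechnologyAssessment) -> str:
    """Map the product of criticality and threat to a notional level of anti-tamper protection."""
    score = assessment.criticality * assessment.threat
    if score >= 16:
        return "robust anti-tamper solution"
    if score >= 9:
        return "moderate anti-tamper solution"
    if score >= 4:
        return "basic protection; revisit as threat information improves"
    return "document rationale; no anti-tamper solution required"

if __name__ == "__main__":
    examples = [TechnologyAssessment("notional seeker algorithm", criticality=5, threat=4),
                TechnologyAssessment("commercial display driver", criticality=1, threat=2)]
    for tech in examples:
        print(f"{tech.name}: {protection_level(tech)}")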
Foodborne illnesses constitute a major public health problem in the United States. In May 1996, we reported that up to 81 million cases of foodborne illnesses and as many as 9,100 deaths associated with those illnesses are estimated to occur each year. While foodborne illnesses are often temporary maladies that may not require medical treatment, they can sometimes cause acute and chronic illnesses, such as kidney failure in infants and young children, stillbirths, and various types of arthritis. According to the U.S. Department of Agriculture’s Economic Research Service, in 1996, the estimated annual cost of medical treatments and productivity losses associated with these illnesses ranged from $6.5 billion to $37.1 billion. The actual number of foodborne illnesses, however, is unknown because many people who become ill do not seek treatment, and doctors may not associate the illnesses they do see with a food source or, if they do, report it to state or local health agencies. Even when a foodborne illness is reported, health agencies may not be able to trace the illness to a specific food or its origin. A growing percentage of the U.S. food supply is imported. The sheer volume of these imports, along with the difficulty in ensuring that they are safe, adds to the risk of foodborne illnesses. As shown in table 1.1, the import share of some commonly consumed foods is increasing. For example, in 1995, one-third of all fresh fruits consumed in the United States were imported. Some imported foods pose a significant risk of foodborne illness. They can introduce pathogens previously uncommon in the United States, such as new strains of Salmonella and the Cyclospora parasite. Imported foods may also contain pathogens, such as hepatitis A, that cannot be easily detected until illness breaks out. (App. I provides information on selected recent outbreaks of foodborne illness related to imported foods.) As the percentage of imported foods consumed in the United States increases, the importance of ensuring that these foods are safe increases as well. Ensuring food safety therefore cannot be achieved by focusing on domestic products exclusively. Two federal agencies have the primary responsibility for ensuring the safety of imported foods. The Food Safety and Inspection Service (FSIS) in the U.S. Department of Agriculture (USDA) is responsible for meat, poultry, and some egg products. The Food and Drug Administration (FDA) in the Department of Health and Human Services (HHS) is responsible for all other foods. Under the Federal Meat Inspection Act, the Poultry Products Inspection Act, and the Egg Products Inspection Act, as amended, FSIS works to ensure that products moving in interstate and foreign commerce are safe and wholesome, and correctly labeled and packaged. In calendar year 1997, FSIS used about 84 staff years, costing an estimated $3.2 million, to review about 118,000 import shipments and to determine that exporting countries met U.S. food safety requirements. Under the Federal Food, Drug, and Cosmetic Act, as amended, FDA works to ensure that domestic and imported food products are safe, wholesome, and properly labeled. In fiscal year 1997, FDA spent approximately 463 staff years (inspectors, laboratory staff, and support staff), at a cost of approximately $35.1 million, to ensure the safety of about 2.7 million imported food shipments. To assist these agencies, the U.S.
Customs Service (Customs) in the Department of the Treasury and HHS’ Centers for Disease Control and Prevention (CDC) provide a number of services, including referring imported shipments for inspection and providing information on outbreaks of foodborne illnesses. Customs is the first federal agency to screen imported products, including food imports, when they enter the United States. Enforcing laws for over 40 federal agencies, Customs has, among other duties, the responsibility for collecting revenues from importers and enforcing various customs and related laws. Customs cooperates with FDA and FSIS in carrying out their regulatory roles in food safety. CDC is the federal agency primarily responsible for monitoring the incidence of foodborne illness in the United States. CDC assists state and local health departments and other federal agencies in investigating outbreaks of foodborne illness, monitors information on foodborne illnesses, and conducts research related to these illnesses. Since 1992, we have frequently reported on the fragmented and inconsistent organization of food safety responsibilities in the federal government. These reviews have shown that inconsistencies and differences between the agencies’ approaches and enforcement authorities undercut overall efforts to ensure a safe food supply. To address this problem, we recommended the formation of a single food agency. In the fiscal year 1998 appropriations act for USDA, the Congress provided $420,000 for a study by the National Academy of Sciences on the need to reorganize the federal food safety system. FDA and FSIS are the two agencies responsible for ensuring that imported food shipments entering the United States are safe. Their systems for inspecting, testing, and approving the release of these food import shipments operate independently of each other. To ensure that FDA is notified of all imported food products under its jurisdiction, an importer must file an import notice and certain shipping information with Customs within 5 days of the shipment’s arrival at a U.S. port of entry and, for shipments valued over $1,250, post a bond to cover the goods for release. The import documents or electronic entry data identify the type of food product, the importer, foreign manufacturer, and country of origin. The bond, which covers potential duties, taxes, and penalties, may allow the importer to retain control of the shipment until FDA decides to inspect samples, test, or release it. If an importer fails to make an import shipment available for FDA’s inspection, fails to recondition the shipment, or fails to destroy or re-export it, as directed by FDA, Customs may collect penalties against all or part of the bond value. FDA relies on several sources of information to determine whether an imported food shipment will be inspected or tested or can be released into U.S. commerce. Among these sources are the following: FDA’s annual work plan. The annual work plan establishes, among other activities, the number of inspections and tests that each FDA district office is to conduct, which are derived from guidance in specific food programs. For example, the work plan for fiscal year 1997 set inspection and testing activities for 10 imported food programs, such as imported low-acid/acidified canned foods and imported seafood, in four major project areas related to food safety—Foodborne Biological Hazards; Pesticides and Chemical Contaminants; Molecular Biology and Natural Toxins; and Food and Color Additives.
FDA’s Import Alert Retrieval System database. This database contains a list of products that FDA automatically detains because the exporter or the specific food products have shown a history of violations in previous shipments. FDA will not approve the release into U.S. commerce of these automatically detained shipments until the importer shows that the product is not in violation, usually by providing the results of a private laboratory analysis. FDA disseminates information on automatic detentions to district offices through import alerts, which identify problem commodities and/or exporters, foreign firms, the country of origin, the reasons for detention, and the food safety risk. FDA’s Low-Acid Canned Food database. This database contains information on foreign processors of low-acid and acidified canned foods registered with FDA. Foreign processors wishing to export these foods to the United States must submit descriptions of their canning processes to FDA before it will issue a registration number for the firm and permit the entry of the firm’s shipments into U.S. commerce. The descriptions include the manufacturing methods used to prevent spoilage and contamination. FDA issues each foreign establishment a registration number to help track the firm’s registration and processing records. To assist FDA in reviewing all shipments, Customs’ computer system uses the information provided by the importer and FDA-developed screening rates to determine which shipments to automatically release into domestic commerce and which shipments to review further. FDA sets the screening rates using several sources of information, such as the annual work plan, compliance programs, type of product, and past violations of products or shippers. Shipments believed to pose minimal safety risks, such as candy and dried pasta products, are usually released automatically because they have low screening rates. FDA releases these shipments a few minutes after the importer enters the information. Other shipments, such as some seafood and low-acid canned foods, are released automatically less frequently or not at all because they pose greater potential risks. Customs forwards information on products that are not automatically released to FDA for further review, through FDA’s automated screening system, known as the Operational and Administrative System for Import Support (OASIS). This system was pilot-tested in 1992 and installed at all of FDA’s district offices by October 1997. (Before OASIS was developed, FDA manually tracked shipments through entry documents submitted by importers to Customs.) Along with the electronic information provided by the importer, FDA officials use the information in OASIS and other sources as needed—such as the databases with information on products to be automatically detained and registration numbers for foreign firms—to determine which samples of imported food shipments should be held for further action, such as inspection and/or laboratory testing, and which can be released without further review. FDA releases most shipments not requiring further review within 3 hours after the importer enters the information. FDA does not visually check or inspect these released shipments. FDA annually inspects or conducts laboratory analyses on a small percentage—currently less than 2 percent—of all types of imported food shipments. Inspections may occur at ports of entry and at warehouses or other business establishments.
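To illustrate the screening logic described above, a simplified sketch follows. It shows how importer-supplied entry data, product-level screening rates, an automatic-detention list, and canned-food registrations might interact to produce a disposition for each entry. The sketch is written in Python; the product names, rates, firm identifiers, and data structures are hypothetical illustrations and are not drawn from the actual OASIS or Customs software.

import random

# Hypothetical screening rates: the fraction of entries referred for FDA review.
# Low-risk products (e.g., dried pasta) get low rates; higher-risk products
# (e.g., swordfish, low-acid canned foods) are rarely or never released automatically.
SCREENING_RATES = {
    "dried pasta": 0.02,
    "candy": 0.03,
    "swordfish": 1.00,
    "low-acid canned food": 1.00,
}

IMPORT_ALERTS = {("swordfish", "FIRM-123")}   # hypothetical automatic-detention list
REGISTERED_CANNERS = {"FIRM-777"}             # hypothetical low-acid canned food registrations

def screen_entry(product, firm):
    """Return a disposition for one import entry (illustrative only)."""
    if (product, firm) in IMPORT_ALERTS:
        return "detain"          # released only if the importer shows the product complies
    if product == "low-acid canned food" and firm not in REGISTERED_CANNERS:
        return "refuse"          # unregistered canning process: barred from entry
    if random.random() < SCREENING_RATES.get(product, 0.10):
        return "refer to inspector"   # held for possible inspection or laboratory testing
    return "release"             # released automatically, without further review

print(screen_entry("dried pasta", "FIRM-001"))   # usually "release"
print(screen_entry("swordfish", "FIRM-123"))     # always "detain"

The point of the sketch is that the screening rate, set in advance by FDA, determines whether a low-risk entry is ever seen by a person; only entries that are detained, refused, or referred receive an inspector’s attention.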
If FDA decides to test an imported food shipment, an FDA inspector collects a sample from the shipment and sends it to an FDA laboratory for analysis. (FDA maintains a record of all laboratory test results in its Laboratory Management System database.) For samples found to comply with U.S. standards, FDA notifies Customs and the importer that the shipment can be released. For samples found to violate these standards, FDA notifies Customs and the importer that the shipment has been refused entry into U.S. commerce. Importers generally have three options for handling shipments refused entry. If FDA concurs, importers can recondition the shipment. Otherwise, they must either destroy or re-export the shipment. Whatever option the importer chooses, Customs officials are required to supervise proper disposition of the refused shipment. Before foreign firms can export meat and poultry to the United States, FSIS must have determined that the exporting country has a food safety system for these products that is equivalent to the U.S. system. Unlike FDA inspectors, FSIS inspectors visually check every imported shipment of foods under their jurisdiction for correct documentation, transportation damage, and correct labeling at FSIS-approved import inspection stations. FSIS conducts more intensive inspections and tests on a portion of the imported shipments—about 20 percent in 1997—to verify the effectiveness of the foreign food safety system. FSIS calls this process “reinspection” because the product has already passed inspection by the exporting country’s equivalent inspection system. Importers of FSIS-regulated products, like importers of FDA-regulated products, must file an import notice and a bond with Customs, to cover their goods for release, within 5 days of the date that a shipment arrives at a port of entry. Unlike importers of FDA-regulated products, however, these importers must hold shipments at FSIS-registered warehouses for FSIS’ inspection until these shipments are released into the domestic market or refused entry. FSIS inspectors enter the information provided by importers—such as country of origin, foreign manufacturer, exporting country’s health certification, and type of product—into a centralized computer system. This computer system, which was installed in 1979, is known as the Automated Import Information System (AIIS). The system scans the information it contains to determine if the country, plant, and product are eligible for import into the United States and whether the shipment will be allowed entry with only a visual check or be subjected to more intensive inspections and tests. AIIS uses computer-assigned screening procedures and individual plants’ performance histories to target shipments for more intensive inspection and testing. Under the system, one violation on the previous shipment of a particular product, such as boneless beef, triggers more intensive inspection and testing for the same type of product from the same foreign firm until FSIS has found at least 10 successive shipments that are free of violations and meet U.S. standards. Violations that generate more intensive inspections include chemical residues, bone fragments, misidentified products, and microbial contamination. If the imported products do not meet U.S. requirements, they are stamped “U.S. Refused Entry” and must be exported, destroyed, or converted to animal food. FSIS uses information on refused shipments to plan inspections in foreign countries.
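The reinspection rule described above, in which one violation triggers intensive inspection of every subsequent shipment of the same product from the same foreign firm until at least 10 successive violation-free shipments are found, amounts to a per-firm, per-product tally. The following Python sketch illustrates that rule only; the firm and product names are hypothetical, and AIIS’s actual data structures are not described in this report.

from collections import defaultdict

REQUIRED_CLEAN_STREAK = 10   # successive violation-free shipments needed to end intensive review

clean_streak = defaultdict(int)                 # consecutive clean shipments per (firm, product)
under_intensive_inspection = defaultdict(bool)  # whether the next shipment gets intensive review

def record_shipment(firm, product, violation_found):
    """Update a firm/product pair's inspection status after one shipment (illustrative)."""
    key = (firm, product)
    if violation_found:
        clean_streak[key] = 0
        under_intensive_inspection[key] = True      # every later shipment is inspected intensively
    else:
        clean_streak[key] += 1
        if clean_streak[key] >= REQUIRED_CLEAN_STREAK:
            under_intensive_inspection[key] = False  # the firm has reestablished its track record

record_shipment("FIRM-A", "boneless beef", violation_found=True)
print(under_intensive_inspection[("FIRM-A", "boneless beef")])   # True
for _ in range(10):
    record_shipment("FIRM-A", "boneless beef", violation_found=False)
print(under_intensive_inspection[("FIRM-A", "boneless beef")])   # False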
Concerned over recent foodborne illnesses associated with imported foods, the Chairman, Permanent Subcommittee on Investigations, Senate Committee on Governmental Affairs, asked us to review federal programs’ efforts to ensure the safety of imported foods. Specifically, this report discusses (1) the differences in the agencies’ authorities and approaches for ensuring the safety of imported foods and (2) the agencies’ efforts to target their resources. In addition, the report discusses weaknesses in controls over food imports. Our work focused on the two principal federal agencies with responsibility for ensuring the safety of imported foods—FDA and FSIS. We also conducted work at Customs and CDC. We reviewed agency and public information on foodborne illnesses and their relationship to imported foods. We also spoke with FDA, FSIS, and CDC officials about the link between foodborne illnesses and imported foods. We reviewed information from USDA to determine the current level of food imports into the United States, the share of imported foods in the U.S. diet, and the costs associated with foodborne illnesses. To examine the major authorities guiding the federal agencies responsible for imported food safety, we reviewed the federal laws and regulations governing imported foods. We also reviewed FDA’s and FSIS’ documents describing their procedures for ensuring the safety of imported foods, and we met with agency officials to discuss their approach to inspecting imports. We also discussed with FDA officials proposals to change FDA’s statutory authority and to expand the import inspection program. We reviewed various studies on the effectiveness of different inspection approaches for ensuring the safety of imported foods. We analyzed agency data on resources used, import entries reviewed, and inspection actions taken. To evaluate the approaches each agency uses to target imports for examination, we reviewed agencies’ documents describing their import review procedures and the use of automated systems to screen imports. We discussed these procedures and systems with FDA and FSIS officials. We observed and analyzed the agencies’ automated screening processes, physical inspections, and sample collections at FDA’s and FSIS’ field offices in California, Florida, New York, New Jersey, Texas, and Washington State. We visited three FDA laboratories to discuss and observe analysis procedures. We met with Customs officials in Laredo, Texas; Los Angeles and San Francisco, California; Miami, Florida; Port Elizabeth, New Jersey; and Seattle, Washington, to discuss and observe how FDA and FSIS work with Customs to handle the initial review of imported foods. In the course of this review, we discussed and reviewed activities related to controls over imported foods in the field offices we visited. These activities included FDA’s reliance on laboratory analysis provided by importers and the agencies’ practices and procedures for (1) controlling imports before their release into domestic commerce, (2) ensuring that refused entries are properly disposed of, and (3) levying penalties against violators. We performed our work from June 1997 through April 1998 in accordance with generally accepted government auditing standards. FSIS shares the burden of ensuring the safety of the imported foods it regulates with the exporting country, while FDA primarily relies on inspections at the U.S. ports of entry to determine the safety of the imported foods under its jurisdiction.
Before it will allow a country to export meat and poultry to the United States, FSIS is required to determine that the exporting country has a food safety inspection system for these products that is equivalent to the U.S. system. By ensuring that countries exporting meat and poultry to the United States have adopted practices that protect their products from contamination, FSIS can devote its energies to verifying the efficacy of these exporting countries’ systems and thereby use its inspection resources more efficiently. FDA does not have the authority to impose such a requirement on foreign countries for fish, fruits, vegetables, and the other foods for which it is responsible. Lacking the authority to ensure that exporting countries are adopting safe practices, FDA has to rely on labor-intensive inspections of imported products at the port of entry as its primary line of defense against the entry of unsafe foods. Because FDA is currently able to inspect less than 2 percent of the foods imported under its jurisdiction, there is reason to question whether this approach adequately protects U.S. consumers. Providing FDA with authority similar to FSIS’ would allow it to leverage its resources and provide greater assurance that the imported foods it is responsible for are safe. Federal laws on meat and poultry imports require that the products shipped to the United States meet U.S. standards for safety and wholesomeness, and comply with U.S. labeling and packaging requirements. Before a country can export meat and poultry to the United States, it must demonstrate that it has a food inspection system that is at least equivalent to the U.S. system. That is, the exporting country’s inspection system must include, among other components, competent, qualified inspectors with the authority to enforce national food safety laws and regulations; administrative and technical support for these inspectors; and the implementation of inspection, sanitation, quality, microbiological, and residue standards equivalent to those applied to U.S. products. In implementing this requirement, FSIS requires exporting countries to apply for eligibility to export meat and poultry products to the United States, to supply health certificates attesting to the safety of the product with each exported item, and to submit exports for inspection at the U.S. border to verify the effectiveness of the foreign inspection system. FSIS staff visit foreign countries and firms annually to verify the effectiveness of their systems. In 1997, for example, FSIS staff visited 30 of the 37 eligible exporting countries to verify that the countries had changed their systems to include new safety procedures required for all domestic and foreign firms. These new procedures, called Hazard Analysis and Critical Control Point (HACCP), build science-based food safety controls into food production systems. Food firms incorporate controls into processing steps, maintain records of compliance with controls, and are subject to audits of their records to verify the program’s effectiveness. As of January 1, 1998, FSIS had determined that 37 countries have food inspection systems equivalent to the United States’ and are eligible to export meat and/or poultry products to this country. Products from countries not on the list of eligible countries are automatically refused entry. FDA does not have similar authority to accept only foods from countries with equivalent safety inspection systems.
The Federal Food, Drug, and Cosmetic Act, which covers most food items other than meat and poultry, requires imported products to comply with U.S. standards for purity, wholesomeness, safety, and hygiene. It does not, however, require the exporting countries to have inspection systems equivalent to the U.S. system. Accordingly, FDA must, with few exceptions, rely on inspections and tests of selected imported foods at the U.S. port of entry as the only defense against unsafe foods entering the United States. For a few products (infant formula and low-acid and acidified canned foods), FDA may request that foreign exporting firms grant FDA inspectors access to their plants, but these inspectors actually conduct few foreign plant inspections. In fiscal year 1996, FDA planned 90 such inspections but carried out only 9. FDA planned 37 such inspections in fiscal year 1997, carrying out 29. Although FDA cannot currently require countries to demonstrate that they have equivalent inspection systems before granting them authority to export to the United States, it can negotiate voluntary agreements with individual countries to establish equivalent inspection systems. For example, in 1997, FDA began an intensified effort to develop equivalency agreements, on a voluntary basis, with the major seafood exporting countries, in response to new regulations requiring all seafood producers selling to the U.S. market to use new HACCP procedures. However, FDA officials said the agency has not strongly pursued equivalency agreements on a broad scale because the effort would require considerable resources to review foreign countries’ food safety systems. In addition, a single agreement with each country might not be adequate because many countries have multiple food safety programs for different food products or even for different stages of preparation for the same product for export. For example, one foreign agency may be responsible for the safety of fresh produce, while another agency may be responsible for processed produce. Nevertheless, FDA has described the expected benefits of equivalence determinations as follows: “[W]here equivalence has been determined to exist . . . the work of the foreign regulatory authority should serve to help ensure the safety of imports for U.S. consumers. Since the foreign inspection system will have been found to be equivalent to FDA’s inspection system, FDA will be able to rely on the results for the foreign inspection system. . . . As equivalence is achieved, and agreements are reached recognizing the achievement of equivalence, trade is likely to flow more freely because of the reduced need by importing countries to engage in resource-intensive sampling and examination of products being offered for entry from countries with equivalent systems. For the United States, equivalency agreements will also mean that FDA will be able to target the limited resources it has for imports towards products from countries that have not been determined to be equivalent. Thus, FDA will be able to use its resources more efficiently and effectively.” In October 1997, as part of the administration’s food safety initiative, the President directed FDA to seek new authority to require equivalency in food safety systems. In response, FDA developed proposed legislation for new discretionary authority that would allow the agency to prohibit imports of some foods, unless the exporting country demonstrates that the food safety system and conditions in the exporting country achieve the same level of protection as that provided for food prepared and packed in the United States.
Legislation was introduced in the House of Representatives in November 1997 and in the U.S. Senate in March 1998, and is under consideration. The legislation would allow FDA to determine that an imported food is adulterated, and thus cannot be imported, if the foreign system, conditions, or measures for preparing or packing the food product are not equivalent to the level of protection required for similar foods produced in the United States. FSIS uses its equivalency authority to shift the primary responsibility for food safety to the exporting country. Rather than focusing on resource-intensive port-of-entry inspections, FSIS emphasizes reviews of exporting countries’ compliance with U.S. requirements. In contrast, FDA relies on port-of-entry inspections to ensure that imported foods are safe. This approach does little to verify the safety of all imported foods because it does not account for the conditions under which the products were processed and packed. The efficacy of port-of-entry inspections therefore depends on inspecting an adequate sample of imports, an objective FDA has not been able to meet, particularly as import volumes have increased. In addition, inspections of imported foods may be insufficient to determine whether contamination has occurred. For example, both visual inspections and laboratory tests are inadequate to detect Cyclospora, according to CDC. By requiring exporting countries to assume responsibility for the safety of meat and poultry products sent to the United States, FSIS can extend the coverage and enhance the effectiveness of its inspection resources. In 1997, FSIS had about 12 staff involved in reviewing the continuing eligibility of foreign countries to export their meat and poultry products to the United States through document reviews and regular inspections in those countries. It also deployed about 75 inspectors to (1) ensure that each imported shipment had a health certificate from the exporting country, (2) visually check every shipment for transportation damage and accurate shipping labels, and (3) conduct intensive inspections and tests on a sample of products as a way of verifying the performance of the exporting country’s system. This approach allows FSIS to transfer the primary food safety responsibility to the exporting country. FSIS considers the eligible foreign country’s inspection system—not its own inspection at the port of entry—to be the primary control for ensuring that imported meat and poultry products meet U.S. standards. If a country fails to maintain an equivalent safety system, FSIS can suspend the eligibility of that country to export FSIS-regulated products to the United States. FDA’s reliance on inspecting imported foods at the U.S. port of entry provides weak assurance that the foods it allows to enter the United States are safe. According to the United Nations’ Food and Agriculture Organization, testing products at the port of entry involves a concentration of inspection resources on the imported product itself and is an attempt to compensate for a lack of knowledge about the processing, hygiene, and sanitation practices of the producer. In addition, FDA’s draft guidance on equivalency criteria states that, by itself, end-product inspection and testing at the port of entry cannot be relied upon to provide adequate protection because assurance that food will not present unacceptable risks requires effective processing controls that are periodically inspected and verified by a regulatory authority.
Similarly, a 1991 report by the Advisory Committee on the Food and Drug Administration called point-of-entry inspections an anachronism. The process of inspecting a final product to determine if it conforms to standards and of rejecting those that do not has been “totally discredited,” according to the committee, as a means of ensuring manufacturing quality or regulatory compliance for domestic products. Likewise, in 1994, we reported that reliance on end-product testing was an ineffective, resource-intensive, and statistically invalid approach to ensuring that imported foods are not contaminated with unsafe levels of chemicals. We recommended that the Congress change the federal government’s role in ensuring food safety by moving away from end-product testing to an approach that prevents contamination from occurring, such as the use of HACCP in production processes. In addition, we suggested the Congress consider requiring that all imported foods be produced under equivalent food safety systems. HACCP is now required for some products, such as seafood, and the Congress is considering legislation to provide FDA with equivalency authority. The capability of FDA’s inspection approach to protect consumers from unsafe products has been further called into question by the agency’s inability to keep pace with rising import levels. Between 1992 and 1997, the number of imported food entries more than doubled, from 1.1 million to 2.7 million. As workloads increased, resources devoted to inspecting imported foods declined by 22 percent, from 328 staff years for inspectors in 1992 to 257 staff years for inspectors in 1997; thus, the average number of annual food shipments each inspector was responsible for increased from about 3,350 to about 10,500. As a result of these and other factors, FDA’s inspection coverage of imported food entries has fallen from an estimated 8 percent of food entries in fiscal year 1992 to 1.7 percent in fiscal year 1997. Of the 2.7 million total food entries in 1997, 56 percent were released after FDA’s automated screening system reviewed the import information, 42.3 percent were released after an inspector reviewed electronic information or import documents, and the remaining 1.7 percent were held for inspection. Of the 1.7 percent held for inspection (46,295 entries), FDA conducted laboratory analyses on 16,048 entries, or 0.6 percent of the total number of food entries. (See table 2.1.) In contrast to the growing demands placed on FDA’s inspection resources, FSIS’ import inspectors have a more manageable and stable inspection burden. The number of import entries per FSIS inspector rose from about 1,236 in calendar year 1992 to about 1,645 in 1997. In addition to visually checking every shipment, FSIS performed more intensive inspections on about 20.2 percent of the 118,000 entries in 1997, somewhat less than its rate of 26.9 percent in 1992. FSIS also visited 30 countries and conducted 336 foreign plant inspections in 1997 as part of its ongoing equivalency reviews. Given its lack of authority to require equivalency in foreign food safety systems, FDA relies primarily on port-of-entry inspections and tests to ensure the safety of imported foods. Because such port-of-entry inspection and testing has been widely discredited as an effective means for ensuring safety, FDA cannot realistically ensure that unsafe foods are kept out of U.S. commerce.
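The workload comparison above follows directly from the figures cited in this chapter. The short Python calculation below reproduces the rounded numbers in the text; no figures beyond those already cited are assumed.

# FDA import workload figures cited in this chapter
entries_1992, entries_1997 = 1_100_000, 2_700_000
inspector_years_1992, inspector_years_1997 = 328, 257

decline = (inspector_years_1992 - inspector_years_1997) / inspector_years_1992
print(f"Decline in inspector staff years: {decline:.0%}")                            # 22%
print(f"Entries per inspector, 1992: {entries_1992 / inspector_years_1992:,.0f}")    # about 3,350
print(f"Entries per inspector, 1997: {entries_1997 / inspector_years_1997:,.0f}")    # about 10,500

held_for_inspection, lab_analyses = 46_295, 16_048
print(f"Share held for inspection, 1997: {held_for_inspection / entries_1997:.1%}")  # 1.7%
print(f"Share given laboratory analyses, 1997: {lab_analyses / entries_1997:.1%}")   # 0.6%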
Even if FDA could inspect more shipments at the ports of entry than it currently does, such an approach would still lack assurance that imported foods are picked, processed, and packed under sanitary conditions. An equivalency requirement would allow FDA to shift the primary burden of ensuring safety to the exporting country while achieving better assurance that food production and processing is safe and sanitary. To strengthen FDA’s ability to ensure the safety of imported foods, we recommend that the Congress require all food eligible for importation to the United States, not just meat and poultry, be produced under equivalent food safety systems. In commenting on a draft of this report, FDA agreed that it needs equivalency authority to control the safety of imported foods, but it did not agree that equivalence should be a requirement for the entry of imported foods. FDA believes the authority should be discretionary, not mandatory, so that equivalency could be applied where it is most appropriate without disrupting trade. We believe that equivalency should be mandatory for all imported foods and could be implemented in a manner that would not unnecessarily or unfairly disrupt trade. Mandatory authority to require equivalency would address weaknesses in FDA’s port-of-entry inspection approach, enable FDA to leverage its staff resources by sharing the responsibility for food safety with the exporting countries, and compel FDA to take a proactive approach in preventing food safety problems instead of requiring equivalency after problems are identified. The Congress could provide reasonable time frames that would allow equivalency to be implemented over a number of years. FDA and CDC provided technical comments that we incorporated where appropriate. FSIS and FDA are not deploying their inspection resources to maximum advantage. With respect to FSIS, it is misdirecting some of its resources by targeting its inspections on the basis of all past violations—most of which are less concerned with food safety, such as missing shipping labels—rather than by focusing on violations directly related to food safety, such as contamination and decomposition. As a result, FSIS’ resources are not being focused on imported foods posing the greater safety risk. With respect to FDA, its system for identifying shipments for inspection is hampered by work plans that do not set clear priorities for inspectors in making selection decisions, a failure to make relevant health risk data readily available to its inspectors to help them select shipments to inspect, and a failure to ensure that importer-provided information on incoming shipments is accurate. Nationwide, FDA also cannot be assured that its limited resources are consistently targeting shipments posing the greater health risks. FSIS’ Automated Import Information System (AIIS) targets shipments for more intensive inspections and testing mainly on the basis of the violation history associated with the foreign firm producing the imported product. This overall violation history may be misleading, however, because AIIS treats all violations equally, except for transportation damage, in determining how much inspection attention will be provided to an importing firm’s products. 
As a result, violations not usually posing a direct health risk to consumers—such as a missing shipping label, incorrect weight, and misidentified product—could trigger a requirement for the agency to inspect every shipment from a foreign firm until the firm reestablished a good track record. In 1996, about 86 percent of the refused shipments, excluding those refused for transportation damage, involved violations that were not directly related to health risks. These violations triggered a series of inspections on subsequent shipments of the same product from the same exporting firm until at least 10 consecutive shipments were found to be in compliance. When limited resources are targeted in this fashion, fewer resources are available for products posing the greater health risk. FSIS stores the test results associated with previous inspections of imported foods—data that would help identify shipments with the highest health risks—in AIIS, its automated screening system. However, the system does not use this information to identify patterns of violations, such as firms or countries with repeated problems, that are directly related to food safety. FSIS could further improve its automated screening system if it developed information on patterns of violations, which would allow it to determine whether Salmonella contamination, for example, was a recurrent problem in a particular country or exported product and increase its inspection frequencies for such shipments. In addition, FSIS could work with the exporting country to determine the extent of the problem and to take actions to correct it. FDA’s system for identifying shipments that should be targeted for inspection is undermined by problems in three key areas. First, FDA’s annual work plan, which contains the number of inspections and tests each FDA district is to conduct, is not realistic. FDA inspectors attempt to use these numbers to guide their decisions on which products to inspect and test. Second, FDA’s inspectors cannot readily obtain available health risk data that would help them choose the shipments likely to pose health risks. Third, FDA does not act to ensure that importer-provided information, which its screening system relies on to identify a shipment’s contents, is correct. As a result of these problems, FDA’s inspectors at ports of entry, working under significant time pressures to move shipments quickly into domestic commerce, make subjective decisions that may not target the riskiest shipments. FDA’s annual work plan sets the number of activities, such as the number of inspections and tests, each FDA district is to conduct for the 10 specific food programs that cover imports. These programs, such as seafood, imported low-acid canned food, or imported cheese, are consolidated under the four major project areas related to food safety—Foodborne Biological Hazards, Pesticides and Chemical Contaminants, Molecular Biology and Natural Toxins, and Food and Color Additives. For example, for FDA’s Seattle District, the fiscal year 1997 work plan called for 165 inspections and 583 laboratory tests of imported seafood products. For imported seafood products nationwide, the work plan called for 2,500 inspections and 9,432 laboratory tests. Each day, FDA inspectors must decide which shipments of food imports to inspect. The inspectors at the locations we visited typically attempt to select shipments on the basis of the work plan’s targets.
However, regional and district FDA officials told us that the numbers for inspections and tests contained in the work plan were not realistic because they did not take into account the time required to investigate emergencies and consumer complaints, which invariably occur. In 1997, for example, FDA spent 6,274 hours investigating the outbreaks associated with Guatemalan raspberries—time not accounted for in the work plan. As a result, FDA inspectors are not able to complete the work plan and compliance program activities and therefore rely on their judgment when determining what to inspect and test. Meeting the annual work plan targets is a problem nationwide. Table 3.1 shows the degree to which FDA inspectors fell short of completing the number of planned inspections and tests for fiscal years 1996 and 1997 in the four areas related to food safety. For example, in fiscal year 1997, 23,000 inspections and 19,432 laboratory analyses were planned for foodborne biological hazards. However, FDA was only able to conduct 11,587 inspections and 12,874 analyses. As a result, the inspections and tests conducted varied significantly among project areas. Inspectors use their own judgment in making decisions on inspections and laboratory analyses. We found that this judgment is highly subjective. For example, one inspector told us he believed one country did not have sanitary facilities and therefore assumed that all food products imported from that country are contaminated with filth. During our visit, he routinely selected samples of food from that country for filth tests, although the laboratory staff told us filth tests were not a high priority and, in fact, they sometimes did not conduct the tests because they already had a backlog of tests to conduct. Therefore, to the extent that the laboratory analyses were not conducted, the inspector wasted time collecting the samples. FDA retains information in a number of databases on the health risks presented by certain foods from a particular exporting country and/or an exporting company. These data include the results of the laboratory tests that FDA conducts on imported foods and lists of foreign products to be detained because they have a history of violations. In addition, FDA maintains lists of foreign plants that have registered with FDA their processes for producing low-acid canned foods and acidified canned foods. If these products have not been produced with a registered process, they are banned from entry. With respect to laboratory tests, FDA has not integrated its laboratory database with its OASIS system, the system used to screen imports. Therefore, inspectors do not have available the results of prior laboratory tests when considering possible actions to inspect imported products. FDA plans to integrate the laboratory database with OASIS in fiscal year 1998 to make better use of staff resources in targeting defective and dangerous products. Furthermore, FDA inspectors do not have ready access to some useful data in OASIS when deciding which products to inspect. For example, inspectors can obtain information on prior violations by foreign plants or countries, but the process for doing so can be cumbersome and time-consuming. To obtain these data, inspectors have to close their OASIS database and open another database. We observed two inspectors going through this process—which took 3 to 10 minutes per shipment—at a time when one of these inspectors had to process as many as 200 shipments per day. 
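Two of the figures above help explain why inspectors fall back on judgment and memory. The short Python calculation below, using only the numbers cited in this chapter, shows both how far inspections and analyses fell short of plan in the foodborne biological hazards project area and how quickly the database lookups described above would consume an inspector’s day.

# Completion rates for the foodborne biological hazards project area, fiscal year 1997
planned_inspections, actual_inspections = 23_000, 11_587
planned_analyses, actual_analyses = 19_432, 12_874
print(f"Inspections completed: {actual_inspections / planned_inspections:.0%}")    # about 50%
print(f"Laboratory analyses completed: {actual_analyses / planned_analyses:.0%}")  # about 66%

# Time needed to check prior-violation data by switching databases
minutes_low, minutes_high = 3, 10
shipments_per_day = 200
print(f"Lookup time per day: {minutes_low * shipments_per_day / 60:.0f} "
      f"to {minutes_high * shipments_per_day / 60:.0f} hours")                     # 10 to 33 hours

At 3 to 10 minutes per lookup, checking even 200 shipments would take 10 to 33 hours, which helps explain why inspectors rarely switch databases for every entry.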
Not all inspectors will change databases to look for this information. Instead, inspectors told us they often rely on their memory of the information in the database or notes. Similarly, to obtain information on foreign registrations, inspectors have to close OASIS and open the registration database. Again, some inspectors find the process time-consuming and accordingly often choose to rely on memory. Because inspectors have these difficulties in obtaining needed data on health-related risks and are under time pressures, they may select samples on the basis of incomplete information. FDA has recognized the problems associated with these difficulties in obtaining health risk data. In a 1993 hearing on food imports, FDA’s Director of the New York District Office stated that FDA tries to funnel its limited inspection resources towards the imports that pose the greater risk and have the greatest likelihood of being adulterated or misbranded. He added that including information, such as the data discussed above, in OASIS would be very useful in helping FDA inspectors make daily decisions on which import shipments to inspect and test. Two years later, in a 1995 FDA internal review, FDA’s automated system was criticized for not providing inspectors with a means for accessing other FDA databases, such as the FDA Import Alert Retrieval System database. The review said that such access would improve inspectors’ efficiency in identifying shipments that need to be detained. According to FDA officials, the agency received money to make these improvements in the screening system in fiscal year 1998 and will begin integrating the databases (Laboratory Management System, FDA Import Alert Retrieval System, and Low-Acid Canned Food database) with OASIS this year. To facilitate the entry of imported foods under FDA’s jurisdiction, importers enter data electronically on incoming shipments into OASIS after demonstrating competency with the system. Electronic filers that do not routinely have to provide actual shipping documents to FDA are called paperless filers. FDA inspectors rely on this electronic information in making their selections for inspections and laboratory analyses. To ensure the accuracy of this information, FDA periodically requests the paperless filers to provide shipping documents on a sample of entries, and FDA then compares these documents against the electronically provided information for errors. Errors can include incorrectly identifying a product as exempt from FDA’s regulation, entering the wrong FDA product code, or listing the wrong country of origin. Electronic filers exceeding the allowed 10-percent error rate may be removed from paperless status. However, FDA records show that no corrective actions have been taken to remove even the most error-prone paperless filers from paperless status. According to a January 1998 FDA survey, 306, or 14.5 percent, of the 2,114 paperless filers audited had error rates of 10 percent or greater, but none of these filers were removed from paperless status. For example, the paperless filer error rates for the New York District were 10 percent or more in 133 of the 251 audits conducted, but no electronic filers were removed from paperless status. Similarly, as of November 1997, none of the 16 electronic filers at the Miami field location with error rates of 10 percent or greater were removed from paperless filer status.
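The audit process FDA describes, comparing a sample of a paperless filer’s electronic entries against its shipping documents, computing an error rate, and flagging filers at or above the 10-percent threshold, is straightforward to express. The Python sketch below is a hypothetical illustration of that check, not FDA’s actual audit software; the record fields and sample data are invented.

ERROR_RATE_THRESHOLD = 0.10   # filers at or above this rate may lose paperless status

def audit_filer(electronic_entries, shipping_documents):
    """Compare sampled electronic entries against shipping documents (illustrative).

    Each record is a dict with 'product_code' and 'country_of_origin' fields.
    Returns the error rate and whether the filer is a candidate for removal.
    """
    errors = sum(
        1 for entry, document in zip(electronic_entries, shipping_documents)
        if entry["product_code"] != document["product_code"]
        or entry["country_of_origin"] != document["country_of_origin"]
    )
    rate = errors / len(electronic_entries)
    return rate, rate >= ERROR_RATE_THRESHOLD

sampled_electronic = [{"product_code": "spaghetti", "country_of_origin": "IT"},
                      {"product_code": "spaghetti", "country_of_origin": "IT"}]
sampled_documents  = [{"product_code": "cappelletti", "country_of_origin": "IT"},
                      {"product_code": "spaghetti",   "country_of_origin": "IT"}]
rate, candidate = audit_filer(sampled_electronic, sampled_documents)
print(f"Error rate: {rate:.0%}; candidate for removal from paperless status: {candidate}")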
In fact, the filer with the highest error rate—20 percent—has remained in paperless status without any follow-up audits since April 1996. FDA officials at three locations we visited believed the error rates were high primarily because the product codes are complex for the importers to learn and use. In one case, for example, we found that an importer had incorrectly entered the code for spaghetti, a form of pasta, instead of cappelletti, another form of pasta. The failure to take corrective actions to remove filers from paperless status, as found in the January 1998 FDA survey, could undermine FDA’s decisions on which shipments to select for investigating food safety risks. Importers aware of FDA’s inaction could evade FDA’s inspections by incorrectly describing the contents of a shipment. For example, an FDA inspector at one port of entry said that, while most errors are accidental, he has encountered problems with importers who appeared to deliberately avoid FDA’s inspections by using the wrong product code for swordfish, which is automatically held until the importer provides laboratory test results demonstrating that the product complies with U.S. standards. By entering a code for another type of fish, the importers hope that the on-screen review will not detect a discrepancy and the shipment will not be selected for inspection. Following an FDA investigation in 1993, an importer was prosecuted for deliberately misrepresenting imported foods. The importer was found guilty on 138 counts, mostly of misrepresenting the source of seafood in an attempt to avoid FDA’s automatic detention. FDA inspectors told us that when they encounter entry errors during evaluations, they inform the importer of the errors and offer help on entering the correct information. Even when these inspectors occasionally find incorrect entries that appear to be deliberate misrepresentations, they work with the importer to correct the entry problems and, in most cases, do not investigate the suspect filers further. They said that they view their role as teachers, not investigators. Given the small fraction of import entries that FDA and FSIS can inspect, the agencies need to make the best use of all the information available to help select the right shipments to review. Both agencies have information to identify relationships between foodborne pathogens and specific food products, which would be a good indicator of the food safety risks associated with import shipments, but neither agency has used the information effectively or efficiently. As a result, FSIS is using its limited inspection resources to conduct inspections and tests triggered by violations that may not be related to safety. In addition, FDA’s limited inspection resources may not be targeted to the riskiest shipments for several reasons: FDA field offices rely on numerical inspection targets that are not closely linked to the risk-based priorities identified in the compliance programs, which impedes inspectors’ effectiveness in selecting imported food shipments for inspections and tests; key information on firms and products is not easily accessible and thus may be overlooked; and a shipment’s contents may be misrepresented.
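The kind of pattern analysis that would make better use of this information amounts to aggregating stored laboratory results by country, foreign firm, and product so that recurring safety-related findings stand out, as the recommendation that follows proposes. The Python sketch below shows one way such a summary might be built; the field names, sample data, and flagging threshold are hypothetical and are not drawn from AIIS.

from collections import Counter

# Hypothetical stored laboratory results: (country, firm, product, finding)
test_results = [
    ("Country X", "FIRM-1", "boneless beef", "Salmonella"),
    ("Country X", "FIRM-2", "boneless beef", "Salmonella"),
    ("Country X", "FIRM-1", "boneless beef", "no violation"),
    ("Country Y", "FIRM-3", "poultry parts", "chemical residue"),
]

SAFETY_FINDINGS = {"Salmonella", "chemical residue", "microbial contamination"}
FLAG_THRESHOLD = 2   # hypothetical: flag a pattern after two safety-related findings

patterns = Counter(
    (country, product, finding)
    for country, firm, product, finding in test_results
    if finding in SAFETY_FINDINGS
)

for (country, product, finding), count in patterns.items():
    if count >= FLAG_THRESHOLD:
        print(f"Recurring {finding} in {product} from {country}: "
              f"increase inspection frequency and follow up with the exporting country")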
To help FSIS better identify the risks associated with specific foods and thereby further improve the Automated Import Information System’s usefulness in selecting high-risk products to inspect, we recommend that the Secretary of Agriculture direct the Administrator, FSIS, to modify the Automated Import Information System so that the system can identify patterns between laboratory test results and specific foods, foreign firms, and exporting countries. To provide more accurate and accessible information to FDA and thus minimize inconsistencies in inspectors’ subjective decisions, we recommend that the Secretary of Health and Human Services direct the Commissioner, FDA, to clarify and emphasize the guidance inspectors should use when making decisions on which shipments to inspect and test; modify the Operational and Administrative System for Import Support (OASIS) so that (1) it automatically reviews the Import Alert and Low-Acid Canned Food databases and recommends appropriate actions to inspectors and (2) inspectors can consider previous laboratory test results, which are stored in the Laboratory Management System database, in choosing shipments for inspections and tests; and ensure that the field offices are taking appropriate corrective action, when warranted, against importers that repeatedly enter incorrect shipping information into the OASIS database. In commenting on a draft of this report, FSIS agreed with our recommendation. The agency stated that it will be evaluating its port-of-entry inspection procedures and its automated systems, and will consider our recommendation during this evaluation. FDA agreed with our recommendation to link three databases—the Import Alert database, the Low-Acid Canned Food database, and the laboratory database—to its automated import screening system, OASIS, for use by inspectors when choosing shipments for inspections and tests. FDA stated that the automatic review of the Import Alert database and the Low-Acid Canned Food database is under development. The agency stated further that it is developing software that will allow inspectors to review previous laboratory test results through OASIS. FDA expects all these improvements will be completed and operating by the end of fiscal year 1998. FDA also agreed with our recommendation to ensure that district offices are taking appropriate corrective action against importers that repeatedly enter incorrect shipping information in OASIS. FDA also generally agreed with the report’s recommendation regarding its import screening system and described planned actions to improve the efficiency of the system and to take appropriate corrective actions in its electronic filer program. FDA did not agree with our characterization of its system for communicating inspection priorities to its inspectors or the associated recommendation in our draft report to improve this system. Specifically, FDA said that its annual work plan and compliance programs provide sufficient guidance to inspectors to help them make decisions about which shipments to inspect. We continue to believe that the priority-setting guidance provided to inspectors, even as it is described in FDA’s comments, is confusing and inconsistent. As a result, inspectors may not be selecting shipments to inspect that pose the greater food safety risk to consumers.
We have, however, modified our recommendation to better reflect the nature of the problem and to give FDA more flexibility to address it. We also incorporated technical comments from FSIS and FDA where appropriate. In addition to the problems associated with its automated system for selecting food shipments for inspection, FDA has several weaknesses in its controls over imported products that have enabled some importers or their representatives to sell unsafe foods in the United States. First, FDA’s system for automatically detaining suspicious products pending testing to confirm their safety may be easily subverted because FDA does not maintain control over the testing process. By allowing importers to choose their own laboratories to select samples and perform tests, FDA opens itself to the possibility of approving the entry of unsafe products on the basis of falsified test results. Second, FDA does not maintain control over products before releasing them into U.S. commerce. As a result, some importers have sent products to grocery stores before FDA has approved their release, and others have not returned and properly disposed of products that FDA has conditionally released but called back after testing showed them to be contaminated. In addition, importers that violate FDA’s and Customs’ controls frequently are not penalized, which provides little deterrent against such actions. FDA’s system for controlling the importation of unsafe foods has a history of circumvention by certain unscrupulous importers. For example, we reported in 1992 that about 10 importers had repeatedly distributed pesticide-adulterated shipments in disregard of FDA orders; in total, these importers distributed 73 shipments known to have been adulterated. In all, about a third of the adulterated shipments that were identified reached the market. A 1997 investigation by Customs confirmed that importers continue to evade import controls. Recognizing problems in controlling imported shipments, Customs launched a special operation at the port of San Francisco in 1997, known as Operation Bad Apple. Customs officials told us that of the shipments FDA ordered returned to Customs for destruction or reexport, 40 percent were never redelivered, and for half of those that were redelivered, other products had been substituted for the original contaminated products. Thus, 70 percent of the shipments ordered returned because they were unsafe presumably entered into commerce, contrary to FDA’s orders. FDA and Customs officials developed a joint task force in November 1997, called CLEAN (Closing Loopholes to Ensure Acceptable Nutrition), to address the problems identified in Operation Bad Apple. FDA’s automatic detention system is subject to evasion by unscrupulous importers. FDA automatically detains imported foods that, on the basis of prior violations, have a high potential for being contaminated. In these cases, rather than destroying or exporting the products, importers have the option of presenting the results of a private laboratory test to show that the detained products meet U.S. standards. However, FDA generally does not control the selection of the samples tested and cannot restrict the choice of the laboratories used to conduct the tests. According to FDA, the agency lacks explicit authority to specify the laboratories importers must use. As a result, importers can choose the laboratory, which selects the sample and conducts the analysis.
While FDA expects these laboratories to comply with the agency’s written guidance for collecting samples and performing tests, the agency generally does not control the selection of samples or witness laboratory analyses. This approach exposes FDA to the possibility that it will accept falsified test results or results from tests using improperly selected samples as a basis for releasing products into domestic commerce. In fiscal year 1997, FDA detained 7,874 import shipments automatically. While FDA does not keep specific records, FDA officials said most shipments detained automatically are released after importers present their private laboratory results. Customs and FDA officials are concerned about the accuracy of the private laboratories chosen by importers to select and analyze samples of imported foods that are on automatic detention status. Some Customs inspectors voiced concerns that, to make their products appear to meet U.S. requirements, some unscrupulous importers share shipments that have already been tested and proven to be in compliance for sampling purposes—a practice referred to as “banking.” FDA inspectors were also concerned about the uncontrolled sampling and testing of imported foods under FDA’s jurisdiction. To verify the accuracy of tests performed by private laboratories, FDA laboratories occasionally select samples from the same shipments and perform identical tests. Officials at two field locations we visited told us that the FDA laboratories, in performing these tests, discovered violations that the private laboratory tests did not identify. FDA is further increasing its reliance on the use of private laboratories for analyzing imported foods normally tested by FDA laboratories. Specifically, according to FDA’s Procedures Manual, increased scrutiny of import commodities and limitations on FDA resources are likely to continue; therefore, FDA will expedite its enforcement efforts by using scientifically sound data provided by private laboratories to determine if products should be allowed entry. In this regard, FDA is testing a new process to allow seafood importers the option of having a private laboratory select and analyze seafood samples for FDA’s routine review of imported seafood. Under a pilot program at the Los Angeles District Office, if FDA selects the shipment for laboratory analysis, it will identify the product lots and sample sizes and specify the type of analysis to be conducted, and the importer will choose the laboratory that will collect the samples and conduct the analysis. While FDA is generally increasing its reliance on the test results of samples selected and analyzed by private laboratories, it has recognized that the practice of allowing importers to select their own product samples for testing is questionable. In this regard, importers of Guatemalan snow peas must now use third-party companies to select the laboratory samples because FDA test results have differed historically from the results of the importers’ selected laboratory. In response to an internal report on the use of private laboratories, FDA approved new guidelines in March 1998 on the review of test results prepared by private laboratories. According to the guidelines, sample selection and laboratory analysis should be conducted by an independent party. Imported foods under FDA’s jurisdiction, including foods that are of concern or are proven to be adulterated, are sometimes sold in domestic commerce before FDA has released them.
This occurs because (1) importers either sell imported products before FDA has had a chance to inspect them or do not properly dispose of products that FDA has found to violate U.S. standards and (2) penalties against importers have not effectively deterred such actions. FDA-regulated foods are not controlled prior to inspection and release. Under the Federal Food, Drug, and Cosmetic Act, importers of FDA-regulated foods generally retain possession of the imported food shipments until FDA releases them and must make the shipments available for FDA’s inspection if requested. In some cases, particularly for perishable items, FDA will select samples for testing and allow the shipments to continue in domestic transit—on the condition that the shipment be returned if FDA finds the shipment to be adulterated and refuses entry. If importers of foods that FDA has refused entry cannot recondition the products to bring them into compliance with requirements, they have 90 days to (1) destroy the products or (2) reexport the products. The Customs Service is required to witness or attest to the fact that the refused shipment was disposed of properly, but FDA does not stamp “refused entry” on shipments found to violate safety standards, and it generally does not notify the destination country when such shipments are being reexported. According to FDA officials, FDA does not stamp refused shipments because it lacks the statutory authority to do so. At the ports we visited, imported food shipments under FDA’s jurisdiction often entered U.S. commerce before being made available to FDA for inspection or were not properly disposed of when refused entry. For example, in Operation Bad Apple, which lasted 3 weeks, Customs officials identified 23 weaknesses in the controls over FDA-regulated imported foods. In this operation, Customs officials cited examples such as the following to illustrate these weaknesses. One weakness involved substituting cargo that was en route to a holding area: on a shipment of frozen shrimp, Customs alleged that the importer removed a portion of the shipment that had thawed during transport before making the shipment available for FDA’s inspection. If the thawed shrimp had not been removed, FDA would have refused entry for the entire shipment because the thawing indicated that the proper temperature controls were not maintained during transport, and thus the entire shipment may have been contaminated. Another weakness involved not meeting FDA’s request that a shipment be redelivered to Customs for disposition: according to Customs, about 40 percent of the imported foods released conditionally by FDA were found to violate U.S. standards during Operation Bad Apple, but were never redelivered to Customs. That is, they presumably entered into commerce and were not destroyed or reexported as required. Even when the shipments found to violate U.S. standards were redelivered, Customs officials said other products had been substituted for the violative products in about 50 percent of the shipments before redelivery. We found similar results for the nondelivery of shipments in 1992, when we reported that 60 percent of the perishable foods and 38 percent of the nonperishable foods that FDA found adulterated with illegal pesticides were released into U.S. markets and not returned to Customs for destruction or reexport. Our work suggests that the evasion of imported food controls is not isolated to a few importers at one port of entry.
As part of Operation Bad Apple, Customs officials monitored cargo transferred from the vessel to the holding area, and FDA sampled and tested the products and did not give any conditional releases. Overall, while about 25 percent of the importers were viewed as suspicious, Customs anticipated that only 1 percent of these would be found to be evading controls. However, according to Customs officials, all of the "suspicious" importers were found to be out of compliance, and 25 percent of the other importers were also out of compliance. FDA and Customs officials told us that substitution of imported products or failure to redeliver products for inspection has been occurring at other ports. Some Customs officials said they lack the resources needed to witness and thus ensure proper disposition of violative products refused entry. Accordingly, they generally verify only the number of containers—e.g., three containers were refused entry and three containers were reexported. Similarly, they frequently do not witness the destruction of the violative product and instead rely on a receipt from the landfill where it was disposed of. According to Customs officials, their regulations allow them to accept a receipt in lieu of witnessing the shipment's destruction. In addition to FDA's difficulties in controlling imported foods prior to releasing them into domestic commerce, FDA's economic deterrent to noncompliance with its requirements is inadequate. Lacking the authority to fine importers who distribute adulterated food shipments or fail to retain shipments for inspection, FDA relies on a bond agreement between Customs and the importer for shipments valued at more than $1,250 as a way to achieve compliance. Under the bond agreement, importers are required to pay all duties, taxes, and charges; to retain control over the shipment; and to properly dispose of the shipment if it is found to be unacceptable. The bond amount is based on the importer's declared value of the imported shipment, and penalties may be assessed at up to three times the value of the bond. However, we reported in 1992 that sometimes even assessed damages of three times the value of the shipment may not deter the illegal sale of imported goods because the value of the goods on the market is greater than the tripled bond amount. Customs often does not collect full damages from importers that fail to comply with FDA's requirements. For example, in fiscal year 1997, Customs in Miami assessed and collected damages for only about 25 percent of the identified cases involving the improper distribution of food products during the previous 12 months. Customs and FDA attributed the low figure to (1) lax controls in communicating information about refused shipments between Customs and FDA, (2) unclear guidance for handling the shipments by Customs officials, (3) a malfunction of the Customs computer system for storing case files, and (4) a halt in collections pending the resolution of a court case involving the collection of liquidated damages. Even when damages were assessed, they were generally reduced to about 2 percent of the original assessment. For example, in one case, the damages were $100,000, based on the declared value of the import shipment, but Customs reduced the amount to $100. 
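To illustrate the economics behind this finding, the short sketch below works through the deterrence arithmetic with entirely hypothetical dollar figures (they are not drawn from GAO, FDA, or Customs data): when a shipment's U.S. market value exceeds three times its declared value, an importer can profit from distributing a refused shipment even if the maximum liquidated damages are collected, and mitigation of the assessment shrinks the deterrent further.

```python
# Hypothetical illustration of the deterrence arithmetic described above.
# All dollar figures are invented for illustration only.

declared_value = 100_000                 # importer's declared value, the basis for the bond
market_value = 400_000                   # assumed U.S. market value of the goods
max_damages = 3 * declared_value         # maximum assessment: three times the bond/declared value
mitigated_damages = 0.02 * max_damages   # assessments were often reduced to about 2 percent

print(f"Profit after paying maximum damages:   ${market_value - max_damages:,}")
print(f"Profit after paying mitigated damages: ${market_value - mitigated_damages:,.0f}")
# Whenever market value exceeds three times declared value, distribution can
# remain profitable even at the maximum assessment; mitigation widens the margin.
```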
According to Customs headquarters officials, any reduction in damages must be in accordance with Customs guidelines, and both Customs and FDA must agree to reduce the damages when they involve the failure to redeliver shipments that were refused entry because they violated product purity and labeling requirements. FDA's lack of authority to impose civil penalties and its reliance on the importer's bond agreement with Customs have left the agency without an adequate economic deterrent to the distribution of adulterated imports. We reported in 1992 that in fiscal years 1988 through 1990, importers at four locations had distributed 336 (34 percent) of the 989 shipments found to be adulterated with pesticides. Although this rate was lower than the rates of 50 percent and 45 percent that we found in 1979 and 1988, respectively, it indicated that adulterated imports continue to be distributed to American consumers. We recommended in that report and others that FDA be given authority to issue civil penalties to violators. While FDA submitted legislative proposals seeking civil penalty authority in 1993, the Congress did not pass the legislation. FDA's lack of controls over shipments selected for inspection leaves its inspection system vulnerable to unscrupulous importers. Without sufficient controls, some importers may (1) falsify laboratory test results on suspect foods to obtain an FDA release, (2) sell potentially unsafe imported foods before FDA can inspect them, and (3) sell imported foods that FDA found violative and barred from entry. Furthermore, importers' bonds are an ineffective deterrent against attempts to market contaminated products. As a result, FDA has little assurance that contaminated shipments are kept off U.S. grocery shelves, and it appears likely that certain importers will continue to circumvent controls over unsafe food products with impunity. We are making no recommendations at this time because, as agreed with the Chairman, Permanent Subcommittee on Investigations, Senate Committee on Governmental Affairs, we are continuing work to identify specific actions needed to strengthen the controls over imported foods. In commenting on a draft of this report, FDA agreed that it needs to exercise control over the practice of permitting importers to select a private laboratory to test shipments automatically detained due to a history of violations. FDA stated that it is issuing new instructions to its district offices regarding the use of independent laboratories. However, FDA further noted that the agency lacks the explicit authority to require importers to use certain laboratories or to provide a list of accredited laboratories to importers. Customs provided comments to correct or clarify information about its responsibilities and practices. Customs stated that it is impossible to physically inspect the destruction or export of every refused shipment and said it is more logical to target its resources to those shipments and suspected importers posing the greater risk for noncompliance. Customs said the extent of substitution is probably limited to certain products and a small number of importers. However, we found that the substitution of products for inspection has occurred at ports of entry other than the San Francisco port cited in our example. FDA and Customs officials have also acknowledged that substitution is occurring at other ports, although neither we nor they know the full extent of its occurrence. 
Finally, Customs disagreed with our statement that violators are seldom punished effectively and that the damages against violators do not represent an effective deterrent; Customs stated that the current damages assessed against violators are adequate in most cases. However, on the basis of our work extending back to 1992, we have found that liquidated damages do not appear to be an effective deterrent. In 1992, for example, we reported that the U.S. market value for selected products always exceeded the declared import value of the products we surveyed; thus, importers could and, in some cases, did profit from distributing refused products even after paying damages to Customs. The example we mention in this report, in which Customs assessed damages of $100 against an importer with a shipment having a declared value of $100,000, shows that the collected damages may be far less than the declared value of the shipment. We added information in the report to explain that, according to Customs officials in Washington, D.C., any decision to mitigate damages against importers for failure to redeliver shipments that were refused entry because of product purity or labeling problems requires agreement by both Customs and FDA. The Centers for Disease Control and Prevention (CDC) has linked several significant foodborne outbreaks to imported foods (see table I.1). According to CDC officials, the agency's investigation of recent outbreaks related to imported foods may indicate that food safety problems are more widespread than previously believed. For example, in the spring of 1996, multiple health departments reported cases of illness from Cyclospora, a pathogen that had not previously been proven to be transmitted by food. CDC and other public health officials were able to link illnesses from Cyclospora with raspberries from Guatemala; more than 1,000 people in various locations in the United States and Canada were affected. In 1997, additional illnesses from Cyclospora, also affecting more than 1,000 people, were again linked with raspberries from Guatemala. CDC and state and local health departments are not able to identify all cases of foodborne illness, however, because such illnesses are underreported and are difficult to trace to their source.
[Table I.1 lists outbreaks linked to imported foods; surviving entries identify pathogens and hazards such as Shigella flexneri, type 6 (SF6), histamine (scombroid) poisoning, and E. coli O27:H20; implicated foods such as raw limpets (molluscan shellfish); Mexico as an implicated or suspected source country; and affected locations including 4 states and Washington, D.C.]
As of January 1, 1998, the Food Safety and Inspection Service (FSIS) had determined that the countries listed below have food inspection systems equivalent to the United States' and are eligible to export meat and/or poultry products to this country. Since January 1, 1998, FSIS has suspended Paraguay from exporting meat and poultry products to the United States because its inspection system was not adequate to prevent contamination on repeated shipments. The following are GAO's comments on the Food and Drug Administration's letter dated April 3, 1998. 1. While we agree with FDA that the compliance programs contain specific guidance on inspection requirements, we found that FDA inspectors rely on the numerical inspection targets set forth in the annual work plan for guidance. These targets are sometimes inconsistent with the directions for the compliance program. 
We agree that FDA needs flexibility to deal with emergencies as they arise, but we disagree that the current work plan "clearly reflects priorities." The inconsistency we identified often leads inspectors to rely on subjective judgment, which may lead them to select shipments that do not pose the greater food safety risk to consumers. 2. We have neither evaluated nor endorsed this legislation. Instead, this report addresses the need for FDA's equivalency authority. This authority would enable FDA to shift the primary responsibility for ensuring the safety of imported foods to the exporting country and to make more efficient and effective use of its limited resources. 3. We have modified the report to reflect FDA's comment that it does not have explicit authority to require importers to use certain laboratories or to provide a list of accredited laboratories to importers. 4. Our recommendation was not intended to require the immediate implementation of equivalency requirements. Instead, we envision that such equivalency requirements would be phased in over time in a manner that would not unnecessarily disrupt trade. The mandatory authority to require equivalency would address weaknesses in FDA's approach to inspections at the port of entry, enable FDA to leverage its staff resources by sharing the responsibility for food safety with the exporting countries, and compel FDA to take an active approach in preventing food safety problems instead of requiring equivalency after problems are identified. The Congress could provide reasonable time frames that would allow equivalency to be implemented over a number of years. We modified the report to address FDA's technical comments where appropriate. The following is GAO's comment on the Food Safety and Inspection Service's letter dated April 7, 1998. 1. In response to FSIS' comment, we (1) expanded the list of reasons for refusal that are directly related to health risks to include unsound condition and residues, as FSIS cited in its comments, and (2) excluded all refusals resulting from transportation damage because FSIS officials said these refusals do not trigger requirements for FSIS to conduct subsequent inspections. Using this expanded definition, we recalculated the percentage of rejected shipments that were not directly related to health risk. As a result, in our final report, we changed the percentage of refused shipments not related to health risk from 97 percent to 86 percent. The following are GAO's comments on the U.S. Customs Service's letter dated April 6, 1998. 1. We disagree with Customs' comment questioning our assertion about the extent to which importers substitute other food products for imported products presented for inspection. Customs officials in San Francisco provided us the figures on import substitution to illustrate the weaknesses in controls over FDA-regulated imported foods found in Operation Bad Apple. We modified the language in the report to clarify that the 50-percent substitution rate was attributed to Operation Bad Apple. Furthermore, while we cannot report on the exact extent of product substitution, Customs and FDA officials have acknowledged that it is occurring at other ports of entry. We also found that product substitution was occurring at four of the six ports we visited. 2. We have expanded the report to reflect Customs' comment on the reasons for a decrease in collections at the Miami port of entry. 3. 
We do not share Customs' view that the current liquidated damage assessment for failure to redeliver contaminated food products is an adequate deterrent. Our work, beginning in 1992, indicates a pattern of problems in the deterrence and punishment of violators. In 1992, for example, we reported that the U.S. market value for selected products always exceeded the declared import value of the products we surveyed; thus, importers could and, in some cases, did profit from distributing illegal products even after paying damages to Customs. The case we mentioned in this report, in which Customs assessed damages of $100 against an importer with a shipment having a declared value of $100,000, shows that the collected damages may be far less than the declared value of the shipment. We modified the report to provide further information on the reason for mitigating damages against importers. Keith W. Oleson, Assistant Director; Dennis Richards, Project Leader; Daniel F. Alspaugh; Judy K. Hoovler; John M. Nicholson, Jr.; Carol Herrnstadt Shulman; Jonathan M. Silverman.
Pursuant to a congressional request, GAO reviewed efforts of federal programs to ensure the safety of food imports, focusing on the: (1) differences in the agencies' authorities and approaches for ensuring the safety of imported foods; (2) agencies' efforts to target their resources on foods posing risks; and (3) weaknesses in the controls over imported foods. GAO noted that: (1) federal agencies cannot ensure that the growing volume of imported foods is safe for consumers; (2) although the Food Safety and Inspection Service (FSIS) and the Food and Drug Administration (FDA) require imported foods to meet the same standards as domestic foods, their approaches to enforcing these requirements differ; (3) by law, FSIS places the principal burden for safety on the exporting countries by allowing imports only from those countries with food safety systems it deems to be equivalent to the U.S. system; (4) FDA, lacking such legal authority, allows food imports from almost any country and takes on the burden of ensuring the safety of imported foods as they arrive at U.S. ports of entry; (5) relying on port-of-entry inspections to detect and prevent unsafe foods is ineffective, given that: (a) this approach does not ensure that foods are produced under adequately controlled conditions; (b) FDA currently inspects less than 2 percent of all foreign shipments; and (c) inspection will not detect some organisms, such as Cyclospora, for which visual inspections and laboratory tests are inadequate; (6) FSIS and FDA are not deploying their inspection resources to maximum advantage; (7) FSIS focuses its inspection and testing resources on shipments from exporting firms with a history of violations; (8) however, many of the violations may bear little relationship to food safety; (9) using available data on health-related risks from shipments that do not meet U.S. standards could help FSIS focus more closely on the imports posing the greater risks; (10) FDA's annual work plan does not set achievable targets for inspection activities; (11) as a result, inspectors do not have clear guidance for conducting inspections; (12) FDA does not make health risk data readily available to guide inspectors' selections; (13) when making decisions on which shipments to inspect, FDA relies on importers' descriptions of shipments' contents, which are often incorrect; (14) FDA's procedures for ensuring that unsafe imported foods do not reach U.S. consumers are vulnerable to abuse by unscrupulous importers; (15) in some cases, when FDA decides to inspect shipments, the importers have already marketed the goods; (16) in other cases, when FDA finds contamination and calls for importers to return shipments to the Customs Service for destruction or reexport, importers ignore this requirement or substitute other goods for the original shipment; and (17) such cases of noncompliance seldom result in a significant penalty.
The 1990 amendments to the Clean Air Act require the use of reformulated gasoline (RFG) in nine areas of the United States with severe ozone pollution. The act set up a two-phase program. Under phase I, which began on January 1, 1995, volatile organic emissions and toxic air pollutants are to be reduced by 15 percent. During phase II of the RFG program, to start in the year 2000, EPA's rules require reductions of 5.5 percent in nitrogen oxides along with further reductions in volatile organic and toxic emissions. Areas that have less severe ozone problems but that still exceed the standards may also use RFG as an emission control measure to reduce pollution. Oxygenates are compounds that deliver oxygen to gasoline in various concentrations. As part of the required reformulation process, oxygenates must be added to gasoline so that oxygen makes up 2 percent of the finished product's weight. A minimum of 2.7 percent oxygen is also required in gasoline sold in 39 areas of the country to reduce carbon monoxide levels during the winter. In the form of ethanol, oxygenates are also blended with conventional gasoline to make gasohol—a gasoline extender and an octane enhancer. Biofuels are alcohols, such as ethanol, and other chemicals derived from biomass, or living matter. Current research is focused on developing biofuels from the starch in corn kernels or from the fibrous cellulosic materials in the rest of the corn plant; it also focuses on cellulosic plants, such as fast-growing trees or grasses, and waste products, such as agricultural and forestry residues and municipal and industrial wastes. A glossary of terms appears at the end of this report. The following sections summarize the results of studies on the cost-effectiveness of RFG compared to other control options and the estimates for the price of RFG used in the various studies that we reviewed compared with the actual prices experienced. Studies done by EPA, the American Petroleum Institute, Radian Corporation, and Sierra Research, Inc., in conjunction with Charles River Associates, suggest that RFG may be cost-effective when compared with some pollution control measures but less cost-effective than other measures. However, significant differences in the studies' objectives, methodologies, time frames covered, costs considered, types and extent of pollutants considered, and other factors produced widely varying estimates of costs per ton of pollutant removed, a common cost-effectiveness measure. Also, each of the studies evaluated somewhat different control measures and made different assumptions about the extent of the pollution and control measures already in use. These differences make comparisons of results between the studies very difficult. (App. II identifies the four studies that we reviewed and contains tables showing the cost-effectiveness estimates that were made by the various organizations.) For example, EPA estimates that removing volatile organic compounds using available control measures would cost from about $600 to $6,000 per ton of compounds removed. 
Specifically, EPA estimates that it would cost about $600 per ton for phase II of the RFG program; $1,300 per ton for enhanced automobile inspection and maintenance programs; $2,000 per ton for on-board diagnostic requirements for automobiles; $5,400 per ton for the basic automobile emission inspection and maintenance program; $5,550 per ton for phase I of the RFG program; and $6,000 per ton for Tier I requirements, which are EPA emission standards for light-duty vehicles. Officials in EPA's Office of Mobile Sources consider these cost-effectiveness estimates to be inexact but the best figures that they could develop with the data available to them at that time. Some regions of the country that are not required to use RFG, but which still need to lower ozone levels, are considering whether to require RFG or gasoline with low vapor pressure. Generally, in the studies that we reviewed, low vapor pressure gasoline was not included as an alternative control measure, but according to refining industry officials, it has the potential to reduce volatile organic compounds (VOC) at a lower cost than RFG. In a February 17, 1994, memorandum to an official in one area considering this option, EPA stated that, beyond the VOC reductions that are due in part to RFG's low vapor pressure, RFG offers a number of benefits that low vapor pressure gasoline does not, including the reduction of air toxics and nitrogen oxides (when RFG phase II becomes effective) as well as federal enforcement of the RFG program. EPA also stated that the lower cost of reduced volatility gasoline may be offset in whole or in part by lower competition in the reduced volatility gasoline market. We obtained the estimates used for the price of RFG from the four cost-effectiveness studies that we reviewed along with other organizations' price estimates. The estimates varied but were all close to the range of the actual prices experienced during the first 14 months of the RFG program, which began in January 1995. The estimates varied from a low of 3.3 cents to 4.0 cents per gallon more for phase I RFG than the price of conventional gasoline (cited by DOE's Office of Energy Efficiency and Alternative Fuels Policy) to a high of 8.1 cents to 13.7 cents more per gallon (cited by the American Petroleum Institute). EPA estimated that the price of RFG would be from 3.0 cents to 4.9 cents per gallon more than the price of conventional gasoline for phase I of the program. DOE's Energy Information Administration (EIA) has monitored prices for both conventional gasoline and RFG since the program began in January 1995. In the early weeks of the program, retail prices for RFG were as much as 12 cents a gallon more than those for conventional gasoline. However, March 1996 data indicate that the average gap between RFG and conventional prices had narrowed to about 5 cents per gallon. Furthermore, according to EIA, the price difference may now be closer to 3 cents. (See app. III for additional information on the estimated RFG prices compared with the actual prices experienced.) EIA's Annual Energy Outlook for 1996 and supporting documents contain the most current and comprehensive estimate we could find of the potential for using oxygenates to displace the petroleum used to produce gasoline. EIA data indicate that for all uses of oxygenates in gasoline, including the RFG program, about 384,000 barrels per day of oxygenates will be blended with gasoline in the year 2000 and about 394,000 barrels per day in 2010. 
These projections compare with about 309,000 barrels per day of oxygenates that EIA reports were used in 1995. Adjusting for the lower energy density of oxygenates, the projected level of oxygenate use will potentially displace about 305,000 barrels per day of petroleum used to produce gasoline in 2000 and about 311,000 barrels per day in 2010. (See app. IV for additional information on EIA's projections, along with the energy densities and volume blending ratios of the various oxygenates.) It is important to note that the above petroleum displacement estimates do not account for differing amounts of petroleum that may be used in the production process for ethanol and the other types of oxygenates. The extent to which petroleum will be used to produce oxygenates depends on several variables and, therefore, is difficult to predict. The greater the amount of petroleum that is used to produce oxygenates, the less petroleum will be displaced. As such, our estimates are likely to be somewhat higher than the displacement that will actually be experienced. Furthermore, the displacement estimates do not include any possible increases or decreases in refinery outputs made possible by using oxygenates in the refining process. The use of oxygenates could allow some refineries to operate their reformers at lower temperatures, thus increasing the amount of gasoline produced. Doing so, however, may result in reductions in the other petroleum-based products produced, making the total petroleum displacement potential difficult to assess. According to DOE, EIA, and petroleum industry officials, any increase in the finished products related to lower reformer operating temperatures would vary on the basis of the different refinery configurations but, in total, would likely be relatively small. One EIA analysis concludes that, not counting the volume displacement discussed above, the amount of petroleum used in the refining process may actually increase when using oxygenates, but that the increase is not statistically significant. The 1992 Energy Policy Act requires the Secretary of Energy to determine the technical and economic feasibility of replacing 10 percent of projected motor fuel consumption with nonpetroleum alternative fuels by the year 2000 and 30 percent by 2010. Using the EIA's projected oxygenate use discussed earlier and adjusting for energy density differences, oxygenates would displace about 3.7 percent of the 8.21 million barrels per day of projected gasoline consumption in 2000 and about 3.6 percent of the 8.64 million barrels per day by 2010. In terms of meeting the act's 10-percent and 30-percent petroleum replacement goals, this amount of displacement will account for about 37 percent of the motor fuel replacement goal for the year 2000 and about 12 percent of the 2010 goal. Your office also asked us to estimate the level of petroleum displacement if all gasoline sold were reformulated. EIA's projections assume that about 35 percent of all gasoline will be reformulated and another 5 percent will contain some level of oxygenates for other purposes. Assuming the same percentage shares for the different types of oxygenates, and applying the other assumptions that EIA used in projecting future oxygenate consumption, we estimate that about 762,000 barrels per day of petroleum would be displaced in the year 2000 and 777,000 barrels per day in 2010, if all gasoline were reformulated. This would amount to about 9.3 percent of projected gasoline consumption in the year 2000 and about 9 percent in 2010. 
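The percentages above follow from simple arithmetic on the figures cited in this section. The short sketch below (in Python) recomputes them; the energy-density adjustment factor shown is implied by the reported barrel-per-day volumes rather than taken from EIA's oxygenate-by-oxygenate data.

```python
# Recomputing the displacement percentages cited above from the reported figures.
# Volumes are in thousands of barrels per day unless noted.

oxygenate_use = {2000: 384, 2010: 394}      # projected oxygenates blended with gasoline
displacement = {2000: 305, 2010: 311}       # petroleum displaced after energy-density adjustment
gasoline_use = {2000: 8_210, 2010: 8_640}   # projected gasoline consumption
goal = {2000: 0.10, 2010: 0.30}             # Energy Policy Act replacement goals

for year in (2000, 2010):
    factor = displacement[year] / oxygenate_use[year]   # implied energy-density factor
    share = displacement[year] / gasoline_use[year]     # share of gasoline displaced
    goal_share = share / goal[year]                     # share of the act's goal met
    print(f"{year}: factor ~{factor:.2f}, displaces ~{share:.1%} of gasoline, "
          f"~{goal_share:.0%} of the {goal[year]:.0%} goal")

# If all gasoline were reformulated (about 762 and 777 thousand barrels per day displaced):
for year, displaced in ((2000, 762), (2010, 777)):
    print(f"{year}: {displaced / gasoline_use[year]:.1%} of projected gasoline consumption")
```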
We did not assess the added costs or other implications of reformulating all gasoline. The transportation sector is currently about 97 percent dependent on petroleum-based fuels such as gasoline. According to DOE, this dependence contributes to our vulnerability to oil supply disruptions and related price shocks. DOE and USDA have a number of research projects under way to develop biofuels technologies as alternative transportation fuels. Most of the projects focus on reducing the costs of raw material feedstocks and of transforming the feedstocks into ethanol. Progress has been made in reducing the cost of ethanol, and additional cost reductions are projected in the future. If such reductions are achieved, DOE and USDA expect increased demand for biofuels. The primary focus of DOE's biofuels program is to produce ethanol from low-cost, high-yield cellulosic feedstocks. These feedstocks include dedicated energy crops, such as trees that can be grown in short-rotation time periods (3 to 10 years) and grasses that can grow on marginal croplands, as well as agricultural residues and waste products. To a lesser extent, DOE is also conducting research into biofuels technologies to produce biodiesel. The feedstock production research is conducted at DOE's Oak Ridge National Laboratory in Tennessee, where crops grown specifically for energy purposes are studied. Biofuels produced from waste products, such as municipal and industrial wastes, could potentially supply a small portion of transportation fuels in the near future. DOE's National Renewable Energy Laboratory in Colorado conducts research on converting biomass feedstocks to competitively priced transportation fuels. Research activities include (1) pretreating biomass to facilitate its conversion to fermentable sugars, (2) improving enzyme technologies to convert cellulosic biomass into fermentable sugars, and (3) developing processes to rapidly ferment sugars from biomass materials to ethanol. According to the Director of DOE's Biofuels System Division, the total DOE funding for the transportation biofuels program was about $26 million for fiscal year 1995. (App. V provides more detailed information on DOE's and USDA's biofuels research efforts and describes the process of converting corn and biomass to ethanol.) The vast majority of USDA's biofuels research program is focused on developing corn starch as a feedstock for ethanol and, to a lesser extent, on producing biodiesel from farm crops. A small component of USDA's ethanol program is devoted to research on producing ethanol from cellulosic biomass, such as agricultural residues and the remaining portions of the corn plant, such as the cob, hull, stalks, and leaves. USDA's research on conversion technologies focuses on enzyme research to convert feedstocks to fermentable sugars, fermentation improvements to increase ethanol yields, and other processes to minimize the cost of producing ethanol. According to the Director of USDA's Office of Energy and New Uses, the total USDA biofuels research and development funding for fiscal year 1995 was about $10 million. According to DOE, advances in research and development have reduced the estimated cost of producing ethanol from biomass energy crops in newly constructed plants from $5.32 per gallon in 1980 to the present estimate of $1.40 per gallon, measured in 1995 dollars, a reduction in real terms of about 74 percent. 
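As a quick check of the reduction cited above, and of the corresponding corn-ethanol figures reported below, the following sketch computes the real-term percentage declines directly from the stated per-gallon costs, all of which the report expresses in 1995 dollars.

```python
# Recomputing the real-term cost reductions from the per-gallon figures cited
# in this section (all expressed in 1995 dollars).

def real_reduction(start_cost, end_cost):
    """Fractional decline from start_cost to end_cost."""
    return (start_cost - end_cost) / start_cost

# DOE estimate for ethanol from biomass energy crops in newly constructed plants
print(f"Biomass ethanol, 1980 to present: {real_reduction(5.32, 1.40):.0%}")  # about 74 percent

# USDA analysis for corn-based ethanol (discussed below)
print(f"Corn ethanol, 1980 to 1992:       {real_reduction(2.50, 1.34):.0%}")  # about 46 percent
```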
According to DOE, private companies, using proprietary technologies coupled with zero- or low-cost feedstocks and taking advantage of existing facilities to reduce capital costs, believe they can produce ethanol for 60 to 80 cents per gallon in certain applications. Through further research in developing lower-cost feedstocks and in improving the process of converting biomass to ethanol, DOE's goal is to produce ethanol at a cost of $0.67 per gallon by 2010, in current dollars. Oak Ridge National Laboratory researchers cautioned us, however, that reaching cost-reduction goals can depend on how much ethanol will need to be produced. For example, DOE has the objective of deploying technologies, by 2010, that could contribute to a national annual production capacity of 518 million barrels of petroleum-equivalent fuels in subsequent years. If that much ethanol were actually in market demand, it would require about 30 million to 50 million acres of land, depending on crop yields and conversion efficiency. As croplands are increasingly used to produce biomass, land costs could increase due to greater competition for land resources. Increasing land costs and other factors, such as regional biomass crop yield differences, could drive the cost higher than $0.67 per gallon. According to a 1993 USDA analysis and USDA officials, improvements in enzyme and production technologies have reduced the cost of producing a gallon of corn-based ethanol from about $2.50 in 1980 to less than $1.34 in 1992, measured in 1995 dollars, a reduction of about 46 percent in real terms. USDA officials told us that they could not estimate the current cost of producing ethanol because of fluctuations in the price of corn. The officials told us, however, that corn prices are substantially higher today than in 1992. USDA has not developed any cost-reduction goals for corn-based ethanol production. According to DOE, the two largest potential markets for biomass-derived fuels are ethanol used as an oxygenate in gasoline and ethanol used as a fuel itself. While the potential oxygenate market discussed above is limited to blending relatively small percentages of ethanol with gasoline, ethanol used alone as an alternative motor fuel has the potential to replace much larger amounts of gasoline. The National Renewable Energy Laboratory estimates that by 2020 the demand for biomass ethanol could exceed 14 billion gallons per year. This amount consists of a demand of 3 billion gallons per year for ethanol to be used as an oxygenate and 11 billion gallons per year for ethanol to be used as a replacement fuel for gasoline. This long-term projection is based on achieving a market price for ethanol that is predicted to be competitive with the price of gasoline. DOE's Energy Efficiency and Renewable Energy Program Office also provided us with an estimate of transportation biofuels use, which shows an increasing use of biofuels from 126 million gallons in the year 2000 to 4.6 billion gallons and 10.8 billion gallons, respectively, in 2010 and 2020. While these estimates differ somewhat from the estimates provided by DOE's laboratory, the differences reflect the uncertainties involved in making such projections. Both sets of estimates, however, predict growing use of biofuels, particularly beyond 2010 when such fuels are expected to be used as a replacement for gasoline. USDA has not projected ethanol demand on the basis of reductions in ethanol production costs. 
However, USDA’s 1993 analysis showed that further expansion of ethanol from corn is limited because of the high price of corn and the fact that corn has many alternative uses. According to the analysis, these restrictions do not apply to biomass feedstocks that could supplement corn as an inexpensive ethanol feedstock. According to DOE and USDA officials, many technical and economic barriers must be overcome to achieve a significant increase in the demand for biofuels. These barriers include limited funding for the successful development and commercialization of the biomass technologies discussed above, as well as achieving the cost-reduction goals mentioned earlier. We provided copies of a draft of this report to DOE, EPA, and USDA for their review and comment. DOE suggested several changes to clarify information in the report. We incorporated DOE’s comments where appropriate. Both EPA and USDA expressed concerns with our discussion in appendix III on the average price of RFG over the life of the RFG program compared to conventional gasoline. The agencies believe that the average price is misleading because it would reflect the very high price of RFG experienced at the start of the program. The officials also believe that the more recent price difference of about 3 cents to 5 cents per gallon is more accurate. We concur with these comments and deleted the reference to the average RFG price difference. EPA said that EIA’s projections for the future displacement of petroleum by the use of oxygenates seem higher than what it would expect. According to EPA, while it is encouraging states to use RFG where its use is now optional, it expects that the amount of petroleum displaced by the use of oxygenates in future years will be modest. The reasons cited by EPA were that the oxygenate requirements of the RFG program do not change over time, the number of areas participating in the RFG program has remained fairly stable, and the number of areas participating in the wintertime oxygenated fuels program have been decreasing as the program succeeds in bring areas into attainment for carbon monoxide. EIA projections show a 24.3- and 27.5-percent increase in oxygenate use in 2000 and 2010, respectively, over 1995 levels. According to EIA, these increases are based on several factors, including California’s recent statewide adoption of more severely reformulated gasoline requirements and projected increases in gasoline consumption, including RFG. In addition, the projections took into consideration the declining use of oxygenates in the wintertime oxygenated fuels program and do not include the expanded use of ethanol as an alternative fuel. Finally, EIA assumed a constant market share of about 35 percent for RFG throughout the forecast period. The above factors and assumptions used by EIA seem reasonable to us, but we agree that to the extent the projected increases in oxygenate use do not take place, the amount of petroleum displaced would be less. USDA said that from its perspective, our report does not sufficiently analyze the competing information contained in the RFG studies summarized in our report or critique the cost-effectiveness estimates that were examined. As stated earlier, our objective in this area was to summarize the results of studies on the cost-effectiveness of using reformulated gasoline compared to other measures to control automotive emissions. 
We state in the report that significant differences in the studies' objectives, methodologies, time frames covered, costs considered, types and extent of pollutants considered, and other factors produced widely varying estimates of cost-effectiveness. A critique of the studies' results or comparing the results on an equal basis may be useful but would require redoing the studies, controlling for each of the factors cited above. Such an analysis was beyond the scope of our review. Appendices VI, VII, and VIII contain DOE's, EPA's, and USDA's comments, respectively, along with our responses where appropriate. App. IX describes the objectives, scope, and methodology. We performed our work from July 1995 through April 1996 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from the date of this report. At that time, we will send copies of this report to interested congressional committees, the Secretary of Energy, the Secretary of Agriculture, and the Administrator of EPA. We will also make copies available to others upon request. Please call me at (202) 512-3841 if you have any questions. Major contributors to this report are listed in appendix X. This appendix summarizes the results of a 1995 study performed by the Department of Energy's (DOE) Argonne National Laboratory, which evaluated, among other things, the greenhouse gas emission characteristics of reformulated gasoline (RFG). This is the most current and comprehensive study that we could find on this issue. The study indicates that RFG's potential to reduce greenhouse gases is small. According to the study, the effects of using RFG on greenhouse gas emissions vary according to (1) the specific oxygenate that is added to conventional gasoline and (2) the time of year that RFG is used. According to one of the study's authors, the time of year is a factor because of the volatile organic compound (VOC) reduction requirements for high ozone season (summer) RFG. Table I.1 shows the comparative carbon dioxide equivalent emissions, a common measure of greenhouse gases, of RFG made with ethyl tertiary butyl ether (ETBE), an ether made from ethanol; methyl tertiary butyl ether (MTBE), an ether made from methanol; conventional gasoline; and RFG made with ETBE derived from ethanol produced with new or additional rather than existing agricultural sources.
[Table I.1 column headings: Carbon Dioxide Equivalent Emissions (grams); Reformulated gasoline (existing); Reformulated gasoline (new).]
The table shows that in the summer, when ozone problems are most severe, ETBE made with existing sources of ethanol produces the least amount of greenhouse gases, while ETBE from new sources of ethanol emits the highest amount of greenhouse gases. Emissions of greenhouse gases from conventional gasoline are the second lowest, followed by emissions from RFG made with MTBE. In all cases, however, as discussed above, the difference in greenhouse gas emissions between RFG and conventional gasoline is small. Nearly all ethanol is currently made with corn. According to the Department of Agriculture, current research on using biomass feedstocks to produce ethanol, combined with improved production processes, may lead to greater reductions of greenhouse gases for RFG made with ethanol. 
However, a DOE official noted that while ethanol made with biomass can significantly reduce the amount of greenhouse gas emissions compared with corn-based ethanol, all oxygenates make up only a small part of the RFG mixture. Hence, unless the use of RFG becomes more widespread, and specifically RFG made with ethanol derived from biomass, the potential for large greenhouse gas reductions appears limited. The Environmental Protection Agency (EPA), the American Petroleum Institute (API), Radian Corporation, and Sierra Research, Inc., in conjunction with Charles River Associates, conducted studies of the cost-effectiveness of RFG compared to other automotive emission control measures. A list of the studies follows. "Final Regulatory Impact Analysis for Reformulated Gasoline," EPA (Dec. 1993). "The Cost Effectiveness of VOC and NOx Emission Control Measures," Publication No. 326, API (Sept. 1994). "Emission Reductions and Costs of Mobile Source Controls," DCN92-221-054-01, Radian Corporation (Dec. 1992). "The Cost-Effectiveness of Further Regulating Mobile Source Emissions," SR94-02-04, Sierra Research, Inc., and Charles River Associates (Feb. 1994). Tables II.1-II.5 and accompanying narrative contain the results of the cost-effectiveness analyses made by the various organizations that we reviewed. The costs indicated are expressed in dollars per ton of volatile organic compounds (VOC), nitrogen oxides (NOx), or air toxics removed. Significant differences in the analyses' objectives, methodologies, time frames, costs considered, and other factors produced varying estimates of costs per ton of pollutant removed. Also, each of the analyses evaluated somewhat different control measures, making comparisons among the studies very difficult. An API analyst reported on various estimates of the cost-effectiveness of emission control strategies and found several problems that make comparison among the studies' results very difficult. The analyst found that cost-effectiveness is dependent on several factors, including the baseline emission level, whether cost-effectiveness is calculated on a marginal or total cost-effectiveness basis, the assignment of control costs for different emission reductions, the extent of emission reductions in attainment areas, and the seasonality of ozone pollution, which would vary from locality to locality. Table II.1 contains cost comparisons, which are drawn from EPA's 1993 Regulatory Impact Analysis for the RFG program. Some of the costs reflected in the table are the total costs of implementing some control measures and others are the incremental costs—the additional costs—incurred to implement control measures with more stringent requirements that are added to earlier measures. For example, the costs reflected for phase I of the federal RFG program are the total costs of that measure, whereas the costs for phase II of the RFG program reflect the incremental cost of implementing more stringent requirements in addition to phase I of the program. The glossary at the end of this report defines the control measures identified in this table and subsequent tables, as well as other terms that are contained in this report.
[Table II.1 row labels include stricter emission standards for light-duty vehicles (tier I).]
EPA officials told us that because the Clean Air Act Amendments of 1990 mandated the RFG program, the regulatory impact analysis focused on the cost differences of various RFG formulas and, therefore, contained only limited information comparing RFG with other control measures. 
Even this focus was constrained somewhat because the legislation specified that oxygen must make up a minimum of 2 percent of the RFG's total weight. EPA also estimated the cost of removing NOx under RFG phase II at about $3,700 per ton and the cost of removing air toxics under RFG phase I at about $40,000 per ton. EPA has recognized the limitations of the cost-effectiveness information for RFG and specifically the need for additional information that compares the costs of the RFG program with other control measures. According to an official in EPA's Office of Mobile Sources, the cost figures used in the regulatory impact analysis are the best available from EPA. Furthermore, EPA officials said that comparative data are not readily available for most of the other control measures because the purposes of these programs are not the same as the RFG program, especially with regard to reducing NOx and air toxic emissions. RFG phase I is ranked fifth of the six control measures listed in table II.1. Table II.2 summarizes the results of API's analysis of the cost-effectiveness of the RFG program in reducing VOC and NOx emissions in five cities. The analysis was prepared for API by Radian Corporation.
[Table II.2's five cities include Washington, D.C.]
The study found that there were major differences in the cost-effectiveness of RFG among the five cities. In some cities, RFG is up to three times more cost-effective than in other cities. The data take into consideration the vapor pressure of gasoline sold in these cities and other factors, such as the length of the ozone season, which varies by city. The study indicates that a primary reason for the RFG cost-effectiveness differences was the vapor pressure of the gasoline used in those cities. The data show that the lower costs for VOC reductions are in the cities that use gasoline with higher vapor pressures. Table II.2 contains values for the years 1995 through 2004 and, therefore, includes cost figures for the NOx control that is part of the phase II RFG program. Table II.3 compares API's mid-range cost estimates for RFG in the five cities reviewed with estimates for other VOC and NOx control measures. These figures also reflect estimates for the years 1995 through 2004. The table shows that RFG is ranked second out of the eight control measures studied for VOC.
[Table II.3 control measures include refueling vapor recovery equipment (stage II); reformulated gasoline (phases I and II); enhanced automobile emission inspection and maintenance program; expanded automobile emission inspection and maintenance program; use of natural gas-fueled vehicles; California's stricter reformulated gasoline; and California's low emission vehicle requirements. Cost data were not available for some measures.]
Table II.4 summarizes Radian Corporation's study of the emission reductions, costs, and cost-effectiveness of different mobile source control strategies. The study was prepared for the Virginia Petroleum Council, for the Virginia State Legislature's use in determining which air pollution control measures to adopt in Northern Virginia. The table shows that RFG is ranked seventh out of the eight control measures. 
[Table II.4 control measures include refueling vapor recovery equipment (stage II); enhanced automobile emission inspection and maintenance program; maximum automobile emission inspection and maintenance program (with tier II); maximum automobile emission inspection and maintenance program (with tier I); and reformulated gasoline (phases I and II).]
Sierra Research, Inc., and Charles River Associates' study estimated the cost-effectiveness of mobile source emissions control measures required by the Clean Air Act Amendments of 1990 and the California Air Resources Board regulations. The study was prepared for the American Automobile Manufacturers Association. Table II.5 summarizes the results of the key control measures identified in the study. RFG is ranked fourth out of the 14 mobile source control measures.
[Table II.5 control measures include enhanced automobile emission inspection and maintenance program; refueling vapor recovery equipment (stage II); California phase II reformulated gasoline; new evaporative standards and test procedures to control vehicle emissions; stricter emissions standards for light-duty vehicles (tier I); transitional low emissions vehicle program; and stricter emissions standards for light-duty vehicles (tier II).]
This appendix compares the price estimates used for RFG in the four cost-effectiveness studies that we reviewed, along with the price estimates of other organizations, with the actual RFG prices reported by DOE's Energy Information Administration (EIA).
[The price-estimate table lists organizations including Sierra Research, Inc., and Charles River Associates and the New York State Energy Research and Development Authority; data were unavailable for phase II of the RFG program.]
EIA has monitored prices of both conventional gasoline and RFG since the RFG program began. Figure III.1 shows EIA data on actual retail prices from the beginning of the RFG program in January 1995 through the week of March 18, 1996.
[Figure III.1 plots weekly prices from January 3, 1995, through March 18, 1996; the labeled difference between RFG and conventional gasoline prices narrows from 12.1 cents to 5.1 cents per gallon.]
The EIA data show that in the early weeks of the program, average retail prices for RFG were as much as 12 cents a gallon more than those for conventional gasoline. However, more recent data indicate that the average gap between RFG and conventional gasoline prices had narrowed to about 5 cents per gallon. Furthermore, according to EIA, the price difference may now be closer to 3 cents. This appendix discusses the potential petroleum displacement from using oxygenated fuels, identifies some of EIA's assumptions used in its Annual Energy Outlook for 1996 to forecast gasoline and oxygenate consumption, and provides information on the volume and energy density of oxygenates blended with gasoline. EIA used several assumptions in forecasting gasoline and oxygenate consumption to 2015. Some of the key assumptions are described as follows: EIA assumes that the tax exemption of $0.54 per gallon of ethanol will continue past the year 2000 to 2015. The subsidy is in nominal terms. EIA models the production and distribution of four different types of gasoline: traditional, oxygenated, reformulated, and reformulated/high oxygen. RFG is assumed to account for about 35 percent of annual gasoline sales throughout the forecast. The total estimated market for all oxygenated fuels, including RFG and traditional gasoline that may contain some oxygenates, is about 40 percent throughout the forecast. Oxygenated gasoline, which has been required during winter months in many U.S. cities to control carbon monoxide emissions, requires an oxygen content of 2.7 percent by weight. 
Reformulated/high oxygen gasoline, used in overlapping areas that require oxygenated gasoline and RFG, requires 2.7 percent oxygen. RFG requires 2.0 percent oxygen by weight. EIA assumes that RFG will be certified in accordance with the EPA models. Only ethanol made from corn is currently modeled. About 95 percent of the U.S. production of fuel ethanol is derived from corn. The Energy Policy Act of 1992 mandates that government, business, and fuel providers purchase a specified percentage of alternative-fueled vehicles in their fleets. EIA assumed that both business and fuel-provider fleet mandates do not take effect until the year 2000. (Footnote "b" in table IV.1 shows that some ethanol will be used in E85, an alternative motor fuel, in 2010.) Prior to March 18, 1996, EPA's RFG fuel regulations did not allow oxygenates to be blended above 2.7 percent oxygen by weight during the summer high ozone season. EPA revised these fuel regulations effective March 18, 1996, allowing higher concentrations of oxygenates under certain circumstances. EPA does not expect significantly higher use of oxygenates as a result of this change. DOE and the Department of Agriculture (USDA) have several research projects to develop biofuels technologies from renewable resources for the transportation fuel market. This appendix provides additional information on the agencies' efforts. The appendix also shows the processes for converting corn and biomass to ethanol. Since the ethanol supply is limited due in part to the high cost of corn feedstocks and the use of corn for other purposes, DOE's biofuels research program is aimed at developing biomass-based transportation fuels from cellulosic feedstocks. Such feedstocks are derived from renewable resources such as grasses, trees, and waste products. DOE is also conducting research to convert these feedstocks to liquid transportation fuels. DOE's program envisions that such fuels have the potential to displace a large percentage of petroleum-based transportation fuels in the future. The following summary outlines the focus of DOE's biofuels research efforts. To lower the cost of cellulosic feedstocks, the Oak Ridge National Laboratory leads a research and analysis program with many collaborators nationwide to identify and develop plants that can be used as high-yield dedicated energy crops on excess cropland; develop specialized site management, crop management, and harvest and handling techniques to obtain optimum yields from plants with high-yield potential; identify crop production techniques that ensure the protection of the environment and natural resources; identify locations where high yields can be achieved on low-cost land; and obtain cost, risk, and environmental data under operational conditions by collaborating with private industry, USDA, and local organizations to demonstrate crop production systems. 
To lower feedstock conversion costs, the National Renewable Energy Laboratory is conducting biofuels research to demonstrate a process to convert 1 ton per day of cellulosic waste feedstock to produce 100 gallons of ethanol in cooperation with industrial partners; demonstrate a process of using the cellulosic fiber of the corn kernel to improve yields; develop and evaluate a new process that combines two main biomass sugar fermentation steps into one, to decrease the production time and increase yields; develop new cellulase enzymes that more economically degrade cellulose; determine the potential to produce ethanol from switchgrasses, sugarcane, tropical grasses, trees, paper and sawmill wastes, forestry residues, and rice straw; and develop new technologies to produce biodiesel from waste fats and oils. The cost of producing ethanol from corn depends on several factors, including the price of corn, the value of co-products, the cost of energy and enzymes, the size of the production plants, and the level of technology in the plant. USDA's efforts have largely focused on improving technologies that would increase the efficiencies of feedstocks (primarily corn), speed up the production process, and raise the yield of ethanol in order to reduce its overall cost. USDA conducts or funds biofuels research on the projects summarized below. To lower the cost of feedstocks, USDA research is conducted on starches, such as corn, wheat, sorghum, and potatoes; fruit and vegetable by-products; corn cobs, straws, and corn hulls; corn stover and grasses; potential energy crops such as trees (e.g., evaluating the energy yield from short rotations of different types of woods); and agricultural residues. To lower feedstock conversion costs, USDA research is conducted on organisms that can produce ethanol from various feedstocks through biomass conversion; processes to convert feedstocks to fermentable sugars through more efficient and cost-effective use of enzymes; and processes to increase the yield of ethanol and other co-products, such as advanced fermentation technologies to more efficiently and cost-effectively produce ethanol. Two primary methods are used to make ethanol from corn: dry milling and wet milling. Dry milling, which accounts for about one-third of ethanol production, produces mainly ethanol, while wet milling generates ethanol and a variety of co-products, such as corn oil, animal feed, and other starch products. Figure V.1 illustrates the process used to convert corn into ethanol.
[One step shown in figure V.1 consists of soaking corn to separate it into its components (oil, protein, fiber, solubles, and starch).]
DOE's biofuels research focuses on developing biomass-based transportation fuels from cellulosic feedstocks. Figure V.2 illustrates the process used to convert biomass feedstocks into ethanol. The following are GAO's comments on the Environmental Protection Agency's letter dated May 17, 1996. 1. We agreed with this comment and have revised the report. 2. We agreed with this comment and have revised the report. 3. Our report refers to the use of oxygenated fuels to reduce carbon monoxide emissions. We revised the report to reflect EPA's comment that the number of areas participating in the oxygenated fuels program has been reduced. 4. We agreed with this comment and have revised the report. 5. We agreed with this comment and have revised the report. 6. According to EPA's regulatory impact analysis and discussions with EPA officials, $5,550 reflects the total cost of phase I of the RFG program. 
We added EPA's views on the costs of reducing VOCs to our report.
7. We agreed with this comment and have revised the report.
8. We revised the report to more clearly reflect EPA's position stated in its memorandum.
9. We agreed with this comment and have revised the report. (This comment relates to comment 17.)
10. We agreed with this comment and have revised the report.
11. The assumptions used for EIA's projected oxygenate use are explained in the agency comments section of this report. EIA's projections of oxygenate use do not include the future use of ethanol as an alternative fuel.
12. We said in our report that the petroleum displacement estimates do not account for differing amounts of petroleum that may be used in the production of ethanol and other types of oxygenates. We also said that the extent to which petroleum will be used to produce oxygenates depends on several variables and, therefore, is difficult to predict. According to EIA officials, factors affecting the extent of petroleum use to produce oxygenates include the type of oxygenate, different assumptions about the source of raw materials and the energy used to produce the oxygenates, and the vapor pressure of the blended fuel. We also pointed out that the greater the amount of petroleum used to produce oxygenates, the less petroleum will be displaced. More detailed information on the extent of petroleum used to produce oxygenates can be found in the Argonne National Laboratory's April 1995 report referred to in appendix I.
13. We agreed with this comment and have revised the report.
14. We did not omit the greenhouse gas emissions associated with RFG produced with ethanol, as indicated by EPA. The table shows RFG with existing and new sources of ethanol, as stated in notes b and c.
15. We agreed with this comment and have revised the report.
16. We agreed with this comment and have revised the report.
17. We agreed with this comment and have revised the report to explain EPA's RFG estimates.
18. We agreed with this comment and have revised the report.
19. We agree that EPA's phase II RFG requirements are likely to increase the use of ETBE due to the more stringent VOC emissions reduction requirements. The increase in ETBE use did not show up in the year 2000 because the lowest amount of oxygenate usage reflected was 1,000 barrels per day. However, EIA's forecast of oxygenate use to the year 2015 shows that ETBE usage increases after the year 2000. In fact, the table shows that 28,000 barrels per day of ETBE is predicted to be used in 2010.
20. See comment 12 above, which relates to this issue. We revised the note to table IV.1 to reflect that petroleum displacement would be lower given the extent of petroleum used to produce the oxygenates, as previously stated in the letter, and referred the reader to the Argonne National Laboratory report for further information on this issue.
21. We agreed with this comment and have revised the report.
22. We agreed with this comment and have revised the report.
23. We agreed with this comment and have revised the report.

The following are GAO's comments on the Department of Agriculture's letter dated May 16, 1996.
1. We agreed with this comment and have revised the report.
2. We agreed with this comment and have revised the report.
3. The cost-effectiveness studies that we reviewed use VOC reductions as a proxy for ozone reductions.
We state in our report that VOCs and NOx emissions are two of the more prevalent pollutants emitted by automobiles and are precursors to ozone pollution. We recognize in the background and other sections of the report that RFG helps to reduce VOC, NOx, and air toxics emissions.
4. We state in the referenced paragraph that RFG offers a number of benefits that low vapor pressure gasoline does not, including the reduction of air toxics and nitrogen oxides. We have revised this paragraph to make it clear that these benefits are in addition to VOC reductions, which are due in part to the lower vapor pressure of RFG.
5. This comment also responds to USDA's comment 15. Our report does not indicate that API believes that low vapor pressure gasoline is a cheap ozone control measure or that lowering the vapor pressure represents a major cost. In the text following table II.2 that USDA refers to, we point out that in cities that already use a low vapor pressure gasoline, the cost-effectiveness of adding an RFG requirement is higher. This is because some of the benefits of RFG were already obtained by using the low vapor pressure gasoline.
6. We agreed with this comment and have revised the report.
7. In this section, we gave the range of the price estimates for RFG compared to conventional gasoline prices—the low estimate cited by DOE and the high estimate cited by API. Appendix III.1 cites some of the reasons for API's higher price estimates. While API's estimate is at the high end of the range of estimates, it is largely within the range of prices actually experienced during the initial months of the RFG program. We agree, however, that to the extent API's estimated costs are higher than the actual costs experienced, its estimated costs to reduce pollutants would also be higher than actual.
8. We agreed with this comment and have revised the report.
9. While additional estimates of the cost-effectiveness of reformulated gasoline have been reported, and other estimates can be calculated, our objective was to identify and present cost-effectiveness data contained in major federal and other studies. Therefore, we made no change to the report.
10. We discussed this issue in detail with representatives from DOE and industry and concluded that varying industry practices make it difficult to assess the amount of petroleum used to produce oxygenates. As such, the displacement numbers presented likely represent the most petroleum displacement that can be expected. We revised the report to make this point clearer.
11. As our report indicates, the use of oxygenates could allow some refineries to operate their reformers at lower temperatures, thus increasing the amount of gasoline produced. We also point out, however, that DOE, EIA, and industry officials believe that any such increases industrywide are likely to be relatively small.
12. Addressing potential price changes of crude oil and gasoline resulting from the displacement of crude oil by oxygenates was beyond the scope of our review. While there may have been some downward pressure on crude oil prices resulting from less demand as oxygenates were introduced, the overall impact on gasoline prices has been an increase in price, as discussed in our report.
13. According to the author of DOE's Argonne National Laboratory study containing the information in question, USDA is incorrect in its position that renewable fuels such as ethanol necessarily emit fewer greenhouse gases than conventional gasoline.
The author pointed out that there are differing opinions regarding the amount of energy required to produce ethanol and that USDA's estimate is lower than that of EPA and DOE. According to the author, USDA's estimate of the greenhouse gas emissions from reformulated gasoline neglects to account for a number of sources of carbon dioxide-equivalent emissions resulting from the production and transport of the fuel. For instance, carbon dioxide emissions result from oil used by farming equipment, oil used to transport corn to ethanol plants, the production of fertilizer, and the burning of coal used in producing ethanol in processing plants.
14. Our report focused on the results of cost-effectiveness analyses done by EPA, API, Radian Corporation, and Sierra Research. We recognize in our report that a number of variables can affect the benefits and cost-effectiveness of the different measures for controlling VOCs and other air pollutants. We also point out that the costs and benefits across these studies are not measured uniformly, making it difficult to make comparisons among the control measures. However, the objective of our work was not to conduct our own analysis of the control measures, controlling for all the factors that may affect the results. We also discussed this issue in the agency comments section of our report.
15. See our response to comment 5.
16. The API study did not address whether the NOx cost estimates affect the winter particulate matter benefits associated with NOx controls.
17. The API study measured all VOC and NOx reductions in percentages rather than tons of reduction.
18. The API study did not indicate whether modernization costs were included as part of the cost estimates.
19. See our response to comment 7.
20. We agreed with this comment and have revised the report.

The objectives of our review were to (1) summarize the results of federal and other studies on the cost-effectiveness of using RFG compared to other automotive emission control measures and compare estimates of the price of RFG used in such studies with more recent actual experience; (2) summarize the results of studies estimating the potential for oxygenates to reduce the use of petroleum; and (3) summarize the ongoing federal research into biofuels, including any related past or projected cost-reduction goals and any increased demand estimates based on such goals.

To identify studies on the cost-effectiveness of using RFG compared to other automotive emission control measures, we interviewed officials from EPA, DOE, USDA, the petroleum industry, associations representing the petroleum, oxygenated fuels, and renewable fuels industries, state and local government agencies, and others. Several organizations have conducted cost-effectiveness studies of air quality control measures. We examined those studies that (1) reviewed the cost-effectiveness of RFG as well as other mobile source control measures and (2) contained original analyses. The four studies listed in appendix II were the only studies we found that met these criteria. To compare estimates of the price of RFG used in such studies with more recent actual price experience, we compared the price estimates used in the studies with actual RFG prices reported by DOE's EIA.

To determine what estimates were available on the potential petroleum displacement through the use of oxygenates in gasoline, we interviewed officials from DOE, the refinery industry, and associations representing the oil and oxygenated fuels industries.
Through these sources, we learned that DOE had the most comprehensive effort underway that would provide an estimate of the petroleum displacement potential of using oxygenated fuels. Accordingly, we obtained information on the use of oxygenates and its petroleum displacement potential from EIA and DOE's Office of Energy Efficiency and Alternative Fuels Policy. Because the Office had undertaken a study of the potential for replacement fuels to displace petroleum fuels by the years 2000 and 2010, we used those 2 years to show the estimated oil displacement from using oxygenated fuels.

We agreed with your office to identify any studies on the costs and benefits of using oxygenates versus aromatics as octane enhancers in gasoline and whether refiners were making appropriate cost comparisons between the use of oxygenates and aromatics. During this assignment, we informed your office that we had not been able to identify any such studies. According to the DOE officials, petroleum refining industry representatives, and petroleum industry associations we talked with, the costs and benefits of using oxygenates versus aromatics would vary greatly from refinery to refinery and are dependent on the economic and plant-capacity factors of each refinery. This makes it difficult to generalize about the appropriateness of refining decisions on using oxygenates or aromatics. Most of the officials we talked with, however, believed that refiners would act in their own economic interest in making this decision. We agreed with your office that no further work was needed on this issue.

To identify major federal research on biofuels, including any related production cost-reduction goals and the estimated use of biofuels based on such goals, we interviewed officials at DOE, USDA, representatives of the biofuels industry, and universities conducting biofuels research. We also met with officials at the Office of Technology Policy, Executive Office of the President; attended conferences related to biofuels; conducted literature searches; and reviewed and analyzed several reports and documents on biofuels. In addition, we interviewed officials at DOE's Oak Ridge National Laboratory and National Renewable Energy Laboratory, where DOE's most extensive biofuels research is conducted. We obtained information on past and projected cost-reduction goals achieved through biofuels research and development from officials at Oak Ridge National Laboratory, the National Renewable Energy Laboratory, DOE, and USDA. To identify the potential increased demand for biofuels, based on cost-reduction achievements, projections, and goals, we obtained estimates on the demand for biofuels from DOE's National Renewable Energy Laboratory. We did not evaluate the methodology and assumptions the National Renewable Energy Laboratory used to arrive at the demand estimates cited in this report.

A class of high-octane hydrocarbons that constitute a certain percentage of gasoline. The chief aromatics in gasoline are benzene, toluene, and xylene. In addition to concerns about the toxicity of benzene, some aromatics are highly reactive chemically, making it likely that they are active in ozone formation.

Biodiesel is a biofuel made from animal- and vegetable-derived oils that can be used as a substitute for or additive to diesel fuel. According to EPA, the use of biodiesel may increase some types of emissions but reduce others.

Biofuels are alcohols, such as ethanol, or other chemicals derived from biomass or living matter.
Current research is focused on developing biofuels from the starch in corn kernels or from the fibrous cellulosic materials in the rest of the corn plant; it also focuses on cellulosic plants, such as fast-growing trees or grasses, and waste products such as agricultural and forestry residues and municipal and industrial wastes.

This program, starting in 1998, will require certain fleets (in certain nonattainment areas) of 10 or more vehicles, which can be centrally fueled, to meet clean-fuel vehicle volatile organic compounds (VOC) and nitrogen oxides (NOx) emissions standards. These standards can be met through the use of alternative fuels such as compressed natural gas or through the use of reformulated gasoline (RFG).

More stringent vehicle emission testing and repair program that is required to be implemented in areas in the United States with more serious air pollution problems.

An automobile emission inspection and maintenance program that requires testing more vehicles than required by EPA.

An alcohol produced from starch or sugar crops, such as corn or sugar cane, or from cellulosic biomass materials. Ethanol may be used as a fuel by itself (an alternative motor fuel) or blended into gasoline to increase the octane of gasoline and increase the gasoline supply. In the United States, ethanol has been largely blended in a 10-percent mixture with gasoline to form gasohol. As an oxygenate, ethanol supplies oxygen to gasoline, which reduces carbon monoxide emissions from vehicles. Because ethanol is water soluble, it must be blended into gasoline outside the refinery, and it cannot be transported in the same pipelines with gasoline. In addition, ethanol increases the volatility of gasoline, thereby increasing evaporative emissions. These drawbacks can be overcome if ethanol is converted to its ether form, ethyl tertiary butyl ether.

An ether compound made using ethanol, which is used as a gasoline additive to boost octane and provide oxygen. Since ETBE has low vapor pressure, it could be useful in helping to comply with volatility controls on gasoline. Unlike alcohols, ETBE could be produced and blended with gasoline at the refinery and shipped in gasoline pipelines.

Gases, including carbon dioxide, water vapor, methane, nitrous oxide, and chlorofluorocarbons, that when emitted into the atmosphere threaten to change the earth's climate.

A California program that prescribes the maximum emissions permitted from new vehicles sold in that state.

More stringent automobile emission testing and repair program, which assumes that automobiles will meet appropriate emission standards over their useful life.

An ether compound made using methanol, which is used as a gasoline additive to boost octane and provide oxygen to help reduce carbon monoxide emissions. MTBE is the most widely used oxygenate in RFG. Unlike alcohols, MTBE could be produced and blended with gasoline at the refinery and shipped in gasoline pipelines, which has contributed to its wide use.

New standards and test procedures that EPA is required to promulgate to control vehicle emissions under summertime ozone conditions.

Technology on vehicles that allows an on-board computer to detect and record malfunctions in the emission control system, allowing more effective repair of vehicles with high VOC and NOx emissions.

The term applies to any gasoline additive containing oxygen. Oxygen in gasoline helps to reduce carbon monoxide, VOC, and air toxics emissions from vehicles.
Oxygenates include alcohols, such as ethanol, and ethers, such as ETBE and MTBE. Each of these compounds also enhances the octane of gasoline, while their effects on volatility vary.

Reforming is one refining process by which crude oil is converted into gasoline and other products.

Gasoline whose composition has been changed through fuel reformulation. The Clean Air Act Amendments of 1990 require certain fuel specifications and performance standards that RFG must meet to reduce air toxics and ozone-forming emissions in specified nonattainment areas. These areas were to start using RFG in January 1995, and in the year 2000, phase II RFG must be used, which further reduces VOC, NOx, and air toxics emissions. California RFG requirements are stricter than the federal RFG requirements.

This is a control measure for capturing the emissions of gasoline vapor during vehicle refueling and returning them to the storage tanks at service stations.

A control measure of gasoline volatility. Vapor pressure is expressed as pounds per square inch (psi), with higher pressure resulting in higher volatility of gasoline.

An ether compound made using methanol, which is used as a gasoline additive to boost octane and provide oxygen. Since it has low vapor pressure, TAME could also be useful in helping to comply with volatility controls on gasoline.

National VOC, NOx, and carbon monoxide emission standards that light-duty vehicles are required to meet.

Standards for certain light-duty vehicles and light-duty trucks to further reduce emissions. These standards would be more stringent national emissions standards that the federal government has the option of mandating beginning in model-year 2004.

A program that requires a portion of the California vehicle population to meet approximately 50 percent lower VOC emissions than the national VOC standards.

A program that further lowers VOC emissions for the California vehicle population beyond that required in the transitional low-emission vehicle program.

This program accelerates the removal from the fleet of older vehicles that have high mobile source emissions.

VOC and NOx emissions are two of the more prevalent pollutants that are emitted by motor vehicles and are precursors to the formation of ozone.

A California program that requires that, by 2003, 10 percent of vehicles marketed in that state must be zero emission vehicles. Currently, the electric vehicle produces essentially no pollution from the vehicle's tailpipe or through fuel evaporation. Several other states have adopted zero emission vehicle requirements.

Gasohol: Federal Agencies' Use of Gasohol Limited by High Prices and Other Factors (GAO/RCED-95-41, Dec. 13, 1994).
Energy Policy: Options to Reduce Environmental and Other Costs of Gasoline Consumption (GAO/RCED-92-260, Sept. 17, 1992).
Air Pollution: Oxygenated Fuels Help Reduce Carbon Monoxide (GAO/RCED-91-176, Aug. 13, 1991).
Alcohol Fuels: Impacts From Increased Use of Ethanol Blended Fuels (GAO/RCED-90-156, July 16, 1990).
Gasoline Marketing: Uncertainties Surround Reformulated Gasoline as a Motor Fuel (GAO/RCED-90-153, June 14, 1990).
Pursuant to a congressional request, GAO provided information on the cost-effectiveness of reformulated gasoline (RFG), focusing on: (1) the potential for oxygenates to reduce petroleum use; and (2) ongoing federal biofuel research. GAO found that: (1) RFG is more cost-effective than some automotive emission control measures; (2) the extent and nature of air pollution in any specific area determines whether certain pollution control measures are used individually or in combination with other control measures; (3) about 305,000 barrels of petroleum per day could be displaced by the year 2000; (4) this displacement amounts to nearly 3.7 percent of the estimated gasoline consumption for the year 2000 and 3.6 percent for 2010; (5) the Department of Energy is focusing its efforts on reducing the cost of growing and converting biomass feedstocks into ethanol, and the Department of Agriculture is focusing on reducing the cost of growing and converting agricultural feedstocks into ethanol; (6) advances in biofuels research have reduced the cost of producing ethanol from biomass crops; (7) further cost reductions in producing corn-based ethanol, and the subsequent demand for it, may be constrained by the price of corn and its many uses; and (8) the demand for ethanol will increase, assuming the successful development and commercialization of biofuels technology.
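As a rough cross-check of findings (3) and (4) above, the displacement and percentage figures can be reconciled with simple arithmetic. The short Python sketch below uses only the numbers reported in this summary; the implied total gasoline consumption it prints is a derived quantity for illustration, not an EIA-published figure.

```python
# Back-of-envelope check of the petroleum displacement findings above.
# The only inputs are figures quoted in this report; the implied total
# gasoline consumption is derived, not an EIA-published number.

displaced_bbl_per_day = 305_000     # barrels/day potentially displaced by 2000
displacement_share_2000 = 0.037     # ~3.7 percent of estimated consumption in 2000

# Implied gasoline consumption consistent with the two figures above.
implied_consumption = displaced_bbl_per_day / displacement_share_2000
print(f"Implied gasoline consumption in 2000: {implied_consumption:,.0f} barrels/day")
# ~8.2 million barrels/day, i.e., the displacement is a small share of total use.
```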
In 2001, we reported that the UN headquarters complex in New York City—built largely between 1949 and 1952—no longer conformed to current safety, fire, and building codes or to UN technology and security requirements. The UN General Assembly noted that conditions in the UN headquarters complex posed serious risks to the health and safety of staff, visitors, and tourists. Thus, in December 2006, after several years of design and planning, the UN General Assembly unanimously approved the CMP to renovate the UN headquarters complex, at a budget not to exceed $1.88 billion.

To finance the CMP, the UN General Assembly approved a strategy to assess member states for the cost of the CMP, under which they could choose to pay their assessment in either a lump sum or over a 5-year period, from 2007 to 2011. CMP assessments, whether collected as lump-sum or multi-year payments, were invested to earn interest income. The UN General Assembly also approved a $45 million working capital reserve to cover any temporary cash flow deficits. According to the CMP office, member states would receive this reserve back in the form of a credit at the end of the project's construction phase. The United States chose to pay its assessment for the CMP in five equal payments of $75.5 million per year starting in 2007, for a total of approximately $378 million. The United States also paid a separate assessment to the project's working capital reserve of about $9.9 million in 2007. In the resolution approving the CMP, the UN General Assembly decided that, in the event of cost escalations over the approved budget of $1.88 billion, member states would be subject to a further assessment to meet the revised requirements of the CMP.

The UN General Assembly approved the completion of the CMP's scope during the scheduled period of 2006 to 2014. This scope included the renovation of five buildings on the UN headquarters complex—the General Assembly Building, the Conference Building, the Secretariat Building, the Library, and the South Annex—as well as renovation of the basements connecting several of those buildings and the construction of a temporary conference building on the North Lawn of the complex. Figure 1 shows the existing buildings of the UN headquarters complex, along with the temporary conference building. To house UN staff during the renovation, the CMP included plans to lease swing space in nearby buildings. Additionally, the CMP included landscaping, demolition of the temporary conference building, additional blast protection, measures to promote environmental sustainability, and improvements to the reliability and redundancy of headquarters systems such as emergency power.

In several resolutions, the UN General Assembly noted that it has the sole prerogative to decide on any changes to the CMP's scope, budget, and implementation strategy. Since December 2006, the UN General Assembly has exercised this prerogative to make changes to the CMP or authorize changes proposed by the Secretary-General. These changes include:

Accelerated Strategy IV: In December 2007, the UN General Assembly approved an expedited strategy for the CMP known as accelerated strategy IV. Under this approach, the renovation was to proceed in two concurrent phases: one to renovate the Secretariat Building and one to renovate the Conference Building, General Assembly Building, and other buildings.
Under the previous approach, the UN had planned on renovating buildings in multiple phases, including renovating the Secretariat Building while it was 75 percent occupied. The accelerated strategy called for the temporary relocation of most of the staff of the Secretariat Building during the renovation— which required the CMP office to increase the amount of leased swing space—and expedited the schedule for the Secretariat Building’s renovation by reducing construction time from 6 to 3 years. The strategy also affected the schedules for the construction of the temporary conference building, as well as the renovation of the Conference Building and General Assembly Building. The CMP office reported that such an implementation strategy would reduce risks associated with the CMP. The CMP office also estimated that the strategy would produce an estimated cost overrun of $190 million, which it would seek to reduce through the process of value engineering. Associated Costs: In April 2009, the UN General Assembly decided that certain costs related to the CMP—known as associated costs— would be financed from within the $1.88 billion CMP budget. Associated costs cover a wide range of requirements, such as broadcast equipment, new furniture, and additional staffing requirements to manage information technology and security. According to CMP officials, these costs were originally expected to be funded by UN program offices through the regular UN budget process. Therefore, the CMP office’s original cost estimates for the CMP did not include new furniture or equipment except where the equipment was part of the permanent infrastructure of the UN. For instance, according to the CMP office, the original CMP scope only provided for furniture for three new mid-sized conference rooms and supplemental office furniture associated with swing spaces. While associated costs are funded from within the CMP budget, UN departments other than the CMP office manage these costs. For example, the UN Department of Safety and Security manages costs related to security. Prior to the UN General Assembly’s decision on associated costs, the CMP office reported that the CMP budget could not absorb associated costs without exceeding $1.88 billion. However, the UN General Assembly argued that the CMP office could realize further cost reductions that would enable the CMP to absorb associated costs. Secondary Data Center: In April 2009, the UN General Assembly requested that the CMP partially absorb costs associated with a secondary data center, including leasing a commercial facility and establishing a service delivery agreement to provide equipment and services. The secondary data center serves as a backup system to enable the UN to respond to emergency situations that may impair operations of critical elements of its information and communications technology infrastructure and facilities. In resolutions in April 2009 and December 2009, the UN General Assembly requested that the CMP budget absorb $16.7 million to fund the secondary data center. While the CMP nears completion of the renovation of two of the five buildings, the project has suspended the originally planned renovation of two buildings, faces risks meeting its 2014 completion date, and is projected to be approximately $430 million over budget. The CMP office may not renovate two buildings that were originally part of the scope of the project, due to the lack of a workable design solution to address security requirements. 
In addition, the CMP office predicts that it will complete the CMP by the end of 2014, but risks, such as a compressed schedule with work yet to be contracted, exist. Moreover, as of February 2012, the CMP office estimates that the project will be about $430 million over its approved budget of $1.88 billion—an increase of approximately 53 percent (approximately $149 million) from its last reported estimate. According to the CMP office, a number of factors, such as unforeseen conditions and complexities in the basements and Conference Building, contributed to the increase in projected cost overruns. The CMP office has proposed options to address a portion of these cost overruns; however, even if approved, additional funding will be needed to address the remainder. The United States could potentially use credits it has with the UN to fund an assessment related to the CMP.

The CMP office has nearly completed the first two building renovations of the CMP—the Secretariat and Conference Buildings—which began in 2010. By February 2013, both buildings are scheduled to be completely renovated and back in use. Specifically, the CMP office plans for the Secretariat Building to be primarily reoccupied and in use by November 2012. The CMP office predicts completion of the renovation of the Conference Building by the end of 2012, with the building reoccupied and in use in February 2013. The CMP office has reported a number of other achievements of the CMP, such as:

Modernizing 1 million square feet in the basements, including installation of chilled water piping, electrical conduit and wire, and telecommunication conduit and copper cable.

Redesigning the Conference Building to take into account enhanced security upgrades. According to the CMP office, the enhanced security upgrades include two major activities: structurally enhancing the Conference Building and associated basements to withstand blast threats and installing protective structures, including bollards and gates, along the perimeter of the UN complex. The CMP office anticipates that the enhanced security upgrades will be completed by 2014.

Substantially completing the removal and replacement of the glass curtain wall in the Secretariat Building, shown in figure 2.

The UN General Assembly has requested that the CMP office provide information on contracts awarded for the CMP. The CMP office posts information on contract awards on the UN Procurement Division and CMP websites. According to the CMP office, 85 percent of the value of CMP contracts has gone to U.S. firms.

Security requirements and concerns have led the CMP office to suspend originally planned renovations for two buildings—the Library and the South Annex. In 2010, UN security studies found these buildings to be vulnerable to vehicle blast threats. As of April 2012, CMP officials stated that they lacked a workable design solution to address these security concerns. Specifically, according to CMP officials, the only solution to the risk of blast threats would be to close a nearby highway exit ramp. However, based on discussions between the UN and the United States, the CMP office does not view this outcome as likely. To renovate the Library and South Annex to the required security standards, CMP officials told us that they would have to demolish the buildings and begin new construction. CMP officials also told us that since they do not have a viable renovation option for these buildings, they have not updated their initial design and cost estimates.
Absent a solution to the security vulnerabilities of the Library and South Annex, CMP officials told us that only limited use of the buildings would be possible. In May 2012, the CMP office reported that it plans to consult with UN departments affected by the suspension to determine where to relocate functions impacted by potentially not renovating the buildings. The CMP office expects to complete the CMP by 2014, but its schedule faces risks, such as a compressed schedule with some work yet to be contracted. As of February 2012, the CMP office estimates completing renovations by mid-2014, about 1 year behind the schedule it reported in October 2008. As shown in table 1, while the completion date for the project is still estimated to be mid-2014, the projected completion dates for key CMP activities have experienced delays for various reasons. CMP officials attribute schedule delays mostly to enhanced security upgrades added to the CMP in 2011. We reported in 2009 that security upgrades to the CMP represented a key risk to the project’s progress. According to the CMP office, implementing enhanced security upgrades to address security issues resulted in a delay of about 1 year in the schedule of the Conference Building. Although it reported a mid-2011 completion date as of October 2008, the CMP office now estimates that the Conference Building renovation will be completed in late 2012. According to the CMP office, despite delayed start dates for a number of activities, the CMP office has maintained a 2014 project completion date. However, the CMP office faces two key risks related to meeting this date: Compressed schedule. CMP officials noted that maintaining the 2014 project completion date while experiencing delays to the start dates for several projects has created a compressed schedule, which reduces the ability to develop workaround solutions if problems arise. For example, CMP officials identified the completion of the Conference Building renovation as a “critical path” of the project’s schedule, because renovations to the General Assembly Building cannot begin until those to the Conference Building are completed. Once the CMP office moves conference functions back into the Conference Building, it will reconfigure the temporary conference building to house the functions of the General Assembly Building while the General Assembly Building undergoes renovation. Previously, as a result of delays in the Conference Building’s schedule, the CMP office delayed the completion date of the General Assembly Building from mid-2013 to mid-2014. CMP officials said that the amount of time that the Conference Building renovation can be delayed without impacting the overall project’s completion date is minimal. Work yet to be contracted. The CMP office has yet to contract work for various remaining parts of the project and thus does not have agreed upon completion dates with the contractors that will be doing the work. For instance, as of March 2012, the CMP office reported that it had not committed any funds for the renovation of the General Assembly Building. CMP officials told us that conditions in the General Assembly Building—such as the potential for asbestos and weaknesses in the building’s concrete slab—also constitute potential risks. Additionally, the CMP has not fully contracted for renovation work in the basements. CMP officials have noted that renovation in the basements is linked to the overall renovations, as the basements house the infrastructure for the UN complex. 
CMP officials have described the work in this area as highly complex and have noted that to date it has taken longer than expected.

As of February 2012, the CMP office projected total cost overruns of about $430 million over the CMP's approved budget of $1.88 billion. According to the CMP office, the estimated cost overruns result from a number of factors, including about $266 million in direct project costs and about $164 million in scope additions authorized by the UN General Assembly to be financed from within the project's approved budget, as shown in table 2. Projected CMP cost overruns increased significantly between May 2011 and February 2012. The UN General Assembly described the increase as "sudden and unexplained." In October 2011, the CMP office reported that it had committed 84.5 percent of the CMP funding against the original $1.88 billion budget, which significantly reduced the risk of unexpected adverse events during the remainder of the project. As shown in table 3, estimated cost overruns increased by approximately 53 percent (roughly $149 million) between May 2011 and February 2012, driven primarily by direct costs to the CMP. Although the increase in estimated cost overruns reported in February 2012 is attributable to the direct costs of the CMP, a portion consists of costs added to the CMP over time by the UN General Assembly without a corresponding increase in the CMP budget—such as associated costs and the secondary data center. CMP officials told us that they assume responsibility for direct costs of the CMP—which include renovation, swing space, contingency, and escalation—but have no control over additional related costs added to the CMP. In explaining the reasons for the estimated cost overruns directly attributable to the project, the CMP office cited several factors, including the following:

Asbestos abatement. According to the CMP office, when the renovations began, the volume of asbestos found far exceeded its expectations. Moreover, new regulations enacted by New York City in 2010 made the abatement of that asbestos even more complicated and expensive.

Unforeseen conditions in the Conference Building. The CMP office reported that the actual construction of the concrete floor slabs in the Conference Building differed from the original design drawings. The construction of the concrete floor slabs required the CMP office to amend the design of the Conference Building. As of March 2012, the CMP office reported that it expected to find similar conditions in the General Assembly Building.

Complexities in the basements. The CMP office noted that work in the basements was more complex than expected due, in part, to limited documentation of the basement infrastructure and relocation of essential mechanical systems. For instance, the CMP office reported that UN documentation did not account for the large quantity of existing telephone, electrical, and security cables in the ceilings of the basements. According to the CMP office, each of these cables had to be individually tested to ensure that the CMP office did not remove active infrastructure, which was a labor-intensive process. Figure 3 shows examples of ceiling conditions in the basements before and after CMP renovations.

To address cost overruns of the CMP, the CMP office recommended that the UN General Assembly endorse two financing proposals. Specifically, the CMP office proposed utilizing the working capital reserve fund and the interest income on CMP funds.
As of February 2012, $45 million was available in the working capital reserve fund, and the interest income amounted to $107.2 million. As of May 2012, the UN General Assembly had not made a decision to approve the use of these funds, but the Advisory Committee on Administrative and Budgetary Questions had reviewed and supported the proposals. If the UN General Assembly approves the utilization of the working capital reserve fund and the interest income, these funds will cover about a third of the projected cost overruns, but cost overruns in the amount of approximately $277.7 million will still not be addressed.

The CMP office is also exploring options to further address estimated cost overruns by not fully renovating two buildings included in the original CMP renovation scope. With no solution to the security issues related to the Library and South Annex, CMP officials told us that they would propose limiting the scope of the renovations to these buildings. Rather than being renovated as originally planned, the Library and South Annex would only be connected to new building systems, such as heating and air conditioning. Based on the original cost estimate for these buildings, the CMP office estimates that not fully renovating the two buildings would eliminate $65 million in planned work, which could be applied to address projected cost overruns of the CMP. CMP officials also told us that they plan to explore additional opportunities to reduce work and achieve savings related to site landscaping and the General Assembly Building, but have not estimated the potential savings of these options. As shown in table 4, combining the proposed financing options with reductions in the project's planned scope would still leave the project with a shortfall of $212.7 million.

Another potential financing option is an additional member assessment. In the resolution approving the CMP, the UN General Assembly decided that, in the event of cost escalations over the approved budget of $1.88 billion, member states would be subject to a further assessment to meet the revised requirements of the CMP. The actual amount of such an assessment would depend on the decisions of the UN General Assembly regarding proposed financing and reduced scope options. The U.S. share of any future assessment would be 22 percent.

One potential option for funding all or part of an additional U.S. member assessment for the CMP would be using credits in the UN Tax Equalization Fund (TEF) account—a UN fund used to reimburse U.S. nationals working at the UN for U.S. taxes paid on their UN salaries. (For more information on the UN TEF, see appendix II.) According to the UN, as of December 31, 2011, there was a balance of $134 million in TEF credits attributable to the United States. This balance remained after the UN applied $100 million in TEF credits attributable to the United States to fund the enhanced security upgrades to the CMP in 2011. Congress has since passed legislation related to the use of TEF credits. The Consolidated Appropriations Act of 2012, passed in December 2011, required that TEF credits be available only for the United States' assessed contributions to the UN and be subject to the regular notification procedures of the Committees on Appropriations. State told us that it is complying with these provisions. In 2012, the U.S.
Mission to the UN requested that the UN apply $13.1 million of TEF credits attributable to the United States toward the United States’ regular UN budget assessment for calendar year 2011. After the application of these credits, the balance of TEF credits attributable to the United States stood at $120.9 million, as of May 2012. However, under this policy, TEF credits could be used to fund cost overruns of the CMP if the cost overruns are funded through a member assessment as called for by the resolution approving the CMP. In April 2012, the UN General Assembly issued a resolution expressing concerns regarding the transparency, timeliness, and clarity of the CMP’s February 2012 cost estimates. To address these concerns, the UN General Assembly requested that the CMP office improve reporting on the underlying causes of the projected CMP cost increases. While the UN General Assembly resolution did not specifically identify how the CMP office should report its future cost estimates, we have identified best practices associated with high-quality and reliable cost estimates. Applying these best practices, as appropriate, may address the UN General Assembly’s concerns regarding CMP cost estimates. After evaluating the CMP office’s February 2012 cost information, the UN General Assembly reported a number of concerns with these estimates, such as a lack of transparency, timeliness, and clarity. For example, with regard to transparency, member states inquired why the CMP office did not include $38 million in increased swing space leasing costs in earlier CMP cost estimates. The CMP office noted that it negotiated swing space leases for a period longer than necessary to mitigate the risk of CMP schedule delays. The CMP office did not include these costs in its earlier estimates because it assumed these leases could be terminated early or used by other UN departments in the event the CMP project no longer needed the swing space. According to the CMP office, in a healthy rental market, early termination or subleasing is common; however, the economic downturn prevented it from taking such actions. In addition, member states inquired about the main factors that led to the projected increase in cost overruns. According to the CMP office, a key factor of the projected cost overruns was increased asbestos abatement costs related to asbestos found in the basements and Conference Building in late 2011. However, the CMP office had previously reported that all asbestos was abated from Conference Rooms 1-3 of the Conference Building in February 2011. Further, the 2011 CMP annual report considered the abatement of asbestos and the removal of obsolete materials from the Secretariat and Conference Buildings a significant achievement. For additional information regarding the concerns of the UN General Assembly and issues raised by member states, see table 5. Officials from the U.S. Mission to the UN (USUN) also raised concerns with the explanation of the projected CMP cost overruns, both during and at the conclusion of the March 2012 session. For example, a U.S. representative at the March 2012 session asked about the amount and utilization of the remaining contingency fund for the CMP. While the CMP office reported that $89.1 million remained in funds for contingency and price escalation, this amount was as of May 2011, before the increase in estimated cost overruns reported in March 2012. 
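To make the funding arithmetic behind these discussions explicit, the following Python sketch reconciles the February 2012 overrun estimate with the proposed financing sources and scope reductions described above. All inputs are figures cited in this report; the U.S. share shown at the end is illustrative only and assumes the 22 percent assessment rate would apply to the full remaining gap.

```python
# Reconciliation of projected CMP cost overruns with proposed offsets.
# All inputs are figures cited in this report (millions of U.S. dollars).

projected_overrun = 430.0         # February 2012 estimate above the $1.88 billion budget
working_capital_reserve = 45.0    # proposed for reallocation to the CMP
interest_income = 107.2           # interest earned on CMP assessments
scope_reduction = 65.0            # from not fully renovating the Library and South Annex

gap_after_financing = projected_overrun - (working_capital_reserve + interest_income)
gap_after_scope_cut = gap_after_financing - scope_reduction

print(f"Gap after financing proposals: ${gap_after_financing:.1f} million")   # ~277.8
print(f"Gap after scope reductions:    ${gap_after_scope_cut:.1f} million")   # ~212.8
# The report's figures of $277.7 million and $212.7 million reflect rounding
# in the underlying estimates.

# Illustrative only: if the remaining gap were met through a further member
# assessment, the U.S. share at the 22 percent assessment rate would be roughly:
print(f"Illustrative U.S. share at 22%: ${0.22 * gap_after_scope_cut:.1f} million")   # ~46.8
```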
Moreover, USUN officials told us that despite the briefings and information provided by the CMP office, there was still insufficient information as to why and when the projected cost overruns occurred. While CMP officials told us that they could not currently quantify the individual cost drivers of the $149 million increase in projected cost overruns that occurred between May 2011 and February 2012, they stated that the February 2012 estimates were the best available. Further, they noted that it is difficult to attribute the causes for cost overruns to specific buildings. For example, asbestos abatement is a campus-wide activity that affects the cost of all building renovations.

After evaluating CMP cost estimates, the UN General Assembly issued a resolution in April 2012 requesting that the CMP office produce additional reporting related to CMP costs. Specifically, the UN General Assembly requested more information on the underlying causes of the projected cost increases and practical options to address them. While the UN General Assembly resolution did not explicitly identify how the CMP office should report future cost information, we have found that a high-quality and reliable cost estimate should exhibit certain best practices, including being comprehensive, well-documented, accurate, and credible. These best practices include elements for documenting and reporting cost estimates. For example, a cost estimate that is well-documented and accurate should allow for the cost estimate to be traced back to and verified against its sources and explain the variances between planned and actual costs. These best practices may also help address some of the concerns raised by the UN General Assembly regarding the CMP's cost estimates. For example, using the best practices associated with a well-documented cost estimate can improve an estimate's transparency, by capturing in writing such things as the source of the data used, the calculations performed, and the rationale for choosing particular estimating methods. Table 6 shows how the concerns of the UN General Assembly regarding the CMP's cost estimates could be addressed by using our best practices, as well as the potential benefits of this approach. CMP officials told us that they plan to present the additional information requested by the UN General Assembly in fall 2012. Applying these best practices, as appropriate, may help the CMP office as it prepares updated cost materials.

To address its future office space needs, the UN is considering the option of a new building that would be separate from the CMP, but it does not have an estimate of the project's costs. The UN estimates that its office space needs will exceed the capacity of its current real estate portfolio by 2023, due primarily to expiring leases. As a potential solution, the City and State of New York have proposed the construction of a new office building, to be located across the street from UN headquarters, known as the consolidation building. This proposal requires UN General Assembly approval, but the UN has not entered into any formal agreements regarding the building, and the current lack of a cost estimate makes its cost implications for the UN and its member states unclear. We have previously reported that reliable cost estimates are critical to program success, including informed resource investments. In September 2011, the Office of the Secretary-General completed a report on future office space accommodation needs for UN headquarters.
The study estimates that, as of 2014, the UN's real estate portfolio in New York will consist of approximately 3.4 million square feet of space—about 39 percent owned and 61 percent leased. The UN headquarters campus comprises the majority of the UN's owned space, with office space in the Secretariat Building, Conference Building, Library, basements, and General Assembly Building. The UN also leases space in various locations around its headquarters campus to accommodate staff that cannot be housed in its owned space. However, due to the combination of expiring leases and estimated staff growth, the Secretary-General's report estimates that by 2023 the UN's office space needs will exceed the capacity of the owned and leased buildings currently in its real estate portfolio.

Leases for the UN's two largest leased office spaces expire at the end of March 2018, with options to extend to the end of March 2023, but no renewal options beyond that date. The UN Development Corporation (UNDC)—a public benefit corporation of the State of New York whose mission is to provide office space and other facilities to help meet the current and future space needs of the UN—constructed these buildings in 1976 for use by the UN. The buildings provide approximately 670,000 square feet of office space, housing about 2,500 staff. The UN currently leases these buildings at below-market rates. According to the Secretary-General's report, renegotiating the leases beyond 2023 would likely result in lease rates set at market rates, rather than the favorable below-market rates currently enjoyed by the UN. Additionally, the Secretary-General's report projects that headquarters staff levels will increase from 10,711 in 2014 to 11,911 in 2023. Using the report's estimate that each additional staff member requires an additional 250 square feet of space, this increase will require an additional 300,000 square feet of office space. We have not independently verified the report's per-person space estimate. However, in October 2011, the UN's Advisory Committee on Administrative and Budgetary Questions found that a more in-depth and comprehensive analysis of the factors affecting the UN's space requirements was needed.

The City and State of New York have initiated a proposal to construct a new office building—known as the consolidation building—that could help the UN address some of its long-term office space needs, but the UN has not entered into any agreements on the proposal. In July 2011, the Governor of the State of New York signed legislation authorizing the City of New York to transfer parkland to UNDC to construct a new office building for the UN as large as 900,000 square feet and located across the street from UN headquarters. In October 2011, key officials of the City and State of New York entered into a memorandum of understanding (MOU) regarding the consolidation building. The MOU, to which UNDC consented, obligates UNDC to specific actions, including initial funding for and issuance of bonds to finance the project. Per the MOU, the property will not convey to the UN until UNDC and the UN reach agreement on the terms, with a deadline of December 31, 2015. According to UNDC officials, they would like to receive agreement from the UN by early 2014. UN officials told us that they were informed of the MOU by UNDC officials shortly before it was finalized and signed, but have not entered into a formal agreement regarding the consolidation building.
UN officials stated that they did not see the MOU prior to the City and State of New York signing it in October 2011 and therefore had no input to the document. Moreover, while the UN is not a party to the MOU, the document contains requirements to which the UN must agree for the consolidation building to move forward. For example, the UN would have to agree to lease the new office building from UNDC, potentially in a lease-to-own or similar arrangement. UN officials expressed concern that some of the terms of the MOU could increase costs and risks to the UN. For instance, according to UN officials, leasing the building would likely require the UN to pay an amount roughly equivalent to the bonds issued by UNDC to design and construct the consolidation building. UN officials told us that since they will not know the potential lease costs until the bonds are issued, they would like the option to opt out of the project upon review of the potential costs. Additionally, according to the MOU, as a condition of agreeing to lease the consolidation building, the UN would have to extend the leases at two of its largest leased spaces at increased rental rates and with additional costs. For instance, according to the terms of the UN’s current lease, its rates will increase from $27.50 per square foot (about $18.2 million per year) to $30 per square foot (about $19.8 million per year) if the organization exercises the option to extend its lease from 2018 to 2023. However, according to UN officials, under the MOU, the UN would have to extend the leases from 2018 to 2023 and its rates could rise to market rates, estimated by the UN to be approximately $77 per square foot. Additional costs include an amount equal to real estate taxes attributable to the space, which UN officials said was not originally included in the lease renewal terms. Finally, UN officials cited concerns related to “risk sharing” in the proposal. Specifically, officials expressed concern that the proposal places the entire risk for the cost of the project on the UN, rather than sharing the risk between the UN and UNDC. UN officials told us that they continue to discuss the consolidation building and its potential costs with UNDC officials. However, as of June 2012, the UN had not entered into a formal agreement regarding the consolidation building. UN officials told us that the UN General Assembly’s Fifth Committee, which reviews administrative and budgetary issues, plans to discuss options related to the consolidation building at its fall 2012 session. While the UN has held discussions with UNDC, neither organization has completed a cost estimate for the consolidation building. In October 2011, the UN’s Advisory Committee on Administrative and Budgetary Questions reviewed the Secretary General’s office space study. The committee noted that future UN space requirements could vary significantly depending on the underlying assumptions for estimating staff growth and space allowance per person, as well as alternative workplace policies. The committee also concluded that it was not fully convinced of the assumptions used to establish the baseline estimates of the UN’s future office space requirements. Moreover, the committee stated its desire to compare all potential options for future office space accommodation, and recommended that the Secretary-General complete a detailed cost analysis of the consolidation building comparing the potential cost of the building to other options. 
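The space and lease figures cited above lend themselves to a quick arithmetic check. The Python sketch below uses the report's staff projections, the 250-square-foot-per-person planning factor, the approximate 670,000 square feet of space in the two UNDC buildings, and the quoted lease rates; the annual amounts it derives are illustrative and differ slightly from the rounded figures in the text.

```python
# Quick check of the office space and lease arithmetic cited in this report.
# All inputs come from figures quoted above; derived values are illustrative.

staff_2014, staff_2023 = 10_711, 11_911
space_per_person_sqft = 250              # planning factor used in the UN study

additional_space = (staff_2023 - staff_2014) * space_per_person_sqft
print(f"Additional space needed by 2023: {additional_space:,} sq ft")   # 300,000

leased_sqft = 670_000                    # approximate space in the two UNDC buildings
for label, rate in [("current", 27.50), ("2018-2023 extension", 30.00), ("estimated market", 77.00)]:
    annual = leased_sqft * rate
    print(f"{label:>20} rate ${rate:>5.2f}/sq ft -> about ${annual / 1e6:.1f} million per year")

# Note: the report's rounded annual amounts (about $18.2 million and $19.8 million)
# imply a rentable area slightly below 670,000 sq ft; the difference is rounding.
```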
We have previously reported that a reliable cost estimate is critical to the success of any program. Such an estimate provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction when warranted, and accountability for results. While the UN’s recommendations did not clarify what to include in the cost estimate for the consolidation building, our research has identified a number of best practices that form the basis of effective program cost estimating and should result in reliable and valid cost estimates that management can use for making informed decisions. As noted earlier, a high-quality and reliable cost estimate is comprehensive, well-documented, accurate, and credible. For example, a comprehensive cost estimate should include all life-cycle costs of a project, document all cost-influencing ground rules and assumptions affecting the estimate, and completely define the program and its schedule, among other best practices. See table 7 for the best practices associated with a high-quality and reliable cost estimate. UN officials told us that they plan to conduct a cost analysis of the consolidation building. However, as of June 2012, the UN had not completed such an estimate. A cost estimate using our best practices could assist the UN in predicting the level of confidence in meeting the project’s budget by quantifying risks and uncertainties associated with the project. Such an estimate gives decision makers perspective on the potential variability of the estimate, should facts, circumstances, and assumptions change. We have found that, without the ability to generate reliable cost estimates, projects risk experiencing cost overruns, missed deadlines, and performance shortfalls. As a result, absent a completed cost estimate for the consolidation building, the potential cost implications for the UN and its member states are not clear. As the CMP nears completion of the renovations of the Secretariat and Conference Buildings, the project is estimated to be approximately $430 million over budget and risks remain as some renovations have yet to begin. Financing options exist to address a portion of the projected cost overrun; however, the United States and other UN member states may be asked to provide an additional assessment to finance the remainder. Aware of this risk, the UN General Assembly has requested that the CMP produce additional reporting on its costs. We have found that the best practices of developing high-quality and reliable cost estimates help inform decisions to manage capital projects effectively. Given the cost overruns and challenges of the CMP, as well as the risks and unknown costs associated with the UN’s potential consolidation building project, these practices should be used to enhance the CMP’s future cost estimates and to develop cost estimates of prospective projects to address the UN’s long-term space needs. Such an approach would likely improve the quality and reliability of cost information provided to the UN and its member states, as well as help decision makers evaluate costs and risks associated with these projects. To improve the quality and reliability of information provided to the UN and its member states, we recommend that the Secretary of State and U.S. Permanent Representative to the United Nations work with other member states to take the following two actions: 1. 
Direct the CMP office to implement, as appropriate, GAO's best practices for cost estimation when it updates information on CMP costs. 2. Direct the UN to ensure the development of a cost estimate for the consolidation building utilizing GAO's best practices for cost estimation. We provided a copy of this report to State and the UN for review and comment. State and the UN provided written comments, which are reproduced in appendixes III and IV, and technical comments, which we have incorporated as appropriate. State concurred with our recommendations and expressed its concern that projected cost overruns of the CMP had grown to approximately $430 million. State also noted that it is not actively considering the use of TEF credits to address a U.S. share of a potential additional assessment for the CMP since member states have yet to decide on proposed funding options to address cost overruns. However, given that the estimated cost overruns of the CMP would still be approximately $212.7 million even if the UN approves the use of proposed funding sources, we maintain that an additional member assessment may be needed and that TEF credits attributable to the United States are a possible source of funding for such an assessment. The UN noted that our report was an accurate assessment of the status of the CMP and that it provided constructive recommendations. We are sending copies of this report to interested congressional committees, the Secretary of State, the U.S. Mission to the United Nations, and the UN. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Thomas Melito at (202) 512-9601 or melitot@gao.gov, or David Wise at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report provides information on the progress of the United Nations (UN) Capital Master Plan (CMP) and the status of the UN consolidation building. Specifically, we examine (1) the extent to which the CMP is meeting its planned renovation scope, schedule, and budget; (2) the UN General Assembly's evaluation of CMP cost estimates; and (3) the status of the UN consolidation building project. To address our objectives, we reviewed and analyzed relevant planning, schedule, and budget documents related to the CMP, as well as relevant planning and legal documents related to the consolidation building. Additionally, we discussed the progress, plans, risks, and costs of the CMP and consolidation building project with officials from the Department of State's (State) Bureau of International Organizations, the U.S. Mission to the UN, New York City, and UN offices, including the CMP office and Central Support Services. We also discussed efforts related to the consolidation building project with the UN Development Corporation, a public benefit corporation created to develop and operate office space for the benefit of the UN. We focused on these agencies because they are involved in the CMP and the UN consolidation building project. To examine the extent to which the CMP is meeting its planned renovation scope, schedule, and budget, we analyzed documents such as CMP annual reports, UN Board of Auditors reports on the CMP, and UN General Assembly resolutions.
We compared current planned renovation scope, projected completion dates, and cost estimates with previously reported scope, schedule, and budget projections. For our baseline comparison, we referred to UN General Assembly resolutions that approved the planned renovation scope and schedule from accelerated strategy IV in 2007 and the $1.88 billion budget for the CMP in 2006. Further, we examined other relevant CMP documentation, including information on risk assessments, monthly reports, and procurement information. To understand the project’s cost estimates, we examined materials provided by the CMP office to the UN General Assembly’s Fifth Committee documenting the project’s financial condition as of February 2012, and analyzed reports on CMP progress and associated costs produced by the Advisory Committee on Administrative and Budgetary Questions and the Program Planning and Budget Division. We also discussed these costs and the CMP’s integrated master schedule with CMP officials. To understand options for funding projected CMP cost overruns, we reviewed UN Financial Rules and Regulations, UN Financial Report and Audited Financial Statements, and relevant congressional requirements in Appropriations Law, such as the Consolidated Appropriations Act of 2012. Further, we held discussions with officials from the CMP office, the UN Program Planning and Budget Division, UN Board of Auditors, and State’s Bureau of International Organizations to understand the various options that the United States could utilize to finance its portion of projected CMP cost overruns. We also traveled to New York City, New York, to tour the renovation sites and observe the progress of the CMP. During these visits, we met with officials from the CMP office, various UN departments—Program Planning and Budget Division, Board of Auditors, Office of Internal Oversight Services—and the U.S. Mission to the UN to discuss the ways in which the CMP is meeting its planned renovation scope, schedule, and budget. To examine the UN General Assembly’s evaluation of CMP cost estimates, we reviewed and analyzed documents provided by the CMP office to the UN General Assembly’s Fifth Committee describing the project’s financial condition as of February 2012, UN General Assembly resolution 66/258 issued in April 2012, the 2011 CMP annual report proposing financing options, and the Advisory Committee on Administrative and Budgetary Questions report A/66/7/Add.11 on costs of the CMP. Further, we analyzed the extent to which best practices for cost estimating from our Cost Estimating and Assessment Guide could potentially address concerns raised by the UN General Assembly with regard to the cost information provided by the CMP office. We did not conduct a full assessment of the CMP’s February 2012 cost estimates, as (a) the estimates were updated projections provided in response to questions from the UN General Assembly’s Fifth Committee during briefings, rather than comprehensive cost estimates; and (b) the CMP office intends to provide a full report on the project’s costs, including new cost estimates, in fall 2012. Although we did not audit the CMP cost data and are not expressing an opinion on them, based on our examination of the documents received and our discussions with cognizant officials, we concluded that the data were sufficiently reliable for the purposes of this engagement. We also held discussions with officials from the CMP office, UN Program Planning and Budget Division, UN Board of Auditors, and the U.S. 
Mission to the UN on a number of factors affecting CMP cost estimates. To examine the status of the UN consolidation project, we analyzed the memorandum of understanding (MOU) signed between the City and State of New York to identify actions required by the MOU. Additionally, we reviewed UN documents such as the Secretary-General’s Feasibility Study on the United Nations Headquarters Accommodation Needs 2014- 2034 and a related report by the Advisory Committee on Administrative and Budgetary Questions to understand the UN’s long-term office space needs. We conducted interviews with officials from New York City, the UN Development Corporation, and the UN regarding negotiations related to the consolidation building and lease costs for buildings potentially affected. Further, we reviewed how our best practices for cost estimating could provide insight on potential project costs to inform UN decision making. We conducted our work from January 2012 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform our work to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. The United States annually pays assessed contributions to the UN General Fund to support the UN’s programs and activities. One of these activities is a staff assessment, which is an amount deducted from the gross pay of all UN employees and used to fund the UN Tax Equalization Fund (TEF). The UN established the TEF to equalize the net pay of all UN staff members whatever their national tax obligations. While most UN employees are exempt from paying income tax on their UN earnings in their home country, some UN employees, including U.S. nationals, are not. For member states that levy income taxes on the earnings of UN employees, such as the United States, contributions to the TEF are first used to reimburse UN employees for the taxes they paid on their UN income. Unused TEF credits remain as a balance in a member state’s TEF account. The UN reports TEF credits on a biennial basis. According to U.S. and UN officials, various factors, such as modifications in U.S. tax laws or changes in the number of U.S. employees at the UN, can result in TEF credits or debits in a member state’s account. As shown in table 8, credits in the TEF attributable to the United States and reported by the UN rose by over $160 million between 2001 and 2009—from $17.6 million to $179 million. If a member state’s TEF account has a balance, the Financial Rules and Regulations of the UN state that such a balance shall be credited against the mandatory assessed contributions due from that member state the following year. However, notwithstanding UN financial regulations that TEF credits should be applied toward a member state’s assessed contributions, TEF credits attributable to the United States were applied to fund enhanced security upgrades to the CMP. In October 2010, the UN requested State’s endorsement of the use of up to $100 million of TEF credits accrued in prior years. In a January 2011 letter to the UN, State acknowledged the UN’s use of up to $100 million in U.S. TEF credits described as “attributable to annual U.S. regular budget contributions” to fund the enhanced security upgrades. This transaction differs from previous uses of TEF credits. 
For example, State has previously requested that TEF credits be applied toward assessed contributions for the UN. Specifically, we reported that in 1997 the U.S. payment for its regular budget assessment included a $27.3 million credit from surplus funds in the TEF. TEF credits attributable to the United States have thus previously been applied toward U.S. assessed contributions; in the case of the enhanced security upgrades, however, the credits were used for a different purpose. 1. We maintain that an additional member assessment may be needed and that Tax Equalization Fund credits attributable to the United States remain a possible source of funding for such an assessment. The Capital Master Plan's (CMP) projected cost overruns are approximately $430 million; even if the United Nations (UN) General Assembly approves the use of proposed funding sources and reductions in planned renovations, the estimated cost overruns of the project would still be $212.7 million. In the event of cost escalations over the approved budget of the CMP, the UN General Assembly decided that member states would be subject to a further assessment. The U.S. share of any future assessment would be 22 percent. 2. Our report makes clear that the CMP project is separate from the consolidation building proposal. However, we maintain that regardless of whether the UN directly manages the construction of the consolidation building, a sound cost estimate should be developed as the UN will be responsible for financing the building should it agree to its construction. In addition to the contacts named above, Maria Edelstein, Assistant Director; Biza Repko; Adam Yu; Mark Dowling; Debbie J. Chung; Jason Lee; and Karen Richey made key contributions to this report. Joshua Ormond provided technical assistance. United Nations: Renovation Still Scheduled for Completion in 2013, but Risks to Its Schedule and Cost Remain. GAO-09-870R. (Washington, D.C.: July 30, 2009). United Nations: Renovation Schedule Accelerated after Delays, but Risks Remain in Key Areas. GAO-08-513R. (Washington, D.C.: April 9, 2008). Update on the United Nations' Capital Master Plan. GAO-07-414R. (Washington, D.C.: February 15, 2007). United Nations: Renovation Planning Follows Industry Practices, but Procurement and Oversight Could Present Challenges. GAO-07-31. (Washington, D.C.: November 16, 2006). United Nations: Early Renovation Planning Reasonable, but Additional Management Controls and Oversight Will Be Needed. GAO-03-566. (Washington, D.C.: May 30, 2003). United Nations: Planning for Headquarters Renovation is Reasonable; United States Needs to Decide Whether to Support Work. GAO-01-788. (Washington, D.C.: June 15, 2001).
In December 2006, the UN approved a $1.88 billion CMP to modernize its headquarters in New York City by 2014, with a scope to include the renovation of five buildings. Separately from the CMP, the UN is also considering the option of a new office building, known as the consolidation building, to be located across the street from UN headquarters. As the UN’s largest contributor, the United States has a significant interest in these projects. GAO was asked to report on (1) the extent to which the CMP is meeting its planned renovation scope, schedule, and budget; (2) the UN General Assembly’s evaluation of CMP cost estimates; and (3) the status of the consolidation building project. To perform this work, GAO reviewed cost and schedule documents for the CMP, as well as planning and legal documents for the consolidation building; examined relevant UN financial documents and UN General Assembly resolutions, as well as GAO’s best practices for cost estimation; and met with officials from the Department of State (State), the UN CMP office and other relevant UN departments, and New York City. The Capital Master Plan (CMP) has made progress, but may not deliver the project’s original scope, faces risks meeting its scheduled completion date, and is projected to be about $430 million over budget as of February 2012. Regarding the project’s scope, the CMP office may not renovate the Library and South Annex—two of the five buildings in its original scope—due to the lack of a workable design solution to address security concerns. Related to schedule, the CMP office expects to complete the CMP in 2014, but reports that previous schedule delays have reduced its ability to respond to unforeseen events without affecting the project’s end date. According to the CMP office, the project’s approximately $430 million in projected cost overruns are due to a number of factors, including about $266 million in direct project costs and over $164 million from scope additions authorized without a corresponding increase in budget by the United Nations (UN) General Assembly. The CMP office has proposed financing options that could address a portion of these cost overruns. However, even if approved, an additional member assessment may be needed. One option for funding the U.S. portion of an additional member assessment is the use of credits attributable to the United States in the UN Tax Equalization Fund (TEF)—a fund used to reimburse U.S. nationals working at the UN for taxes paid on their UN salaries. According to the UN, as of May 2012, the balance of TEF credits attributable to the United States stood at $120.9 million. After evaluating the CMP’s cost estimates, the UN General Assembly issued a resolution in April 2012 stating that the estimates lacked transparency, timeliness, and clarity. For example, the UN General Assembly expressed concern about the lack of clarity regarding the renovation of the Library and South Annex buildings. Specifically, member states inquired about the schedule for the two buildings and why renovations to the buildings were delayed. To address these concerns, the UN General Assembly requested that the CMP office improve reporting on projected CMP cost increases. While the UN General Assembly resolution did not specifically identify how the CMP office should report its future cost estimates, GAO has identified best practices for high-quality and reliable cost estimates. 
For instance, a well-documented cost estimate should describe in detail how the estimate was developed and the methodology used. Applying these best practices, as appropriate, could address the concerns raised by the UN General Assembly regarding the CMP's cost estimates. To address its future office space needs, the UN is considering the option of a new building that would be separate from the CMP, but it does not have an estimate of the project's costs. The UN estimates that by 2023 its office space needs will have exceeded the capacity of its current real estate portfolio, primarily due to expiring leases. As a potential solution, the City and State of New York have proposed the construction of a new building known as the consolidation building. The UN has indicated its willingness to consider this proposal, but has not entered into any formal agreements. The current lack of a cost estimate for the consolidation building makes its cost implications for the UN and its member states unclear. GAO has previously reported that reliable cost estimates are critical to program success because they provide the basis for informed investment decisions. The Secretary of State and the U.S. Permanent Representative to the United Nations should work with other member states to direct the CMP office and the UN to utilize best practices identified by GAO when developing cost estimates for the CMP and the consolidation building. State and the UN concurred with GAO's recommendations.
Before discussing the specifics of DOE trade missions, I would first like to provide some context by reviewing DOE's statutory authority for conducting overseas trade missions and its role within the federal export promotion apparatus. According to DOE, the Secretary was given explicit statutory authority to undertake export promotion activities under various legislative enactments, including the Export Enhancement Act of 1992 and the Energy Policy Act of 1992. We have reviewed this legislation and agree that the Secretary has the authority to conduct export promotion activities, including trade mission activities. Regarding its role in the federal export promotion apparatus, DOE is a member of the interagency Trade Promotion Coordinating Committee (TPCC), whose role is to increase the effectiveness and coordination of all activities involving government promotion of exports. TPCC is chaired by the Commerce Department and comprises 19 federal agencies. According to the TPCC's latest annual report, DOE provided about $14 million in funding for export promotion in fiscal year 1995, making it one of the smallest TPCC players in terms of funding. Federal export promotion funding totaled about $3.1 billion in fiscal year 1995. Three federal agencies—the U.S. Department of Agriculture, the U.S. Export-Import Bank (Eximbank), and the Department of Commerce—accounted for about 90 percent of all federal export promotion funding for fiscal year 1995. DOE's high-level advocacy on behalf of U.S. energy companies is conducted in emerging energy markets like China, India, and Pakistan. According to DOE, each of these countries will need new sources of energy in the coming years, representing a huge potential market for U.S. businesses. For example, DOE anticipates that China will need an estimated 100,000 megawatts of new electric power generation over the next 5 years, with each new 1,000-megawatt power plant generally valued at $1 billion. In addition, India is expected to need more than an estimated 140,000 megawatts of new electric power by 2007, requiring an investment of about $200 billion. According to DOE, overall, Asian economies alone are expected to spend as much as $1 trillion on power-related infrastructure over the next 15 years, and U.S. cutting-edge technologies in the electric power, renewable energy, and energy efficiency fields provide important opportunities for the United States to compete for this business. DOE's high-level advocacy is also a response to similar advocacy efforts that foreign governments conduct in energy markets. TPCC reports that competitor industrialized nations perform similar export promotion activities and that foreign governments are increasingly aggressive in helping their firms compete for major projects in foreign markets. Foreign governments use a variety of tactics, including performing high-level advocacy, providing project financing (including low-interest-rate loans and corporate financial assistance), and making promises of technology transfer and aid funds in order to obtain projects for their own companies. For instance, in January 1996, the Canadian Prime Minister and seven ministers took 300 business representatives from a variety of industry sectors to India. Advocacy is not just limited to our major industrialized competitors. In August 1995, a Malaysian cross-sectoral trade mission of 250 high-level government officials and business executives visited South Africa at the same time that the U.S. Secretary of Energy's trade mission was visiting the country.
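DOE's figures for China imply an order-of-magnitude market estimate. The following arithmetic is our own illustration based on the per-plant valuation DOE cites; it is not a total DOE itself reported:

\[
\frac{100{,}000\ \text{MW}}{1{,}000\ \text{MW per plant}} \times \$1\ \text{billion per plant} = 100 \times \$1\ \text{billion} = \$100\ \text{billion over 5 years}
\]

This is consistent with DOE's characterization of these markets as a huge potential opportunity for U.S. businesses, although, as discussed below, the U.S. share of any such sales is far harder to estimate.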
In general, several factors make it difficult to quantify the precise impact of federal advocacy activities: (1) The determination of whether the sales generated through trade missions are additional to what would have been exported in their absence is not always clear. (2) Numerous participants (U.S. government agencies as well as foreign governments) may be involved in a single project. This makes it difficult to identify and isolate the contribution of any one participant. (3) Figures used to quantify the success of trade missions, particularly if they are based on tentative business agreements such as letters of intent or memorandums of understanding, may be speculative. (4) The calculation of the value of follow-on sales agreements and maintenance contracts that can flow from the introduction of U.S. engineering and technological standards is difficult. These sales can be as significant in monetary terms as the original sales contract. TPCC has recognized some of the difficulties in measuring the results of export promotion programs and has tasked a TPCC working group to develop better performance measures for these activities. An update of working group activities will be provided in the next TPCC annual report due for release in September 1996. DOE has also recognized some of the uncertainties associated with this issue and is now reviewing its estimation practices. Despite the difficulties in measuring the impact of federal advocacy activities, DOE has reported the results of its advocacy based on the value of signed business agreements. In a December 28, 1995, letter to the Chairman of this Committee, the Secretary of Energy stated that the Secretary’s four trade missions resulted in $19.7 billion in potential and finalized agreements. These agreements include memorandums of intent or understanding (the first and necessary step to any business deal), fuel supply and power purchase agreements for power plants, oil and gas exploration and production agreements, and other steps necessary to advance business deals. According to DOE, this was the total estimate of deals signed, as reported by the U.S. companies on these missions. As you requested, we reviewed DOE’s estimates of the impact of its advocacy. In response, I would like to clarify what the $19.7 billion is and what it is not. The $19.7 billion is the total potential value of business agreements signed during the four trade missions led by the Secretary, two follow-up trade missions that were led by the Secretary or Deputy Secretary, and several follow-up visits of foreign trade delegations to the United States (see app. I). The $19.7 billion estimate is not the finalized value of deals to the United States or the value of U.S. exports. Moreover, for some of the agreements that have been finalized, the U.S. export value is substantially less than 50 percent of the project’s total exports. DOE has reported that of the $19.7 billion in agreements, about $2.03 billion in business agreements have reached either “financial closure” or “sales agreement,” that is, have been finalized. In an effort to clarify what this number represents, we conducted an independent review of the 14 business deals that DOE used as the basis for the $2.03 billion estimate (see app. II). As part of this process, we reviewed DOE documents and interviewed government officials. We also interviewed business representatives from most of these companies and studied their written responses to questions posed by this Committee. 
We studied related business filings, annual reports, and business journal articles for these deals as well. Although we are including private-sector estimates of the potential value of U.S. exports associated with these deals, we caution that these projections are inherently uncertain. Our review of the likely composition of the 14 deals makes it clear that the $2.03 billion figure that DOE reported should not be confused with the potential U.S. export value of the deals. For example, the largest single deal reported by DOE is a $660-million power project in Pakistan with an estimated U.S. export value of about $218 million (over 30 percent of the total project value), which represents virtually all of the total exports associated with the project, according to Eximbank officials. The Eximbank provided financing for this project. In some of the cases, the U.S. export value is substantially less than 50 percent of the total exports associated with the agreements. For example, three power plant projects valued at about $950 million comprise about 47 percent of the $2.03 billion: Two power projects in Pakistan, sponsored by the same company, have a total value of $700 million and estimated exports of $400 million. The estimated U.S. export value is about $80 million (20 percent), according to company officials and the financing documents we reviewed. Japan's Export-Import Bank and Mitsubishi Heavy Industries are major participants in financing and constructing these projects, which suggests that Japanese companies will receive a significant share of the sales. One $250-million power plant in India has estimated exports of about $160 million. The estimated U.S. export value is about $40 million (25 percent), according to a company official. The U.K.'s Export Credits Guarantee Department and the U.K. company Rolls-Royce are major participants in this project, which suggests that U.K. companies will receive a significant share of the sales. While examining these U.S. export content issues, we noted that DOE does not have or use guidelines that specifically incorporate U.S. content considerations as a basis for selecting businesses on DOE-led trade missions. The Commerce Department developed advocacy guidelines in 1993, in response to the increasingly complex nature of international transactions, to assist U.S. government personnel in determining whether and to what extent U.S. government support is appropriate in advocating for individual projects. The guidelines place a premium on U.S. content, including employment, in the determination of whether and to what extent a given project is considered to be in the national interest. Company representatives who participated in the missions generally supported the Secretary's efforts. Although several of the company officials we interviewed said their completed business agreements would have occurred without DOE's involvement, many also said that their projects were accelerated as a result of the trade missions. Others, including some Commerce Department officers stationed in the four overseas posts that DOE visited, cited such intangible benefits as increased credibility with foreign officials and the opportunity to establish new or high-level contacts with business and government officials. Now let me turn to the administration of DOE's trade missions.
The procedures that DOE used for chartering aircraft, recovering costs from nonfederal participants, approving the travel expenses of certain nonfederal travelers, and obtaining services from U.S. embassies were weak. These procedures have been the subject of critical reports from our office and the DOE Inspector General (IG). Our recent work highlights issues of continuing concern. According to program officials, the planning for these missions was complicated by time constraints and frequent, last-minute changes in plans. These planning difficulties were further compounded by DOE’s lack of familiarity with the requirements for conducting large, overseas trade missions. We noted that the Secretary’s first trade mission, the mission to India, took place less than 2 months after President Clinton made a commitment to send a high-level mission to India during Prime Minister Rao’s May 1994 state visit. DOE’s second trade mission, to Pakistan, took place less than 3 months after the India trip. According to DOE officials, “heroic” efforts were sometimes needed to overcome the ad hoc planning process to ensure that the missions were completed on schedule. DOE has recognized these inadequacies and in March 1996 introduced some new, interim international travel policies and procedures to address these management weaknesses. These new procedures are designed to help assure that DOE’s future international missions are more cost-effective and better managed, but they have yet to be fully tested in practice. A DOE official told us that DOE believes that the newly designed procedures are adequate to ensure that taxpayers’ interests are protected. The costs of air transportation services represent the largest expense of the four DOE missions. DOE’s total cost of the four missions was about $2.8 million (see app. III). According to program officials, DOE used an evolving process for obtaining air transportation services for the four trade missions. For the July 1994 India trip, DOE used a Department of Defense VC-137, the military version of the Boeing 707. DOE managed the fare collections from the non-DOE passengers. Passengers were billed after the trip was completed. For the September 1994 Pakistan trip, DOE chartered a DC-8 through a charter agent. DOE used a Department of the Interior working capital fund as the mechanism to pay for the charter aircraft and to collect fares from the federal and nonfederal travelers. For the February 1995 China trip, DOE’s contract travel agency, Omega Travel, chartered a DC-8 through a charter agent. DOE assisted Omega in chartering the aircraft and collecting the fares from the nonfederal passengers. For the August 1995 South Africa trip, DOE chartered a DC-8 through a charter agent. The charter agent managed the fare collections for all passengers. Government Transportation Requests were used as the vehicle for paying DOE’s costs of the charter aircraft. DOE justified the use of charter aircraft for the trade missions because of a special need for planning and conferencing facilities during enroute travel. According to DOE, no scheduled commercial airline service could fulfill this need. In at least one instance, DOE did not fully comply with the requirements of federal regulations devised to help ensure the efficient and effective management and use of government aviation resources. 
Provisions of the Federal Property Management Regulations require advance written approval for travel on government aircraft by DOE's General Counsel or his principal deputy on a trip-by-trip basis. Although such approval was obtained for the India and South Africa trips, it was not obtained for the Pakistan trip or the China trip. DOE acknowledged that prior written approval should have been obtained for the Pakistan trip. DOE officials said prior written approval was not needed for the China trip because it did not involve the use of a DOE-chartered aircraft but instead the DOE purchase of seats for federal travelers from a General Services Administration (GSA) contractor. DOE stated that GSA advised DOE at the time that the regulatory requirement for General Counsel approval was not applicable to this situation. It is clear that using military and charter aircraft added to the costs of the trips. We compared the government cost of using charter aircraft to regularly scheduled commercial air service using cost estimates and related information developed by DOE before each trip. We estimate that the decision to use the military and charter aircraft increased the cost to the government by at least $588,435 (i.e., the savings if the government-funded travelers had used commercial air carriers for each of the four trade missions (see app. IV)). DOE said that security considerations on the India trip and the need for conferencing facilities on all the missions precluded the use of commercial aircraft. DOE efforts to recover costs from the trade missions' nonfederal participants have also been problematic. Although DOE established a policy of full-cost recovery after the India trip, as of March 26, 1996, it has yet to completely realize this goal. It still has a total of $50,646 in accounts receivable from the first two trips ($29,646 from the Pakistan trip). On the last two missions, collecting fares was the responsibility of the company that chartered the aircraft. I would also like to point out that DOE paid $50,595 to cover the additional cost of a scheduled trip to Kimberley and the cost of an unplanned stop in Cape Town on the South Africa trip. None of these costs were passed on to the other nonfederal travelers. A DOE official said DOE did not attempt to recover the additional costs because DOE was responsible for making the decisions that added to the costs. The official also said DOE would face a loss of credibility with the U.S. business community if it attempted to recover the additional costs of these trips after the travelers had already been billed. I would now like to take a few moments and discuss DOE's handling of "invitational travelers" on its trade missions. The term "invitational traveler," as used in this testimony, refers to those nonfederal travelers who participated in the missions and had their travel expenses paid for by DOE (see app. V). This term does not refer to the private-sector representatives who participated on these missions and paid their own way. The regulations governing DOE's payment of travel expenses of "invitational travelers" are contained in 10 C.F.R. Part 1060.
The regulations state that DOE may pay the travel expenses of a nonfederal traveler provided that the person receives an invitation from DOE to confer with a DOE employee "on matters essential to the advancement of DOE programs or objectives." If the meetings occur at a place other than the conferring employee's post of duty, a principal departmental officer (the DOE Secretary, Deputy Secretary, or Under Secretary) must have approved and stated the reasons for the invitation in writing before the travel takes place. The regulations also permit payment of such travel expenses where a principal departmental officer has determined in writing that "it is in the interest of the Government to provide such payment," and DOE's General Counsel has determined in writing that the payment is authorized by statute. The duties to be performed by a principal departmental officer cannot be delegated. In 77 percent (17 of 22) of the cases, DOE did not provide documentation showing prior written justification for the invitational travelers. In its comments on this testimony, DOE pointed out that some documents existed indicating Office of the Secretary approval, but agreed that it was not in complete compliance with 10 C.F.R. Part 1060. In our January 1996 testimony before this Committee, we highlighted some of the problems that DOE was encountering in documenting the expenses it incurred when using U.S. embassy services for administrative and logistical support on two of the four missions. For example, DOE did not have written procedures that specified either the types of records to be kept or the process to follow in obtaining support for foreign travel from U.S. embassies. During our review of the Secretary's trip to India, DOE officials could not provide records to substantiate some of the costs of the mission. DOE has taken several steps to address this problem, including the development of detailed written procedures and closer cooperation from the State Department in obtaining improved documentation of overseas expenses. A DOE official said DOE hopes to resolve issues related to the embassies' charges by the end of May 1996. DOE is still in the process of analyzing the expense reports received from overseas posts in connection with administrative and logistical support charges for the July 1994 India trade mission and the other three trade missions that we examine in today's testimony. DOE provided us with the following status report on its efforts: The total embassy logistical and support costs charged to DOE for the four missions were about $409,674. DOE has accepted about $257,555 of these charges, has disputed or rejected about $135,119 of these charges, and continues to review about $17,000 of these charges. Some of the charges rejected by DOE include $14,170 for a double billing of a banquet that was not a DOE expense and $6,346 for aircraft fueling services not requested. Mr. Chairman, this concludes my prepared remarks. I will be happy to answer any questions you or other Members of the Subcommittee may have. The appendix I and II tables, which list the value of agreements (in millions of dollars) signed during each mission, including follow-up missions led by the Secretary or Deputy Secretary and reverse trade missions to the United States, as well as the values DOE assigned to finalized agreements, are not reproduced here. For one agreement, a company official estimated U.S. exports at less than $5 million. Note 1: We use "finalized business agreements" to refer to agreements DOE describes as having reached "financial closure or sales agreement." Note 2: DOE did not cite any finalized business agreements for the South Africa trade mission.
Tables III.1-5 illustrate the total estimated costs of the four DOE trade missions, from July 1994 to August 1995, including administration and logistics support provided by the State Department; the related appendix comparisons of charter costs with total estimated commercial fares for government passengers are also not reproduced here. The South Africa comparison excludes the additional costs of the charter aircraft trips to Kimberley and Cape Town. To complete our work, we interviewed DOE officials; company officials; U.S. Export-Import Bank (Eximbank) officials; and Department of Commerce officials, including Foreign Commercial Service officers stationed abroad. We reviewed various DOE and Commerce Department documents, including DOE trade mission trip reports, and over 17,000 pages of documentation provided by DOE to this Subcommittee. We also reviewed financing documents provided by the Treasury Department and the Eximbank, DOE press releases, and other documents relating to specific business agreements and companies. At the request of this Subcommittee, we focused on the 14 business agreements that DOE characterized as having reached "financial closure or sales agreement." We did not review the other business agreements that were characterized as potential agreements by DOE. We contacted the 13 companies associated with these agreements to obtain additional information about the nature and extent of DOE's assistance. In two cases, we were not able to obtain a company response to our questions. We relied on the businesses involved to provide estimates of the U.S. export value and the size of their agreements. We did not verify the value of the estimates provided nor did we examine the actual contracts associated with the business agreements. In regard to the costs of the trips, we relied upon information provided by DOE's Office of the Chief Financial Officer.
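As a simple reconciliation of the embassy support charges discussed in this testimony, the amounts DOE reported as accepted, disputed or rejected, and still under review account for the total charged (the roughly $17,000 still under review is an approximate figure):

\[
\$257{,}555 + \$135{,}119 + \$17{,}000 = \$409{,}674
\]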
GAO discussed four trade missions sponsored by the Department of Energy (DOE), focusing on: (1) DOE's authority and role in these missions; (2) the results of the missions; and (3) management weaknesses in DOE-sponsored trade missions. GAO noted that: (1) the Secretary of Energy has explicit statutory authority to undertake export promotion activities; (2) in fiscal year 1995, DOE funding for export promotion totaled $14 million; (3) DOE performed advocacy on behalf of U.S. energy companies seeking to capture some of the emerging energy markets in China, India, and Pakistan; (4) it is difficult to measure the impact of these federal advocacy activities because it is not always clear whether sales would have occurred without the missions, numerous participants are involved, and the value of follow-on sales agreements and maintenance contracts is difficult to calculate; (5) the four trade missions resulted in $19.7 billion in potential and finalized fuel supply and power purchase agreements and oil and gas exploration agreements; (6) DOE subsequently reported that finalized agreements totaled $2.03 billion, but export data show that the value of these agreements appears to be overstated by over 50 percent; (7) most companies participating in DOE trade missions support DOE's efforts, but several said that their business agreements would have been completed without DOE involvement; (8) the planning for these missions was complicated by time constraints, last-minute changes in plans, and lack of familiarity with conducting large, overseas trade missions; and (9) DOE has introduced new procedures to correct these management weaknesses, but they have not been fully tested in practice.
In 2007, almost 13 million citizens from 27 countries entered the United States under the Visa Waiver Program. The program was created to promote the effective use of government resources and facilitate international travel without jeopardizing U.S. national security. The United States last expanded the Visa Waiver Program’s membership in 1999; since then, other countries have expressed a desire to become members. In February 2005, President Bush announced that DHS and State would develop a strategy, or “Road Map Initiative,” to clarify the statutory requirements for designation as a participating country. According to DHS, some of the countries seeking admission to the program are U.S. partners in the war in Iraq and have high expectations that they will join the program due to their close economic, political, and military ties to the United States. As we reported in July 2006, DHS and State are consulting with 13 “Road Map” countries seeking admission into the Visa Waiver Program—Bulgaria, Cyprus, Czech Republic, Estonia, Greece, Hungary, Latvia, Lithuania, Malta, Poland, Romania, Slovakia, and South Korea. Following the terrorist attacks of September 11, 2001, Congress passed additional laws to strengthen border security policies and procedures, and DHS and State instituted other policy changes that have affected a country’s qualifications for participating in the Visa Waiver Program. In August 2007, Congress enacted the 9/11 Act, which provides DHS with the authority to consider admitting into the Visa Waiver Program countries that otherwise meet the program requirements, but have refusal rates between 3 percent and 10 percent, provided the countries meet certain conditions (see app. II for worldwide refusal rates for fiscal year 2007). Before being admitted to the program, for example, the countries must demonstrate a sustained reduction in refusal rates, and must be cooperating with the United States on counterterrorism initiatives, information sharing, and the prevention of terrorist travel, among other things. In addition, DHS must complete two actions aimed at enhancing the security of the program (see app. III for the key legislative requirements for inclusion in the Visa Waiver Program). In particular, to consider admitting countries into the Visa Waiver Program with refusal rates between 3 percent and 10 percent, DHS must certify the following to Congress: A system is in place that can verify the departure of not less than 97 percent of foreign nationals who depart through U.S. airports. Initially, this system will be biographic only. Congress required the eventual implementation of a biometric exit system at U.S. airports. If the biometric air exit system is not in place by July 1, 2009, the flexibility that DHS may obtain to consider admitting countries with refusal rates between 3 percent and 10 percent will be suspended until the system is in place. An electronic travel authorization system is “fully operational.” This system will require nationals from Visa Waiver Program countries to provide the United States with biographical information before boarding a U.S.-bound flight to determine the eligibility of, and whether there exists a law enforcement or security risk in permitting, the foreign national to travel to the United States under the program. DHS recommends that applicants obtain ESTA authorizations at the time of reservation or ticket purchase, or at least 72 hours before their planned date of departure for the United States. 
The ESTA application will electronically collect information similar to the information collected in paper form by U.S. Customs and Border Protection (CBP). To the extent possible, according to DHS, applicants will find out almost immediately whether their travel has been authorized, in which case they are free to travel to the United States, or whether their application has been rejected, in which case they are ineligible to travel to the United States under the Visa Waiver Program. Those found ineligible to travel under the Visa Waiver Program must apply for a visa at a U.S. embassy to travel to the United States. In addition, the 9/11 Act requires that visa waiver countries enter into an agreement with the United States to report, or make available through Interpol or other means as designated by the Secretary of Homeland Security, to the U.S. government information about the theft or loss of passports within a strict time frame; enter into an agreement with the United States to share information regarding whether citizens and nationals of that country traveling to the United States represent a threat to U.S. security; and accept for repatriation any citizen, former citizen, or national of the country against whom the United States has issued a final order of removal. When DHS exercises its authority to waive the 3 percent refusal rate requirement, it shall, in consultation with State, take into account other discretionary factors, pursuant to the 9/11 Act, including a country's airport security standards; whether the country assists in the operation of an effective air marshal program; the standards of passports and travel documents issued by the country; and other security-related factors, including the country's cooperation with (1) the United States' initiatives toward combating terrorism and (2) the U.S. intelligence community in sharing information regarding terrorist threats. DHS works in consultation with State and Justice, as well as the intelligence community, as part of DHS's assessment of countries seeking to join the Visa Waiver Program. The executive branch is moving aggressively to expand the Visa Waiver Program by the end of 2008, but, in doing so, DHS has not followed a transparent process for admitting new countries to the program—an approach that has created confusion among other U.S. agencies in Washington, D.C.; U.S. embassy officials overseas; and those countries that are seeking to join the Visa Waiver Program. During the expansion negotiations, DHS has achieved some security enhancements, such as new agreements that, among other things, require the reporting of lost and stolen blank and issued passports. We found that the Visa Waiver Program Office has not followed its own standard operating procedures, completed in November 2007, which set forth the key milestones that DHS and aspiring countries must meet before additional countries are admitted into the program. According to the standard procedures, State should submit to DHS a formal, written nomination for a particular country, after which DHS is to lead an interagency team to conduct an in-country, comprehensive review of the impact of the country's admission into the Visa Waiver Program on U.S. security, law enforcement, and immigration interests. Figure 1 depicts the standard procedures that the program office established to guide expansion of the Visa Waiver Program compared with DHS's actions since August 2007.
Although State has only nominated one country—Greece—DHS has nonetheless conducted security reviews for countries that State has not yet nominated—Czech Republic, Estonia, Hungary, Latvia, Lithuania, Slovakia, and South Korea. According to State officials, until DHS has implemented the required provisions of the 9/11 Act, and aspiring countries have met all of the Visa Waiver Program's statutory requirements, State does not plan to nominate any other countries. DHS's Assistant Secretary for Policy Development told us that the department had determined that it would not follow the standard operating procedures during these expansion negotiations and, thus, had to "make up the process as it went along," in part because DHS had never expanded the program before and because Congress significantly changed the program's legislative requirements in August 2007. State and Justice officials told us that the lack of a transparent timeline and requirements for Visa Waiver Program expansion has led to confusion among U.S. agencies in headquarters offices and at U.S. embassies overseas, as well as among foreign governments seeking to join the program. For example, DHS's standard procedures were not updated to account for the department's plans to sign with each of the aspiring Visa Waiver Program countries separate memorandums of understanding (MOUs) that lay out the new legislative requirements from the 9/11 Act. According to DHS, while not required by the act, the U.S. government is seeking to negotiate MOUs with current and aspiring Visa Waiver Program countries to help put the legislative provisions in place. Although DHS has not yet signed MOUs with any current program countries, the department intends to complete negotiations with existing program countries by October 2009. As indicated in figure 1, DHS signed MOUs with aspiring countries before conducting in-country security reviews. The MOUs are to be accompanied by more specific "implementing arrangements" for sharing biographic, biometric, and other data, as required by the 9/11 Act, within general parameters of what the United States is willing and able to reciprocate—this includes sharing information on known or suspected terrorists. According to DHS, the type and scope of these arrangements will vary by country and will take into account existing bilateral information-sharing arrangements. As of June 2008, DHS had signed MOUs with eight Road Map countries and had begun negotiations on the implementing arrangements. However, State and Justice officials told us that DHS had not been clear in communicating these steps to aspiring and current program countries. DHS officials acknowledged that the department was still exploring how to best complete the implementing arrangements. U.S. embassy officials in several Road Map countries told us that it had been difficult to explain the expansion process to their foreign counterparts and manage their expectations about when those countries might be admitted into the Visa Waiver Program. Justice officials and U.S. officials in several embassies told us that the implementing arrangements may be more difficult to negotiate than the nonbinding MOUs because some countries have expressed concerns about sharing private information on their citizens due to strict national privacy laws—concerns that the United States also has about its citizens' information.
In response to our request, in late April 2008, DHS provided us with an outline of the department’s completed and remaining actions for expanding the Visa Waiver Program by the end of this year. DHS officials stated that this outline could be a first step in providing guidance for all stakeholders, should the program be expanded again in the future. However, the outline does not include criteria for selecting countries under consideration for admission into the program, other than the 13 Road Map countries. The U.S. government is only considering the Road Map countries for potential admission into the program in 2008 because the United States began formal discussions with these 13 countries several years ago, not due to the application of clearly defined requirements. DHS is negotiating with 4 Road Map countries with fiscal year 2007 refusal rates over 10 percent (Hungary, Latvia, Lithuania, and Slovakia), with the expectation that fiscal year 2008 refusal rates for these countries will fall below 10 percent. State officials told us that they lacked a clear rationale to explain to other aspiring, non-Road Map countries with refusal rates under 10 percent (Croatia, Israel, and Taiwan) that they will not be considered in 2008 due to the executive branch’s plans to expand the program first to South Korea and countries in Central and Eastern Europe. In addition, on May 1 of each year, State must report to Congress those countries that are under consideration for inclusion in the Visa Waiver Program; the department has never submitted this report because, according to consular officials, no country has been under consideration for admission into the program since the reporting requirement was established in 2000. As of late June 2008, State had not yet submitted its report for 2008. A State official told us that, despite the actions that DHS, State, and other U.S. agencies have taken to expand the Visa Waiver Program to as many as 9 countries in 2008, State was initially unclear about which countries it should include in this report. While only Greece has been nominated, DHS has made clear its goal to admit many of the Road Map countries in 2008. (Fig. 2 shows the fiscal year 2007 refusal rates for the 13 Road Map countries.) According to DHS, it could not wait until all statutory requirements were officially met before beginning bilateral negotiations with Road Map countries, because doing so would not allow sufficient time to add the countries by the end of 2008. DHS plans to complete the security reviews and sign MOUs and implementing arrangements with Road Map countries by the fall of 2008. If these and all other statutory provisions are completed—including countries’ achievement of refusal rates below 10 percent—State indicated that it will then formally nominate the countries. However, DHS has acknowledged that if it and the aspiring countries cannot meet all of the program’s statutory requirements, the United States will not admit additional countries into the program. In such an event, the U.S. government could face political and diplomatic repercussions, given the expectations raised that many of the Road Map countries will be admitted in 2008. DHS, State, and Justice officials acknowledged that following a more transparent process would be useful in the future as additional countries seek to join the program. 
DHS’s expansion negotiations with current and aspiring Visa Waiver Program countries have led to commitments from countries to improve information sharing processes with the United States. For example, by signing MOUs, eight aspiring countries have signaled their intent to comply with the program’s statutory provision to report to the United States or Interpol in a timely manner the loss or theft of passports—a key vulnerability in the Visa Waiver Program, as we have previously reported. In addition, as a result of ongoing visa waiver negotiations with the South Korean government, in January 2008, DHS initiated the Immigration Advisory Program at Incheon International Airport in South Korea to help prevent terrorists and other high-risk travelers from boarding commercial aircraft bound for the United States.Furthermore, a senior consular official testified that the executive branch’s dialogue on Visa Waiver Program expansion is helping to stimulate U.S. negotiations on other terrorist watch-list- sharing arrangements with Road Map countries. As of early September 2008, DHS had not yet met two key certification requirements in the 9/11 Act that are necessary to allow the department to consider expanding the Visa Waiver Program to countries with refusal rates between 3 percent and 10 percent. In addition, the Visa Waiver Program Office does not fully consider data on overstay rates for current and aspiring Visa Waiver Program countries, even though doing so is integral to meeting a statutory requirement for continued eligibility in the Visa Waiver Program. Finally, in reviewing recommendations from our 2006 report aimed at improving efforts to assess and mitigate program risks, we found that DHS has implemented many of our prior recommendations, but some are only partially implemented. On February 28, 2008, we testified that DHS’s plan for certifying that it can verify the departure of 97 percent of foreign nationals from U.S. airports will not help the department mitigate risks of the Visa Waiver Program. Furthermore, DHS will face a number of challenges in implementing ESTA by January 2009. Finally, it is unlikely that DHS will implement a biometric air exit system before July 2009, due to opposition from the airline industry. As we have previously mentioned, the 9/11 Act requires that DHS certify that a system is in place that can verify the departure of not less than 97 percent of foreign nationals who depart through U.S. airports. In December 2007, DHS reported to us that it will match records, reported by airlines, of visitors departing the country to the department’s existing records of any prior arrivals, immigration status changes,or prior departures from the United States. At the time of our February 2008 testimony, DHS had confirmed that it planned to employ a methodology that begins with departure records. During the hearing, we also testified that this methodology will not demonstrate improvements in the air exit system and will not help the department mitigate risks of the Visa Waiver Program. We identified a number of weaknesses with this approach, as follows: First, DHS’s methodology will not inform overall or country-specific overstay rates, which are key factors in determining illegal immigration risks in the Visa Waiver Program. 
In particular, DHS’s methodology does not begin with arrival records to determine if those foreign nationals departed or remained in the United States beyond their authorized periods of admission—useful data for oversight of the Visa Waiver Program and its expansion. As we previously testified, an alternate approach would be to track air arrivals from a given point in time and determine whether those foreign nationals have potentially overstayed. Figure 3 compares DHS’s plan to match visitor records using departure data as a starting point with a methodology that would use arrival data as a starting point. Second, for purposes of this provision and Visa Waiver Program expansion, we do not see the value in verifying that a foreign national leaving the United States had also departed at a prior point in time—in other words, matching a new departure record back to a previous departure record from the country. DHS’s Assistant Secretary for Policy Development told us in January 2008 that the department chose to include previous departures and changes of immigration status records because this method allowed the department to achieve a match rate of 97 percent or greater. Third, DHS’s methodology does not address the accuracy of airlines’ transmissions of departure records, and DHS acknowledges that there are weaknesses in the departure data. Foreign nationals who enter the United States by air are inspected by DHS officers—a process that provides information that can be used to verify arrival manifest data—and, since 2004, DHS has implemented the US-VISIT program to collect biometric information on foreign nationals arriving in the United States.However, the department has not completed the exit portion of this tracking system; thus, there is no corresponding check on the accuracy and completeness of the departure manifest information supplied by the airlines.According to DHS, it works with air carriers to try to improve both the timeliness and comprehensiveness of manifest records, and fines carriers that provide incomplete or inaccurate information. If DHS could evaluate these data, and validate the extent to which they are accurate and complete, the department would be able to identify problems and work with the airlines to further improve the data. An air exit system that facilitates the development of overstay rate data is important to managing potential risks in expanding the Visa Waiver Program. We found that DHS’s planned methodology for meeting the “97 percent provision” so it can move forward with program expansion will not demonstrate improvements in the air exit system or help the department identify overstays or develop overstay rates. As of early September 2008, DHS had not yet certified this provision, nor had it finalized a methodology to meet the provision. In June 2008, DHS announced in the Federal Register that it anticipates that all visa waiver travelers will be required to obtain ESTA authorization for visa waiver travel to the United States after January 12, 2009. However, we identified four potential challenges that DHS may face in implementing ESTA, including a limited time frame to adequately inform U.S. embassies and the public and the significant impact that ESTA will have on the airline and travel industry. 
We have previously reported that visa waiver travelers pose inherent security and illegal immigration risks to the United States, since they (1) are not subject to the same degree of screening as travelers with visas and (2) are not interviewed by a consular officer before arriving at a U.S. port of entry. In the 9/11 Act conference report, Congress agreed on the need for significant security enhancements to the Visa Waiver Program and to the implementation of ESTA prior to permitting DHS to admit new countries into the program with refusal rates between 3 percent and 10 percent. According to DHS, ESTA will allow DHS to identify potentially ineligible visa waiver travelers before they embark on a U.S.-bound carrier. DHS also stated that by recommending that travelers submit ESTA applications 72 hours in advance of their departure, CBP will have additional time to screen visa waiver travelers destined for the United States. DHS must follow several steps in implementing ESTA (see fig. 4). First, the 9/11 Act requires that DHS certify both the 97 percent air exit system and ESTA as fully operational before the department can consider expanding the Visa Waiver Program to countries with refusal rates between 3 percent and 10 percent. DHS has not announced when it plans to make this certification. DHS attorneys told us that the department could admit additional countries to the program once it provides this certification. In addition, according to DHS, the act provides that 60 days after the Secretary of Homeland Security publishes a final notice in the Federal Register of the ESTA requirement, each alien traveling under the Visa Waiver Program must use ESTA to electronically provide DHS with biographic and other such information as DHS deems necessary to determine, in advance of travel, the eligibility of, and whether there exists a law enforcement or security risk in permitting, the alien to travel to the United States. DHS stated that it expects to issue this final notice in early November 2008, and, as of January 12, 2009, all visa waiver travelers would be required to obtain authorization through ESTA prior to boarding a U.S.-bound flight or cruise vessel. DHS stated that if, after certifying ESTA as fully operational, it admits an additional country prior to January 12, 2009, it will require that visa waiver travelers from that country obtain ESTA authorizations immediately. For example, if Estonia were admitted into the Visa Waiver Program on October 10, 2008, citizens of that country traveling to the United States under the program would be required to begin using ESTA on that date; however, visa waiver travelers from existing program countries would not be required to obtain approval through ESTA until January 12, 2009, more than 3 months later. We identified four potential challenges to DHS's planned implementation of ESTA by January 12, 2009. It is difficult to predict the extent to which DHS will address these challenges due to the short time frame in which the department is implementing the system. These challenges include the following: DHS has a limited time frame to adequately inform U.S. embassies in Visa Waiver Program countries and the public about ESTA. U.S. embassy officials in current and aspiring Visa Waiver Program countries told us that the United States will need to ensure that there is sufficient time to inform travelers, airlines, and the travel industry of ESTA requirements and implementation timelines. U.S.
commercial and consular officials at a U.S. embassy in a current Visa Waiver Program country told us that they would ideally like 1 year's advance notice before ESTA is implemented to allow sufficient time to inform and train the public and the travel industry of the new requirement. However, DHS's announcement in June 2008 accelerated the timeline for ESTA implementation in current visa waiver countries. During our site visits in March 2008, U.S. embassy officials in a visa waiver country told us that they had been informed by DHS officials that the department did not plan to require ESTA authorization for travelers from that country until the summer of 2009 or later. According to a senior U.S. official at one embassy, DHS had confirmed this plan with host country government officials in early May 2008. Following the June 2008 announcement, a senior U.S. embassy official in another country told us that DHS did not give the embassy adequate advance notice—to prepare translated materials, brief journalists from the major media, prepare the embassy Web site, or set up a meeting with travel and tourism professionals to discuss the implications of ESTA requirements—before publishing the interim final rule. DHS officials told us that the department is currently working on an outreach strategy to ensure that travelers are aware of the ESTA requirement. Impact on air and sea carriers could be significant. DHS estimates that 8 U.S.-based air carriers and 11 sea carriers, as well as 35 foreign-based air carriers and 5 sea carriers, will be affected by ESTA requirements for visa waiver travelers. In addition, DHS stated that it did not know how many passengers annually would request that their carrier apply for ESTA authorization on their behalf to travel under the Visa Waiver Program or how much it will cost carriers to modify their existing systems to accommodate such requests. Thus, in the short term, DHS expects that the carriers could face a notable burden if most of their non-U.S. passengers request that their carriers submit ESTA applications. On the basis of DHS's analysis, ESTA could cost the carriers about $137 million to $1.1 billion over the next 10 years, depending on how the carriers decide to assist the passengers. DHS has noted that these costs to carriers are not compulsory because the carriers are not required to apply for an ESTA authorization on behalf of their visa waiver travelers. DHS is developing a separate system, independent from ESTA, which will enable the travel industry to voluntarily submit an ESTA application on behalf of a potential Visa Waiver Program traveler. As of early August 2008, DHS had analyzed the role that transportation carriers could play in applying for and submitting ESTA applications on behalf of their customers when they arrive at an air or sea port. However, CBP stated that there had been no further development on this issue. ESTA could increase consular workload. In May 2008, we reported that State officials and officials at U.S. embassies in current Visa Waiver Program countries are concerned with how ESTA implementation will affect consular workload. Consular officers are concerned that more travelers will apply for visas at consular posts if their ESTA applications are rejected or because they may choose to apply for a visa that has a longer validity period (10 years) than an ESTA authorization. We reported that if 1 percent to 3 percent of current Visa Waiver Program travelers came to U.S.
embassies for visas, it could greatly increase visa demand at some locations, which could significantly disrupt visa operations and possibly overwhelm current staffing and facilities. DHS officials told us that the department is aware of concerns regarding rejection rates and has been working with State to create a system that mitigates these concerns. Developing a user-friendly ESTA could be difficult. According to DHS, the ESTA Web site will initially be operational in English; additional languages will be available by October 15, 2008. Even when the Web site is operational in additional languages, ESTA will only allow travelers to fill out the application in English, as with CBP's paper-based form. In addition, during our site visits, embassy officials expressed concerns that some Visa Waiver Program travelers do not have Internet access and, thus, will face difficulties in submitting their information to ESTA. Implementing a user-friendly ESTA is essential, especially for those travelers who do not have Internet access or are not familiar with submitting forms online. A third provision of the 9/11 Act requires that DHS implement a biometric air exit system before July 1, 2009, or else the department's authority to waive the 3 percent refusal rate requirement—and thereby consider admitting countries with refusal rates between 3 percent and 10 percent—will be suspended until this system is in place. In March 2008, DHS testified that US-VISIT will begin deploying biometric exit procedures in fiscal year 2009. DHS released a proposed rule for the biometric exit system in April 2008, and the department plans to issue a final rule before the end of 2008. According to the proposed rule, air and sea carriers are to collect, store, and transmit to DHS travelers' biometrics. During the public comment period on the proposed rule, airlines, Members of Congress, and other stakeholders have raised concerns about DHS's proposal, and resolving these concerns could take considerable time. For example, the airline industry strongly opposes DHS's plans to require airline personnel to collect digital fingerprints of travelers departing the United States because it believes it is a public sector function. We have issued a series of reports on the US-VISIT program indicating that there is no clear schedule for implementation of the exit portion of the system, and that DHS will encounter difficulties in implementing the system by July 2009. Although DHS program officials stated that DHS is on track to implement the biometric exit system by July 2009, it is unlikely that DHS will meet this timeline. We are currently reviewing DHS's proposed rule and plan to report later this year on our findings. Some DHS components have expanded efforts to identify citizens who enter the United States under the Visa Waiver Program and then overstay their authorized period of admission. In 2004, US-VISIT established the Data Integrity Group, which develops data on potential overstays by comparing foreign nationals' arrival records with departure records from U.S. airports and sea ports. US-VISIT provides data on potential overstays to ICE, CBP, and U.S. Citizenship and Immigration Services, as well as to State's consular officers to aid in visa adjudication. For example, US-VISIT sends regular reports to ICE's Compliance Enforcement Unit on potential overstays, and ICE officials told us they use these data regularly during investigations.
In fiscal year 2007, ICE’s Compliance Enforcement Unit received more than 12,300 overstay leads from the Data Integrity Group.As an example of one of these leads, on November 27, 2007, ICE agents in Ventura, California, arrested and processed for removal from the United States an Irish citizen whose term of admission expired in September 2006. On the basis of concerns that Visa Waiver Program travelers could be overstaying, ICE has requested that US-VISIT place additional emphasis on identifying potential overstays from program countries. In turn, ICE has received funding to establish a Visa Waiver Enforcement Program within the Compliance Enforcement Unit to investigate the additional leads from US-VISIT. As part of this funding, ICE plans to hire 46 additional employees to help the unit increase its focus on identifying individuals who traveled to the United States under the Visa Waiver Program and potentially overstayed. However, DHS is not fully monitoring compliance with a legislative provision that requires a disqualification rate (this calculation includes overstays) of less than 3.5 percent for a country to participate in the Visa Waiver Program.Monitoring these data is a long-standing statutory requirement for the program. We have testified that the inability of the U.S. government to track the status of visitors in the country, identify those who overstay their authorized period of visit, and use these data to compute overstay rates has been a long-standing weakness in the oversight of the Visa Waiver Program.DHS’s Visa Waiver Program Office reported that it does not monitor country overstay rates as part of its mandated, biennial assessment process for current visa waiver countries because of weaknesses in US-VISIT’s data. Since 2004, however, the Data Integrity Group has worked to improve the accuracy of US-VISIT’s overstay data and can undertake additional analyses to further validate these data. For example, using available resources, the group conducts analyses, by hand, of computer-generated overstay records to determine whether individuals identified as overstays by the computer matches are indeed overstays. In addition, US-VISIT analysts can search up to 12 additional law enforcement and immigration databases to verify whether a potential overstay may, in fact, be in the country illegally. While it receives periodic reporting on potential overstays from US-VISIT, the Visa Waiver Program Office has not requested that the Data Integrity Group provide validated overstay rate estimates from visa waiver or Road Map countries since 2005. Although DHS has not designated an office with the responsibility of developing such data for the purposes of the Visa Waiver Program, US-VISIT officials told us that, with the appropriate resources, they could provide more reliable overstay data and estimated rates, by country, to the Visa Waiver Program Office, with support from other DHS components, such as the Office of Immigration Statistics. For example, the Visa Waiver Program Office could request additional analysis for countries where the preliminary, computer-generated overstay rates raised concerns about illegal immigration risks in the program. These resulting estimates would be substantially more accurate than the computer-generated overstay rates. However, the resulting estimates would not include data on departures at land ports of entry. 
In addition, as we have previously mentioned, airline departure data have weaknesses. DHS has asserted that overstay data will continue to improve with the implementation of the biometric US-VISIT exit program. In addition to US-VISIT, State's overseas consular sections develop data on overstay rates that might be useful for assessing potential illegal immigration risks of the Visa Waiver Program. Specifically, some consular sections have conducted validation studies to determine what percentage of visa holders travel to the United States and potentially overstay. For example, at the U.S. embassy in Estonia, consular officials conducted a validation study in the summer of 2006 that concluded that 2.0 percent to 2.7 percent of Estonian visa holders traveling to the United States in 2005 had potentially overstayed. US-VISIT overstay data, after appropriate analysis and in conjunction with other available data, such as validation studies, would provide DHS with key information to help evaluate the illegal immigration risks of maintaining a country's membership or admitting additional countries into the Visa Waiver Program. In July 2006, we reported that the process for assessing and mitigating risks in the Visa Waiver Program had weaknesses, and that DHS was not equipped with sufficient resources to effectively monitor the program's risks. For example, at the time of our report, DHS had only two full-time staff charged with monitoring countries' compliance with the program's requirements and working with countries seeking to join the program. We identified several problems with the process by which DHS was monitoring countries' adherence to the program requirements, including a lack of consultation with key interagency stakeholders. In addition, we reported that DHS needed to improve its communication with officials at U.S. embassies so it could communicate directly with officials best positioned to monitor compliance with the program's requirements, and report on current events and issues of potential concern in each of the participating countries. Also, at the time of our 2006 report, the law required the timely reporting of passport thefts for continued participation in the Visa Waiver Program, but DHS had not established or communicated these time frames and operating procedures to participating countries. In addition, DHS had not yet issued guidance on what information must be shared, with whom, and within what time frame. To address these weaknesses, we recommended that DHS take a number of actions to better assess and mitigate risks in the Visa Waiver Program. As we note in table 1, DHS has taken actions to implement some of our recommendations, but still needs to fully implement others. In particular, DHS has provided the Visa Waiver Program Office with additional resources since our 2006 report. As of April 2008, the office had five additional full-time employees, and two other staff from the Office of Policy who devote at least 50 percent of their time to Visa Waiver Program tasks. In addition, staff from several other DHS components assist the office on a regular basis, as well as during the in-country security assessments for Road Map and current program countries. In response to our recommendation to finalize clear, consistent, and transparent protocols for the biennial country assessment, the Visa Waiver Program Office drafted standard operating procedures in November 2007 for conducting reviews of nominated and participating visa waiver countries.
In addition, DHS now provides relevant stakeholders with copies of the most current mandated, biennial country assessments; during our visits in early 2008, U.S. embassy officials confirmed that the assessments are now accessible. Furthermore, regarding our recommendation to develop and communicate clear, standard operating procedures for the reporting of lost and stolen blank and issued passports, DHS established criteria for the reporting of lost and stolen passport data—including a definition of "timely reporting" and an explanation of to whom in the U.S. government countries should report—as part of the MOUs it is negotiating with participating and Road Map countries. Furthermore, DHS, in coordination with the U.S. National Central Bureau, has initiated a system that allows DHS to screen foreign nationals arriving at all U.S. international airports against Interpol's database of lost and stolen travel documents before the foreign nationals arrive in the country. Results to date indicate that the system identifies two to three instances of fraudulent passports per month. According to the National Central Bureau, Interpol's database has intercepted passports that were not identified by DHS's other screening systems. For example, on February 18, 2008, the Interpol database identified a Nigerian national traveling on a counterfeited British passport who attempted to enter the United States at Newark International Airport. Upon arrival, the individual was referred to secondary inspection and determined to be inadmissible to the United States. While DHS has taken action on many of our recommendations, it has not fully implemented others. We recommended that DHS require that all Visa Waiver Program countries provide the United States and Interpol with nonbiographical data from lost or stolen blank and issued passports. According to DHS, all current and aspiring visa waiver countries report lost and stolen passport information to Interpol, and many report such information to the United States. The 9/11 Act requires agreements between the United States and Visa Waiver Program countries on the reporting of lost and stolen passports within strict time limits; however, none of the current visa waiver countries has yet formally established lost and stolen passport reporting agreements by signing MOUs with DHS. DHS also still needs to fully implement our recommendations to create real-time monitoring arrangements, establish protocols for direct communication with contacts at overseas posts, and require periodic updates from these contacts. For example, while the Visa Waiver Program Office has recently begun regularly communicating with U.S. embassy points of contact at Visa Waiver Program posts and disseminating relevant program information to them, officials at some of the posts we visited in early 2008 reported that they had little contact with the office and were not regularly informed of security concerns or developments surrounding the program. The executive branch is moving aggressively to expand the Visa Waiver Program in 2008 to allies in Central and Eastern Europe and South Korea, after the countries have met certain requirements and DHS has completed and certified key security requirements in the 9/11 Act. However, DHS has not followed a transparent process for expanding the program, thereby causing confusion among other U.S. agencies and embassies overseas. The lack of a clear process could bring about political repercussions if countries are not admitted to the program in 2008, as expected.
In addition, DHS is not fully assessing a critical illegal immigration risk of the Visa Waiver Program and its expansion since it does not consider overstay data in its security assessments of current and aspiring countries. DHS should determine what additional data and refinements of that data are necessary to ensure that it can assess and mitigate this potential risk to the United States. Finally, DHS still needs to take actions to fully implement our prior recommendations in light of plans to expand the program. To improve management of the Visa Waiver Program and better assess and mitigate risks associated with it, we are recommending that the Secretary of Homeland Security take the following four actions: establish a clear process, in coordination with the Departments of State and Justice, for program expansion that would include the criteria used to determine which countries will be considered for expansion and timelines for nominating countries, security assessments of aspiring countries, and negotiation of any bilateral agreements to implement the program’s legislative requirements; designate an office with responsibility for developing overstay rate information for the purposes of monitoring countries’ compliance with the statutory requirements of the Visa Waiver Program; direct that established office and other appropriate DHS components to explore cost-effective actions necessary to further improve, validate, and test the reliability of overstay data; and direct the Visa Waiver Program Office to request an updated, validated study of estimated overstay rates for current and aspiring Visa Waiver Program countries, and determine the extent to which additional research and validation of these data are required to help evaluate whether particular countries pose a potential illegal immigration risk to the United States. We provided a draft of this report to DHS, State, and Justice for review and comment. DHS provided written comments, which are reproduced in appendix IV, and technical comments, which we incorporated into the report, as appropriate. Justice also provided written comments, which are reprinted in appendix V. State did not provide comments on the draft report. DHS either agreed with, or stated that it was taking steps to implement, all of our recommendations. For example, DHS indicated that it is working with State to create procedures so that future Visa Waiver Program candidate countries are selected and designated in as transparent and uniform a manner as possible. In addition, DHS noted that it is taking steps to improve the accuracy and reliability of the department’s overstay data. DHS also provided additional details about its continued outreach efforts to the department’s interagency partners and foreign counterparts on the expansion process for the Visa Waiver Program. Justice did not comment on our recommendations, but provided additional information about the importance of monitoring countries’ reporting of lost and stolen passport data to Interpol. In addition, Justice discussed its efforts, in collaboration with DHS, to include screening against Interpol’s lost and stolen passport database as part of ESTA. Justice noted that use of Interpol’s database continues to demonstrate significant results in preventing the misuse of passports to fraudulently enter the United States. We are sending copies of this report to interested congressional committees, the Secretaries of Homeland Security and State, and the U.S. Attorney General. 
Copies of this report will be made available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Jess T. Ford, Director, International Affairs and Trade, at (202) 512-4128 or fordj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. To describe the process that the Department of Homeland Security (DHS) is following to admit countries into the Visa Waiver Program, we reviewed laws governing the program and its expansion, and relevant regulations and agency operating procedures, as well as our prior reports and testimonies. In particular, we reviewed DHS's standard operating procedures for oversight and expansion of the Visa Waiver Program. We spoke with officials from the Visa Waiver Program Office, which is responsible for oversight of Visa Waiver Program requirements, as well as representatives from the Department of State's (State) Consular Affairs, Europe and Eurasia, and East Asia and Pacific Bureaus. In addition, we visited U.S. embassies in three current visa waiver countries—France, Japan, and the United Kingdom—whose nationals annually comprise a large percentage of visa waiver travelers to the United States. We also visited U.S. embassies in four countries—Czech Republic, Estonia, Greece, and South Korea—with which DHS is negotiating visa waiver status. During these visits, we interviewed political, economic, consular, commercial, and law enforcement officials regarding oversight of the Visa Waiver Program and its expansion. We also conducted telephone interviews with consular officials in four additional countries—Hungary, Latvia, Lithuania, and Slovakia—that DHS also aims to admit into the Visa Waiver Program in 2008. We did not interview officials in Bulgaria, Poland, or Romania because DHS told us that it does not anticipate that these countries will be admitted into the program in 2008. We did not interview officials in Malta because of the country's relatively small number of annual Visa Waiver Program travelers to the United States. To assess actions taken to mitigate potential risks in the Visa Waiver Program, we focused on DHS's efforts to implement the new security enhancements required by the 9/11 Act, as well as the recommendations from our July 2006 report. First, to review the department's plans for air exit system implementation, we collected and analyzed documentation and interviewed officials from DHS's Office of Policy, Customs and Border Protection (CBP), and the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) Program Office. We also reviewed prior GAO reports on immigrant and visitor entry and exit tracking systems. Second, to analyze plans for the implementation of the Electronic System for Travel Authorization (ESTA), we collected and analyzed documentation and interviewed officials from DHS's Offices of Policy, Screening Coordination, and General Counsel, as well as CBP officials who are implementing the Web-based program. In addition, to understand DHS's legal position regarding the statutory requirements for ESTA implementation, on May 5, 2008, we requested, in writing, DHS's legal position on certain ESTA statutory requirements, which the department provided to us on June 6, 2008.
Third, regarding DHS’s efforts to monitor citizens who enter the United States under the Visa Waiver Program and then overstay their authorized period of admission (referred to as “overstays”), we assessed the reliability of the US-VISIT data on potential overstays, which are based on air and sea carriers’ arrival and departure data. We reviewed documentation and interviewed cognizant U.S. VISIT officials about how data on potential overstays are generated and validated. As we have previously mentioned, we determined that data on potential overstays that are generated automatically by US-VISIT’s systems have major limitations; however, many of these limitations could be overcome by a series of manual checks and validations that US-VISIT can perform, upon request. Fourth, to determine the status of our prior recommendations to DHS on oversight of the Visa Waiver Program, we developed a scale to classify them as (1) implemented, (2) partially implemented, or (3) not implemented. We collected and analyzed documentation and interviewed officials from DHS’s Visa Waiver Program Office on the actions that office has taken since July 2006 to respond to our recommendations. In addition, we met with International Criminal Police Organization (Interpol) officials in Lyon, France, as well as officials from the Department of Justice’s Interpol-U.S. National Central Bureau to discuss the status of DHS’s access to Interpol’s database of lost and stolen travel documents. We concluded that a recommendation was (1) “implemented,” if the evidence indicated that DHS had taken a series of actions addressing the recommendation; (2) “partially implemented,” if the evidence indicated that DHS had taken some action toward implementation; and (3) “not implemented,” if the evidence indicated that DHS had not taken any action. We conducted this performance audit from September 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Visa Waiver Program nonimmigrant visa refusal rate is based on the number of visitor visa applications submitted, worldwide, by nationals of that country. Visitor visas are issued for short-term business or pleasure travel to the United States. The adjusted refusal rate is calculated by first subtracting from the number of visas that were initially refused (referred to as “refusals”), the number of visas that were subsequently issued after further administrative consideration (referred to as “overcomes”)—or, in short, refusals minus overcomes (see table 2). This resulting number is then divided by the number of visa issuances plus refusals minus overcomes—that is, refusals minus overcomes divided by issuances plus refusals minus overcomes. Adjusted visa refusal rates for nationals of Visa Waiver Program countries reflect only visa applications submitted at U.S. embassies and consulates abroad. These rates do not take into account persons who, under the Visa Waiver Program, travel to the United States without visas. 
Visa Waiver Program country refusal rates, therefore, tend to be higher than they would be if the Visa Waiver Program travelers were included in the calculation, since such travelers in all likelihood would have been issued visas had they applied, according to State. We are presenting these data to show that the countries under consideration for Visa Waiver Program admission do not all have refusal rates of less than 10 percent; we did not assess the reliability of these data. The Immigration Reform and Control Act of 1986 created the Visa Waiver Program as a pilot program. In 2000, the program became permanent under the Visa Waiver Permanent Program Act. In 2002, we reported on the legislative requirements to which countries must adhere before they are eligible for inclusion in the Visa Waiver Program. In general, the requirements are as follows: A low nonimmigrant visa refusal rate. To qualify for visa waiver status, a country must maintain a refusal rate of less than 3 percent for its citizens who apply for business and tourism visas. If DHS certifies that it has met certain requirements under the 9/11 Act, it will have the authority to waive the 3 percent refusal rate requirement—currently up to a maximum of 10 percent—provided that the country meets other security requirements. A machine-readable passport program. The country must certify that it issues machine-readable passports to its citizens. As of June 26, 2005, all travelers are required to have a machine-readable passport to enter the United States under this program. Reciprocity. The country must offer visa-free travel for U.S. citizens. Persons entering the United States under the Visa Waiver Program must have a valid passport issued by the participating country and be a national of that country; be seeking entry for 90 days or less as a temporary visitor for business or pleasure; have been determined by CBP at the U.S. port of entry to represent no threat to the welfare, health, safety, or security of the United States; have complied with conditions of any previous admission under the program (e.g., individuals must have stayed in the United States for 90 days or less during prior visa waiver visits); if entering by air or sea, possess a round-trip transportation ticket issued by a carrier that has signed an agreement with the U.S. government to participate in the program, and must have arrived in the United States aboard such a carrier; and if entering by land, have proof of financial solvency and a domicile abroad to which they intend to return. Following are GAO's comments on the Department of Homeland Security's letter dated August 27, 2008. 1. We disagree that the department followed a transparent process for expansion of the program. As we state in our report, State and Justice officials told us that the lack of a transparent timeline and requirements for Visa Waiver Program expansion has led to confusion among U.S. agencies in headquarters' offices and at U.S. embassies overseas, as well as foreign governments seeking to join the program. Moreover, absent clear direction from DHS, U.S. embassy officials in several aspiring countries told us that it had been difficult to explain the expansion process to their foreign counterparts and manage their expectations about when those countries might be admitted into the Visa Waiver Program. Therefore, we recommend in this report that DHS establish a clear process, in coordination with State and Justice, for program expansion.
DHS noted that it is currently working to create procedures so that future candidate countries are selected and designated in as transparent and uniform a manner as possible and expectations are appropriately managed during the process. 2. Aside from the 13 Road Map countries identified in 2005, State officials told us that they lacked a clear rationale to explain to other aspiring, non-Road Map countries with refusal rates under 10 percent (Croatia, Israel, and Taiwan) that they will not be considered in 2008 due to the executive branch’s plans to expand the program first to South Korea and countries in Central and Eastern Europe. DHS noted that it is currently working to create procedures so that future candidate countries are selected and designated in as transparent and uniform a manner as possible and expectations are appropriately managed during the process. 3. We have updated the report to indicate that ESTA began accepting voluntary applications from visa waiver travelers on August 1, 2008. However, DHS does not anticipate that ESTA authorizations will be mandatory for visa waiver travelers until after January 12, 2009. As we state in our report, and as DHS noted, the department has not yet certified that it can verify the departure of not less than 97 percent of foreign nationals exiting U.S. airports, or that an Electronic System for Travel Authorization (ESTA) for screening visa waiver travelers in advance of their travel is “fully operational.” Moreover, DHS has not yet implemented a biometric air exit system at U.S. airports. Thus, DHS has not yet fully developed the tools to assess and mitigate risks in the Visa Waiver Program. 4. In July 2006, we reported that DHS needed to improve its communication with officials at U.S. embassies so it could communicate directly with officials best positioned to monitor compliance with the program’s requirements, and report on current events and issues of potential concern in each of the participating countries. Therefore, we recommended that DHS establish points of contact at U.S. embassies and develop protocols to ensure that the Visa Waiver Program Office receives periodic updates in countries where there are security concerns. As we note in this report, the Visa Waiver Program Office has recently begun communicating and disseminating relevant program information regularly with U.S. embassy officials at Visa Waiver Program posts. However, despite our requests during the course of this review—and again following our receipt of DHS’s formal comments on the draft of this report—the department has not provided us with sufficient documentation to demonstrate that it has established points of contact at U.S. embassies for all 27 participating countries or established protocols for communications between these contacts and the Visa Waiver Program Office. Furthermore, the department has not provided us with documentation to demonstrate that established points of contact are reporting periodically to the Visa Waiver Program Office. Therefore, we cannot conclude that these 2006 recommendations are fully implemented. 5. DHS noted that it has not yet signed memorandums of understanding (MOU) with any of the 27 current Visa Waiver Program countries. Because the MOUs will commit all signatories to report to Interpol or otherwise make available to the United States information about lost and stolen blank and issued passports, this recommendation will remain open until all MOUs are finalized. 6. 
To verify the departure of not less than 97 percent of foreign nationals exiting U.S. airports, DHS reported to us in December 2007 that it will match records, reported by airlines, of visitors departing the country with the department's existing records of any prior arrivals, immigration status changes, or prior departures from the United States. In January 2008, the Assistant Secretary for Policy Development made this statement, which corroborated data that we received from US-VISIT in late October 2007. At the time of our February 2008 testimony, DHS confirmed to us that it planned to employ a methodology that begins with departure records; however, as DHS indicated in its written comments on a draft of this report, it has still not decided on a final methodology. DHS has not provided us with information on any other options that it might be considering to meet this provision. Furthermore, the department has not explained how and when it intends to validate these data. In addition to the individual named above, John Brummet, Assistant Director; Teresa Abruzzo; Kathryn Bernet; Joseph Carney; Martin de Alteriis; Etana Finkler; Eric Larson; and Mary Moutsos made key contributions to this report.
The Visa Waiver Program, which enables citizens of participating countries to travel to the United States without first obtaining a visa, has many benefits, but it also has risks. In 2006, GAO found that the Department of Homeland Security (DHS) needed to improve efforts to assess and mitigate these risks. In August 2007, Congress passed the 9/11 Act, which provides DHS with the authority to consider expanding the program to countries whose short-term business and tourism visa refusal rates were between 3 and 10 percent in the prior fiscal year. Countries must also meet certain conditions, and DHS must complete actions to enhance the program's security. GAO has examined DHS's process for expanding the Visa Waiver Program and evaluated the extent to which DHS is assessing and mitigating program risks. GAO reviewed relevant laws and procedures and interviewed agency officials in Washington, D.C., and in U.S. embassies in eight aspiring and three Visa Waiver Program countries. The executive branch is moving aggressively to expand the Visa Waiver Program by the end of 2008, but, in doing so, DHS has not followed a transparent process. DHS did not follow its own November 2007 standard operating procedures, which set forth key milestones to be met before countries are admitted into the program. As a result, Departments of State (State) and Justice and U.S. embassy officials stated that DHS created confusion among interagency partners and aspiring program countries. U.S. embassy officials in several aspiring countries told us it had been difficult to explain the expansion process to foreign counterparts and manage their expectations. State officials said it was also difficult to explain to countries with fiscal year 2007 refusal rates below 10 percent that have signaled interest in joining the program (Croatia, Israel, and Taiwan) why DHS is not negotiating with them, given that DHS is negotiating with several countries that had refusal rates above 10 percent (Hungary, Latvia, Lithuania, and Slovakia). Despite this confusion, DHS achieved some security enhancements during the expansion negotiations, including agreements with several aspiring countries on lost and stolen passport reporting. DHS, State, and Justice agreed that a more transparent process is needed to guide future program expansion. DHS has not fully developed tools to assess and mitigate risks in the Visa Waiver Program. To designate new program countries with refusal rates between 3 and 10 percent, DHS must first make two certifications. First, DHS must certify that it can verify the departure of not less than 97 percent of foreign nationals who exit from U.S. airports. In February 2008, we testified that DHS's plan to meet this provision will not help mitigate program risks because it does not account for data on those who remain in the country beyond their authorized period of stay (overstays). DHS has not yet finalized its methodology for meeting this provision. Second, DHS must certify that the Electronic System for Travel Authorization (ESTA) for screening visa waiver travelers in advance of their travel is "fully operational." While DHS has not announced when it plans to make this certification, it anticipates ESTA authorizations will be required for all visa waiver travelers after January 12, 2009. 
DHS determined that the law permits it to expand the program to countries with refusal rates between 3 and 10 percent after it makes these two certifications, and after the countries have met the required conditions, but before ESTA is mandatory for all Visa Waiver Program travelers. For DHS to maintain its authority to admit certain countries into the program, it must incorporate biometric indicators (such as fingerprints) into the air exit system by July 1, 2009. However, DHS is unlikely to meet this timeline due to several unresolved issues. In addition, DHS does not fully consider countries' overstay rates when assessing illegal immigration risks in the Visa Waiver Program. Finally, DHS has implemented many recommendations from GAO's 2006 report, including screening U.S.-bound travelers against Interpol's lost and stolen passport database, but has not fully implemented others. Implementing the remaining recommendations is important as DHS moves to expand both the program and the department's oversight responsibilities.
Federally funded highway projects are typically completed in four phases: Planning: State departments of transportation and metropolitan planning organizations begin with a vision and a set of long-term goals for their future transportation system, and translate these into long-range transportation plans and short-range plans known as transportation improvement programs. Although not required by federal law, a state department of transportation may perform additional planning once a project is started, such as consulting with resource agencies to determine the project’s potential ecosystem impacts. We refer to this final phase of planning as “pre-NEPA planning” in this report. Preliminary design and environmental review: State departments of transportation identify a project’s cost, level of service, and construction location; assess the potential effects on environmental resources as required by NEPA; and select the preferred alternative. Final design and right-of-way acquisition: State departments of transportation finalize design plans, acquire property, and relocate utilities. Construction: State departments of transportation award construction contracts, oversee construction, and accept the completed project. The Transportation Equity Act for the 21st Century lays out general requirements for transportation planning and consideration of the environment. The act requires that state and metropolitan area long-range plans consider projects and strategies that will, among other things, protect and enhance the environment. It also requires states and metropolitan planning offices to provide the public with an opportunity to comment on the transportation improvement programs. Governors review and approve metropolitan transportation improvement programs within their respective states. However, the Transportation Equity Act for the 21st Century does not specifically address how ecosystem conservation should be considered in transportation planning. The act does not require that long-range transportation plans contain projects and strategies that protect and enhance the environment, and provides no guidance on how planners are to consider ecosystem conservation. Although the Federal Highway Administration reviews and approves each state’s transportation improvement program to, among other things, ensure that the plans meet the requirements of the act, failure to meet these requirements is not reviewable in court. Congress is considering the 6-year surface transportation reauthorization bill. Separate bills have passed in each chamber. The House bill leaves in place the existing legislation’s framework of requiring planners to consider the protection and enhancement of the environment in their plans. The Senate bill provides more explicit language on environmental considerations and new consultation requirements for planners. Specifically, it indicates that protecting and enhancing the environment includes “the protection of habitat, water quality, and agricultural and forest land while minimizing invasive species.” Additionally, the Senate bill requires that long-range transportation plans include a discussion of (1) the types of potential habitat mitigation activities that may assist in compensating for habitat loss and (2) the areas that may have the greatest potential to restore and maintain habitat types affected by the plan. Further, the bill requires planning agencies to consult with state and local agencies responsible for protecting natural resources. 
In addition to meeting the planning requirements of the Transportation Equity Act for the 21st Century and NEPA, planning agencies must adhere to a number of other federal laws pertaining to transportation and the environment before construction can begin on federally funded projects, including:

The Endangered Species Act of 1973 is intended to conserve threatened and endangered species and the ecosystems on which they depend. Section 7 of the act requires federal agencies to ensure that projects they authorize, fund, or carry out, including transportation projects, are not likely to jeopardize the continued existence of any threatened or endangered species (including fish, wildlife, and plants) or result in the destruction or adverse modification of designated critical habitat for these species. The U.S. Fish and Wildlife Service and the National Marine Fisheries Service administer and enforce this law.

The Clean Water Act of 1977 is intended to restore and maintain the chemical, physical, and biological integrity of the nation's waters through the prevention and elimination of pollution. Section 404 of the act pertains to wetland development. Under this section, the Army Corps of Engineers provides permits to transportation agencies whose projects affect wetlands. To obtain permits, applicants must first attempt to avoid adverse impacts to wetlands or, if this is not possible, to minimize the impacts to the extent practicable and compensate for any unavoidable impacts through mitigation.

To comply with these and other laws, transportation planners may coordinate with a variety of state and federal agencies. They do so to obtain ecological data, such as information on threatened and endangered species and wetlands; advice on how to address adverse impacts of transportation projects; or both. Of the 36 transportation planners we interviewed, a total of 31 (21 out of 24 in state departments of transportation and 10 out of 12 in metropolitan planning organizations) reported using various methods to consider ecosystem conservation during transportation planning. Some of these 31 planning agencies begin considering ecosystem conservation in transportation planning as they develop their long-range plans, while others begin considering ecosystem conservation just prior to starting the federally required environmental review under NEPA. Four of these agencies reported using multiple approaches to consider ecosystem conservation, 22 stressed their use of corridor studies or project screening, 2 emphasized their consideration of the ecological resources of specific interest in the surrounding area, and 3 reported using methods similar to other agencies but do not use corridor studies or project screening or focus on specific resources. (See fig. 2.) Planners in 5 agencies said they do not consider ecosystem conservation during transportation planning. In the absence of specific federal requirements to consider ecosystem conservation in transportation planning, federal agencies encourage state and metropolitan area planners to do so, and they provide technical assistance. Of the 31 planning agencies that consider ecosystem conservation in transportation planning, 21 (68 percent) first do so as they develop their long-range plans. (See table 1.) Four agencies (13 percent) begin considering ecosystem conservation as they develop transportation improvement programs.
The remaining six agencies (19 percent) begin just before starting the federally required environmental review under NEPA (pre-NEPA planning). Twenty of 31 agencies reported considering ecosystem conservation at more than one point, and 14 reported considering ecosystem conservation during corridor studies that begin at varying times during planning. Oregon, South Dakota, Colorado, and North Carolina reported extensively considering ecosystem conservation in transportation planning using several approaches. The Oregon Department of Transportation has included a policy in its long-range plan to, among other things, maintain or improve the natural and built environment, including fish passage and habitat, wildlife habitat and migration routes, vegetation, and wetlands. The long-range transportation plans of Colorado and North Carolina each contain specific references to goals or policies to conserve ecosystems, while South Dakota's plan contains a less specific goal aimed at protecting the environment. Oregon planners said they meet monthly with state and federal resource agencies and with the Federal Highway Administration to discuss project proposals before beginning to address NEPA requirements. To plan for each project's potential impact, the planners said they obtain data from a variety of sources, such as field studies led by biologists, the Oregon Natural Heritage Data System, the National Wetlands Inventory, and the state department of transportation's ecological survey of all the roads in the state. The planners then use these data and a set of criteria developed by stakeholders to screen projects before programming them for construction. The South Dakota Department of Transportation becomes increasingly involved with federal and state resource agency stakeholders—including the U.S. Fish and Wildlife Service; Army Corps of Engineers; U.S. Forest Service; South Dakota Game, Fish, and Parks; and the South Dakota Department of Natural Resources—as a project evolves from a conceptual plan through final design. Initially, the department works with state resource agency stakeholders to obtain ecological data in geographic information system or paper formats that identify ecological resources located within the study boundaries and uses these data to avoid sensitive habitat. The department then develops plans to avoid, minimize, or mitigate the project's impact. Later, when more specific project design plans become available, the department works with resource agency stakeholders to determine habitat locations, adjust project alignments to avoid habitat, or consider other design changes to minimize the project's impact before beginning the environmental review required under NEPA. The Colorado Department of Transportation has assigned one of its employees to work with the U.S. Fish and Wildlife Service to focus on transportation issues, according to state transportation planners. The planners said that numerous stakeholders from federal, state, and nongovernmental agencies assist the department in determining species and habitat locations throughout the state and in focusing efforts on conservation and mitigation planning. The planners reported that the department is conducting advance planning to integrate ecosystem issues into corridor studies that they expect to develop over the life of the long-range plan. They also said that Colorado has established a revolving fund to acquire habitat for mitigation before specific projects are actually developed.
Finally, the North Carolina Department of Transportation considers ecosystem conservation in transportation planning by making extensive use of resource agency personnel and geographic information system data. According to state planners, the department funds 33 resource agency positions to help identify and resolve ecosystem issues early in project development. The planners told us they use the geographic information system data to identify where ecosystems may conflict with transportation plans and determine the potential cost of addressing the conflicts. They said that the department, in partnership with the Army Corps of Engineers, also identifies and acquires property for future mitigation. Finally, the planners said that the department assists small metropolitan planning organizations and localities in broad-based ecosystem screening on all projects to identify any ecological issues and potential costs associated with those issues. Twenty-two of the 31 planners who consider ecosystem conservation during transportation planning conduct corridor studies or screen projects for ecosystem impact. These planners survey ecosystems in the corridor and take steps to avoid or mitigate ecological impacts. For example, planners in New Mexico, with data from their Department of Game and Fish, used corridor studies to identify areas of high potential for animal-vehicle crashes. Planners described how such planning studies led to the construction of underpasses that allow bear and deer to pass beneath highways in the state. (See fig. 3.) Nebraska reviews ecological databases to identify potential impacts of planned transportation projects; considers avoidance strategies; and, if avoidance is not possible, documents the conflict so that project designers can develop mitigation measures, according to state transportation planners. Some planning agencies screen out projects from their plans that would have undesirable ecosystem impacts. For example, metropolitan planners for the Merrimack Valley area in Massachusetts told us that they use data from a geographic information system in planning to identify ecological resources in the path of proposed projects. Using this information, together with public comments on the project, they determine whether the ecological impacts require that the project be redesigned or terminated prior to beginning the environmental review required under NEPA. Nearly all planning agencies that develop corridor studies or use ecosystem impacts to screen projects involve stakeholders in developing their plans. For example, Alaska invites federal agencies (including the U.S. Fish and Wildlife Service, Army Corps of Engineers, National Park Service, Bureau of Land Management, and National Marine Fisheries Service) and its Departments of Fish and Game, and Natural Resources to meetings to provide input for transportation plans. After a meeting, each agency has the opportunity to write a letter of concern about specific resources or areas. Metropolitan planning organizations, local governments, municipal officials, tribes, elected officials, and anyone else who has expressed interest in Alaska's transportation planning are also invited to review and comment on transportation plans. Of the 22 planning agencies that consider ecosystems by conducting corridor studies or project screening, 12 include ecosystem conservation as a policy or goal in their long-range transportation plans.
For example, the Central Virginia Metropolitan Planning Organization's long-range transportation plan calls for an assessment of the social and environmental impacts of the transportation plan's recommendations, and establishes the policy of removing projects with unacceptably high environmental or community impacts from planning consideration. These 22 planning agencies also reported using one or more of the following common methods, either in addition to or in combination with corridor studies or screening:

using resource agencies as stakeholders in developing transportation plans;
considering the views of environmental interest groups in developing transportation plans;
using resource agency data to determine mitigation requirements, develop alternative locations, or avoid planning projects with unacceptably high ecosystem impact;
using geographic information systems to determine ecological resource locations;
providing funding for ecological impact studies;
having planning agency or resource agency personnel conduct site visits to determine or confirm the location of ecological resources; and
incorporating in transportation plans local plans that have considered ecosystem conservation.

Six of these agencies reported using at least 4 of the methods listed above. The remaining 16 used 3 or fewer methods. Because we did not evaluate the effectiveness of these methods, the number of methods used by a planning agency does not necessarily indicate effectiveness. (See table 4 in app. IV for a summary of the specific methods that each agency reported using.) Transportation planners in Georgia told us they focus on preserving the state's wetlands through mitigation banking. The state department of transportation has established funding accounts to purchase land for wetland mitigation banking and to pay for consultants to design wetland mitigation banks, according to planners in Georgia. They told us that the department has also entered into a memorandum of agreement with a state resource agency for the long-term maintenance of these mitigation banks. These planners said that nongovernmental organizations, including The Nature Conservancy, Georgia Trust for Public Land, and Georgia Conservancy, help identify properties for sale and conduct on-site reviews of potential sites for wetland mitigation banks. Federal resource agencies assist by reviewing proposed land acquisitions to determine if the land is suitable for use as a wetland mitigation bank, according to the planners. They added that, when transportation projects are at the conceptual design stage, state resource agencies identify wetlands, streams, and endangered species habitats that could be adversely affected by the project and point out avoidance or mitigation opportunities. Planners in Montana's Yellowstone County/Billings metropolitan area told us that their focus is on the natural resources of the Yellowstone River corridor and the Rim Rocks. These planners said they consider ecosystem conservation in planning transportation projects that would affect these natural resources, primarily through consultations with stakeholders such as the Yellowstone River Parks Association, Bike Net, local government representatives, planning boards, and neighborhood task forces. The planners said these planning boards and neighborhood task forces are involved throughout transportation planning.
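To make the screening approaches described above more concrete, the sketch below shows one simple way a proposed corridor could be overlaid on mapped ecological resources to flag potential conflicts before the NEPA review begins. It is an illustration only, not any agency's actual screening tool: the geometries, resource names, and 150-meter corridor width are invented assumptions, and a real screen would draw on agency GIS layers, such as a wetlands inventory or natural heritage database, rather than hard-coded shapes.

```python
# Minimal sketch of a GIS-style ecological screen (illustrative only).
# Geometries and names below are invented placeholders, not agency data.
from shapely.geometry import LineString, Polygon

# Hypothetical proposed road centerline (coordinates in meters).
centerline = LineString([(0, 0), (2_000, 500), (4_000, 400)])
corridor = centerline.buffer(150)  # assume a 150 m study corridor

# Hypothetical mapped resources, e.g., from a wetlands inventory or a
# natural heritage database, keyed by an identifier.
resources = {
    "wetland_A": Polygon([(900, 300), (1_400, 300), (1_400, 800), (900, 800)]),
    "habitat_B": Polygon([(3_000, -600), (3_600, -600), (3_600, -100), (3_000, -100)]),
    "wetland_C": Polygon([(5_000, 2_000), (5_500, 2_000), (5_500, 2_500), (5_000, 2_500)]),
}

# Flag any resource the corridor would touch and report the overlap area,
# which a planner might weigh when considering avoidance, redesign, or
# mitigation for the project.
for name, geom in resources.items():
    if corridor.intersects(geom):
        overlap_ha = corridor.intersection(geom).area / 10_000  # m^2 to hectares
        print(f"{name}: potential conflict, about {overlap_ha:.1f} ha of overlap")
    else:
        print(f"{name}: no conflict within the study corridor")
```

In practice, the planners we interviewed described feeding this kind of conflict information into decisions about avoiding sensitive areas, redesigning or dropping projects, or budgeting for mitigation before the federally required environmental review.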
The Delaware Department of Transportation, Butte County Association of Governments, California, and Madison Athens-Clarke Oconee Regional Transportation Study (the metropolitan planning organization in Athens, Georgia) reported considering ecosystem conservation in transportation planning by using some of the same methods as other agencies, but they do not use corridor studies or project screening or focus on a specific ecological resource. Each of these agencies includes ecosystem conservation as a policy or goal in its long-range transportation plan. Delaware Department of Transportation planners said they consider input from resource agencies and environmental interest groups and use geographic information system data to determine transportation projects' potential impact on ecological resources and develop alternatives as needed. Planners at the Butte County Association of Governments told us they receive input from resource agencies to determine mitigation requirements and use geographic information system data to determine ecological resource locations. Finally, the Madison Athens-Clarke Oconee Regional Transportation Study planners said that local land use plans consider ecosystem conservation as it relates to transportation and they incorporate the local plans in the metropolitan area's transportation plans. Planners in the Arizona, New Hampshire, and Illinois departments of transportation, as well as metropolitan planners in Great Falls City, Montana, and Montachusett, Massachusetts, said they do not consider ecosystem conservation in transportation planning and instead rely on compliance with NEPA to address ecological issues. The factors these agencies reported as discouraging them from considering ecosystem conservation in transportation planning include the time and resources required to do so and a lack of guidance on how to do so. These factors are discussed in more detail in the final section of this report. Resource agency officials in 19 of the 21 states that consider ecosystem conservation in transportation planning generally agreed that they assist transportation planners in doing so. (We were not able to contact resource agency officials in the two remaining states.) However, over half (11) of these resource agency officials said that they would like to be more involved in transportation planning or that communication with their state's department of transportation could be improved. For example, officials of the Oklahoma Department of Wildlife Conservation explained that they need to be involved early in transportation planning because the pressure from supporters of transportation projects often results in concerns about ecosystems surfacing as afterthoughts. Similarly, officials in Utah's Division of Wildlife Resources said that they are involved too late in planning because the project design is already set and budgeting for necessary mitigation sometimes has been inadequate. Although federal law does not specifically require planners to consider ecosystem conservation in transportation plans, the Federal Highway Administration encourages state and metropolitan planners to do so by identifying and promoting exemplary initiatives that are unique, innovative, attain a high-level environmental standard, or are recognized as particularly valuable from an environmental perspective, according to the agency's fiscal year 2004 performance plan.
These could be planning or project-level initiatives that involve, for example, designing mitigation projects that support wildlife movement and habitat connectivity, developing watershed-based environmental assessment and mitigation approaches, or using wetland banking. The agency has identified eight such initiatives and plans to identify and promote at least 30 initiatives by September 30, 2007. North Carolina’s Ecosystem Enhancement Program is one of the eight exemplary initiatives that the Federal Highway Administration has identified. In view of a rapidly expanding transportation program with a high volume of projects affecting an estimated 5,000 acres of wetlands and 900,000 feet of streams over 7 years, North Carolina plans to consider and mitigate the potential impacts of many planned projects in a comprehensive manner by assessing, restoring, enhancing, and preserving ecosystem functions and compensating for impacts at the watershed level. This approach to ecosystem conservation aims to decouple ecosystem mitigation from individual project reviews. Federal Highway Administration officials believe that such integrated approaches help break down organizational barriers between state departments of transportation and state resource agencies. They added that publicizing exemplary initiatives helps show that addressing ecosystem conservation in transportation planning improves working relationships between these agencies and facilitates interagency cooperation in the future. As noted in the next section of this report, many planners and resource agency officials that we interviewed cited improved interagency relationships as a positive effect of considering ecosystem conservation in transportation planning. The U.S. Fish and Wildlife Service also encourages state departments of transportation and state resource agencies to share project planning and ecosystem information to incorporate more forethought to wildlife habitats, before project designs are set and while flexibility still exists, according to agency officials. To this end, the Service, in cooperation with the International Association of Fish and Wildlife Agencies, has conducted several regional workshops on state wildlife conservation plans. Officials told us that during these workshops they discussed how the plans could be used to provide transportation planners with important information that they could consider in transportation planning. The U.S. Fish and Wildlife Service and other federal resource agencies also administer and enforce environmental laws and generally help state planners consider ecosystem conservation by responding to requests for data and providing comments on transportation plans. The federal agencies most frequently consulted by the transportation planners we interviewed were the Fish and Wildlife Service and the Army Corps of Engineers. Transportation planners said they often ask these resource agencies to provide ecological data from geographic information systems or ecological maps to help identify and evaluate a project’s impact. Many planners also said these federal resource agencies provide technical expertise or actively participate in transportation planning. For example, a New York Department of Transportation planner told us that the Fish and Wildlife Service and the Army Corps of Engineers provide technical expertise on the long-term impacts of transportation projects on ecosystems. 
Regardless of the ways planning agencies consider ecosystem conservation in transportation planning, 29 of the 31 transportation planners and 16 of 19 resource agency officials we interviewed reported one or more positive effects of doing so. These officials listed fewer negative effects. Twenty-eight planners and resource agency officials reported that considering ecosystem conservation in transportation planning enabled them to avoid or reduce adverse impacts on ecological resources—the most frequently reported positive effect. (See fig. 4.) For example, planners and state resource agency officials reported preventing irreparable habitat damage in New York by changing planning from a five-lane highway to planning for a lower-impact two-lane boulevard after a study revealed that the original project would be detrimental to the surrounding habitat, and updated traffic studies indicated that the wider highway was not needed to ensure mobility; decreasing habitat fragmentation in North Carolina by using geographic information system data on state ecological resources during project planning to avoid or mitigate unacceptable potential impacts on habitat; and working with the state resource agency in Nebraska to identify preferred times for construction in order to reduce impacts on the breeding of certain species. Fifteen transportation planners and state resource agency officials reported that considering ecosystem conservation improves a project’s cost and schedule estimates. For example, planners and state resource agencies reported better project cost estimating in Colorado because planners become aware of, and can plan to avoid, unacceptable adverse impacts on ecological resources; improved schedule certainty in Massachusetts, because addressing state resource agency requirements during planning provides more certainty that projects will not need to be redesigned to meet these requirements later, during federally required environmental reviews; and improved preparedness to address ecological issues during the development of a project in California by identifying those issues early in planning. In 13 instances, transportation planners and state resource agency officials reported improved relationships between departments of transportation and state resource agencies. For example, improved relationships through partnership and coordination among stakeholders can help resolve environmental issues in a timely and predictable manner. Additional positive effects that planners and state resource agency officials cited include an increased awareness of ecosystem conservation among the transportation planning agency’s staff, an improved public image of the department of transportation, and a stimulus to consider transportation alternatives such as transit. Compared with the number of positive effects attributed to considering ecosystem conservation in transportation planning, planners and resource agency officials reported relatively few negative effects. Planners in South Dakota and at the Benton-Franklin Council of Governments, Washington, told us that considering ecosystem conservation in transportation planning requires additional cost and time. A resource agency official in Iowa said that working with planners to determine project impacts and select mitigation sites adds to the agency’s workload. 
Finally, planners in Louisiana noted that the general public, as well as elected officials who support specific projects, becomes dissatisfied with the state department of transportation when environmental issues affect a project's delivery. Support from constituents and transportation agency personnel was the most often reported factor that encouraged planners to consider ecosystem conservation in transportation planning. The cost in staff time and money was the most often reported discouraging factor for agencies that reported considering ecosystem conservation. Planners at three of the five agencies that do not consider ecosystem conservation in transportation planning also cited the cost in time and resources, while the remaining two listed other discouraging factors. Twenty-seven of the 31 transportation planners we interviewed, who said they consider ecosystem conservation in transportation planning, cited support from within their own agencies, from political appointees, or from external constituents as a factor that motivated them to do so. (See table 2.) For example, transportation planners in Mississippi told us that their agency is committed to being environmentally aware, and that this culture has encouraged them to consider ecosystem conservation in planning. Metropolitan planners in Albany, New York, noted that their corporate culture provides a strong foundation to consider ecosystem conservation as they develop transportation plans. Similarly, metropolitan planners in central Virginia said that the planning commission's staff are concerned about being good stewards and maintaining a balance between transportation and other concerns. The views of elected officials and agency heads were another facet of constituent support. For example, the governor of New York has strongly encouraged planners there to improve their environmental performance, and the governor of New Mexico has initiated a new program that explores several environmental issues, according to planners in those states. This support from elected officials has influenced planners in these states to consider ecosystem conservation during transportation planning. Finally, planners in Delaware and Oregon emphasized the importance of their agency leaders' support for ecosystem conservation. In addition, the general public's attitude toward ecosystem conservation motivated planners to consider ecosystem conservation during transportation planning. Planners in Oregon and New Mexico attributed their consideration of ecosystem conservation partly to the pro-environment culture in their states. They told us, for example, that citizens are concerned about wildlife protection and view the natural environment as a major asset to the state. Metropolitan planners in Albany, New York, told us that citizens are concerned about excessive land consumption, which is one factor that encourages them to consider ecosystem conservation during transportation planning. Transportation planners also listed encouraging factors that were similar to the positive effects that were discussed earlier in this report. For example, 18 planners said that they were encouraged to consider ecosystem conservation in transportation planning by expectations of more certain cost estimates and construction schedules. Nine of these planners also listed positive effects that centered on developing more accurate cost estimates and determining more predictable project delivery dates.
Similarly, seven planners listed having fewer adverse effects on ecological resources as an encouraging factor, while five of these planners also listed this as a positive effect. Planners also listed improved relationships with the state resource agencies as an encouraging factor as well as a positive effect of considering ecosystem conservation in transportation planning. Although most of the planners we interviewed reported that considering ecosystem conservation in transportation planning was beneficial, doing so presented challenges. Chief among these challenges was the staff time and money required to consider ecosystem conservation in transportation planning, reported by 23 planners, including those in Arizona, New Hampshire, and Montachusett, Massachusetts, who do not consider ecosystem conservation in transportation planning. (See table 3.) An Arizona planner said that state reductions in funding and staffing have discouraged the department from considering ecosystem conservation during transportation planning, adding that the planning department staff has been reduced by 75 percent since the mid-1990s. New Hampshire planners said they do not have sufficient funds to enter into long-range studies. Therefore, there is pressure to wait until the review under NEPA, which requires, among other things, an assessment of the impact of proposed transportation projects on the natural and human environment. The staff time and money required was also the major discouraging factor for those planning agencies that do consider ecosystem conservation in transportation planning. For example, planners in Colorado and North Carolina told us that, while beneficial, it takes a significant amount of time and effort to develop, maintain, and provide access to the data required to consider ecosystem conservation during transportation planning. Additionally, some metropolitan area planners told us that small planning agencies are particularly hard-pressed to consider ecosystem conservation. For example, a metropolitan planner in central Virginia noted that the limited funding his agency receives for long-range transportation planning precludes more focused activities to address environmental factors, even though the agency would like to do so. Similarly, metropolitan area planners in Athens, Georgia, told us their ability to conduct detailed ecological analyses during planning is very limited because they do not have enough staff. Difficulties in obtaining involvement or guidance from stakeholders were the second most often cited discouraging factor, according to the planners we interviewed. This was the chief discouraging factor mentioned by a planner in Montachusett, Massachusetts, a metropolitan planning organization that does not consider ecosystem conservation before project developers prepare environmental impact assessments under NEPA. The planner stated that the planning organization lacks guidance from the state or federal agencies on the priority of ecosystem conservation. The planner noted that the planning organization addresses all federal requirements in transportation planning, as well as those issues the state emphasizes, but ecosystem consideration has not been one of them. Planners in Utah, a state that does consider ecosystem conservation in transportation planning, told us that resource agencies prefer to comment on projects that are better defined than is typically the case when they appear in transportation planning documents.
On the other hand, a Utah resource agency official told us that his agency would like to be involved in these earlier planning phases, but the state department of transportation does not notify it early enough in planning. In addition, some planners told us that they lacked guidance from stakeholders, namely state resource agencies, on how to consider ecosystem conservation in transportation planning. They noted that long- term or comprehensive plans for managing the state’s ecological resources would help them make decisions about what resources to consider during planning; however, their state resource agencies had not completed such plans. A few of the state and federal resource agencies we interviewed noted, though, that some states are developing wildlife conservation plans as part of a new federal program or other habitat management plans that they believe will be useful to state departments of transportation. Third, pressure from political leaders or project proponents to move forward in spite of ecological concerns, or because of competing priorities, also discouraged planners from considering ecosystem conservation in transportation planning. For example, planners in North Carolina told us that developers give little credence to environmental concerns. Economic development in Iowa takes precedence over ecosystem concerns, according to a planner there. A state resource agency official in Oregon echoed these sentiments, stating that, in some instances, regional transportation planners and the state department of transportation value improving economic development over conserving ecological resources. A few other planners cited additional discouraging factors. Local expectations that a project will be built, regardless of ecosystem concerns, is a discouraging factor, according to a transportation planner in North Carolina. Also, planners in three jurisdictions noted that circumstances might change between early planning for a project and its implementation. This was the chief discouraging factor for Illinois, where planners do not consider ecosystem conservation before NEPA. Finally, planners in Great Falls City-County, Montana, a jurisdiction that does not consider ecosystem conservation in transportation planning, stated that their existing policy is to rely on NEPA to assess the ecosystem and other environmental impacts of proposed transportation projects. The Department of Transportation and U.S. Army Corps of Engineers had no comments on a draft of this report. The Department of the Interior generally agreed with the information in a draft copy of this report and provided technical clarifications, which we incorporated as appropriate. See appendix V for a copy of the Department of Interior’s comments. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time we will send copies of this report to congressional committees with responsibilities for highway and environmental issues; the Secretary of Transportation; the Secretary of the Interior; the Administrator, Federal Highway Administration; the Director, U.S. Fish and Wildlife Service; the Commander, U.S. Army Corps of Engineers; and the Director, Office of Management and Budget. We will also make copies available to others upon request. This report will be available at no charge on our home page at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact either James Ratzenberger at ratzenbergerj@gao.gov or me at siggerudk@gao.gov. Alternatively, we may be reached at (202) 512-2834. Key contributors to this report were Jaelith Hall-Rivera, Rebecca Hooper, Jessica Lucas-Judy, Edmond Menoche, James Ratzenberger, and Michelle K. Treistman. Before each telephone interview with officials at state departments of transportation and metropolitan planning organizations, we provided participants with the following questions and encouraged them to review the questions and to invite others as appropriate to participate in the interview in order to provide as accurate and complete answers as possible. Question numbers preceded by "SLR" are those referring to the development of the long-range transportation plan. Questions preceded by "ST" are those referring to the development of the state transportation improvement program. Finally, questions preceded by "SPN" refer to a phase of project planning that immediately precedes NEPA, which we termed "pre-NEPA planning." Questions for metropolitan area planners were similarly numbered except that they began with the letter "M" to easily differentiate between the state and metropolitan planners' questions and responses.

1) Please answer a, b and c, and follow the instructions as applicable.
a) Does your state consider ecosystem conservation during the creation of the long-range transportation plan? Yes or No. If yes, answer all SLR questions. If no, answer SLR 7 and SLR 8. In either case, please also answer b and c below.
b) Does your state consider ecosystem conservation during the creation of the state transportation improvement program? Yes or No. If yes, answer all ST questions. If no, answer only ST 8 and ST 9. In either case, please also answer a and c.
c) Does your state consider ecosystem conservation during the pre-NEPA phase, or at any other time other than during and after NEPA? Yes or No. If yes, answer all SPN questions. If no, answer only SPN 7 and SPN 8. In either case, please also answer a and b.

(Answer if applicable.)
SLR1) How does your state consider ecosystem conservation during the creation of the long-range transportation plan?
SLR2) What stakeholders, if any, are involved in helping you consider ecosystem conservation in the long-range transportation plan (federal or state agencies, non-government organizations, other)?
SLR3) How are these stakeholders involved in helping you consider ecosystem conservation in the long-range transportation plan?
SLR4) What type of ecosystem data, if any, do you include in the development of the long-range transportation plan?
SLR5) Please provide any other ways, not discussed above, that your state considers ecosystem conservation when developing the long-range transportation plan.

We would now like to discuss the effects of considering ecosystem conservation in developing the long-range transportation plan.
SLR6) Please describe any anticipated or observed effects, positive or negative, that you can attribute to the consideration of ecosystem conservation in the long-range transportation plan.

We would like to know about factors that encourage or discourage consideration of ecosystem conservation in long-range transportation planning.
SLR7) Please list the three factors that have been the most important in encouraging your state to consider ecosystem conservation as the long-range transportation plan is developed.
SLR8) Similarly, please list the three factors that have been the most important in discouraging your state to consider ecosystem conservation as the long-range transportation plan is developed.

We would like to learn about how your state considers ecosystem conservation as it develops the state transportation improvement program. (Answer if applicable)
ST1) How does your state consider ecosystem conservation during the creation of the state transportation improvement program?
ST2) What stakeholders, if any, are involved in helping you consider ecosystem conservation in the state transportation improvement program (federal or state agencies, non-government organizations, other)?
ST3) How are these stakeholders involved in helping you consider ecosystem conservation in the state transportation improvement program?
ST4) What type of ecosystem data, if any, do you include in the development of the state transportation improvement program?
ST5) Do you use project criteria that incorporate ecosystem conservation when determining which projects will be placed on the state transportation improvement program?
ST6) Please provide any other ways, not discussed above, that your state considers ecosystem conservation when developing the state transportation improvement program.

We would now like to discuss the effects of considering ecosystem conservation in developing the state transportation improvement program.
ST7) Please describe any anticipated or observed effects, positive or negative, that you can attribute to the consideration of ecosystem conservation in the state transportation improvement program.

We would like to know about factors that encourage or discourage consideration of ecosystem conservation in the creation of the state transportation improvement program.
ST8) Please list the three factors that have been the most important in encouraging your state to consider ecosystem conservation as the state transportation improvement program is developed.
ST9) Similarly, please list the three factors that have been the most important in discouraging your state to consider ecosystem conservation as the state transportation improvement program is developed.

We would like to learn about how your state considers ecosystem conservation as it begins project development—after the project has been listed on the state transportation improvement program, but before the NEPA process begins. As previously discussed, we call this phase the "pre-NEPA" phase. (Answer if applicable)
SPN1) How does your state consider ecosystem conservation during the pre-NEPA phase?
SPN2) What stakeholders, if any, are involved in helping you consider ecosystem conservation during the pre-NEPA phase (federal or state agencies, non-government organizations, other)?
SPN3) How are these stakeholders involved in helping you consider ecosystem conservation during the pre-NEPA phase?
SPN4) What type of ecosystem data, if any, do you include in the pre-NEPA phase?
SPN5) Please provide any other ways, not discussed above, that your state considers ecosystem conservation in the pre-NEPA phase.

We would now like to discuss the effects of considering ecosystem conservation in the pre-NEPA phase.
SPN6) Please describe any anticipated or observed effects, positive or negative, that you can attribute to the consideration of ecosystem conservation in the pre-NEPA phase.

We would like to know about factors that encourage or discourage consideration of ecosystem conservation in the pre-NEPA phase.
SPN7) Please list the three factors that have been the most important in encouraging your state to consider ecosystem conservation during the pre-NEPA phase.
SPN8) Similarly, please list the three factors that have been the most important in discouraging your state to consider ecosystem conservation during the pre-NEPA phase.

Is there anything else that you would like to tell us about considering ecosystem conservation in transportation planning?

We would like to contact someone in the state resource agency (Department of Natural Resources, Department of Environmental Protection, etc.) that is most involved with your agency in considering ecosystem conservation during the transportation planning process. Please provide the name, official title, and contact information.

Prior to each interview with officials at state resource agencies, we provided participants with the following questions and encouraged them to review the questions and to invite others as appropriate to participate in the interview in order to provide as accurate and complete answers as possible. "RA" precedes all question numbers so that we could easily distinguish questions and responses as those pertaining to resource agencies.

RA1) The _____________ state department of transportation told us that your agency is involved in transportation planning. Please describe your involvement.
RA2) How did your agency become involved in state transportation planning?
RA3) Is your agency involved with metropolitan planning organizations in considering ecosystem conservation in the transportation planning process? If yes, please continue. If no, please skip to RA7.
RA4) In what ways is your agency involved with metropolitan planning organizations in considering ecosystem conservation in transportation planning?
RA5) What metropolitan planning organizations are you involved with? (If you do not know the names of the metropolitan planning organizations, simply list the number that you are involved with.)
RA6) How did your agency become involved in metropolitan planning organization transportation planning?
RA7) Does your agency collect or generate ecosystem data? Yes or No. Is it available to state departments of transportation? Is it available to metropolitan planning organizations?

We would now like to discuss the effects of considering ecosystem conservation in any phase of transportation planning.
RA8) Please describe any anticipated or observed effects, positive or negative, that you can attribute to the consideration of ecosystem conservation in transportation planning prior to NEPA.

We would now like to ask you about factors that encourage or discourage your participation in the consideration of ecosystem conservation in transportation planning.
RA9) Please list the three factors that you consider to be the most important in encouraging your agency to participate in consideration of ecosystem conservation in transportation planning.
RA10) Please list the three factors that you consider the most important in discouraging your agency from participating in consideration of ecosystem conservation in transportation planning.
RA11) Is there anything else you would like to tell us about considering ecosystem conservation in transportation planning?

Thank you.
To obtain a basic understanding of how transportation planners consider ecosystem conservation in transportation planning and how federal agencies are involved, we discussed transportation laws, regulations, and planning procedures with officials in the following agencies and organizations:

the Federal Highway Administration in headquarters and Phoenix, Arizona; the U.S. Fish and Wildlife Service in headquarters, Phoenix and Tucson, Arizona, and Denver, Colorado; and the Army Corps of Engineers in headquarters, Baltimore, Maryland, and Phoenix, Arizona;

state departments of transportation, resource agencies, and metropolitan planning organizations in Virginia, Massachusetts, Wisconsin, Mississippi, and Colorado; the metropolitan planning organizations for the Washington, D.C., area and Pima County, Arizona; and state departments of transportation and resource agencies in Florida and Maryland; and

the American Association of State Highway and Transportation Officials, Association of Metropolitan Planning Organizations, The Nature Conservancy, International Association of Fish and Wildlife Agencies, and Defenders of Wildlife.

At each of these locations, we also obtained and reviewed transportation planning documents. We defined ecosystems as plants and animals and the habitats that support them. We defined planning as activities associated with developing the federally required long-range transportation plan, short-range transportation improvement program, and the nonfederally required project planning that some jurisdictions perform just prior to beginning the environmental review required by the National Environmental Policy Act (NEPA), as well as any activities, such as corridor studies, that are performed concurrently with, but independently of, federally mandated transportation planning activities. Because federal law already requires that states and local governments meet air and water quality standards, our inquiry did not include identifying whether state departments of transportation and metropolitan planning organizations were considering these issues in transportation planning. To ensure ecosystem diversity among the 12 metropolitan planning organizations in our sample, we divided the nation into quadrants containing a roughly equal number of states. Then, to ensure that our sample would reflect the varying extent to which metropolitan planning organizations consider ecosystem conservation in transportation planning, we used the results from our 2002 survey of all metropolitan planning organizations. The survey asked how much consideration, if any, they give to the impact of transportation projects on environmentally sensitive lands, such as wetlands, when they develop their transportation plans. According to their answers, we divided the metropolitan planning organizations in each quadrant into three subgroups: (1) those that indicated little or no, or some consideration; (2) those that indicated moderate consideration; and (3) those that indicated great or very great consideration.
We then randomly selected one metropolitan planning organization from each of the 12 subgroups, resulting in the following sample:

Benton-Franklin Council of Governments, Washington;
Butte County Association of Governments, California;
Capital District Transportation Commission, New York;
Central Virginia Metropolitan Planning Organization, Virginia;
Flagstaff Metropolitan Planning Organization, Arizona;
Great Falls City-County Planning, Montana;
Greensboro Transportation Advisory Committee, North Carolina;
Madison Athens-Clarke Oconee Regional Transportation Study, Georgia;
Merrimack Valley Planning Commission, Massachusetts;
Montachusett Regional Planning Commission, Massachusetts;
Waco Metropolitan Planning Organization, Texas; and
Yellowstone County/Billings Metropolitan Planning Organization, Montana.

To gain an understanding of the breadth and depth of each sample state's and metropolitan planning organization's consideration of ecosystem conservation in transportation planning, we developed a variety of questions about how planners implement this consideration, whether and how they involve stakeholders, what types and sources of data they consider, what positive and negative effects they have observed or expect to observe, and what factors encourage and discourage them from these efforts. (See app. I for a complete listing of these questions.) Through telephone interviews, we asked planners to address these questions for each of three phases of transportation planning: (1) as they develop their long-range transportation plans, (2) as they develop their short-range transportation improvement programs, and (3) in the project planning stage that immediately precedes the environmental review under NEPA. Planners reported similar effects of considering ecosystem conservation in transportation planning, and similar encouraging and discouraging factors, across these three phases. Therefore, we did not report answers to these questions by phase. Appendix II contains the questions that we asked planners whom we interviewed in state departments of transportation and metropolitan planning organizations. We also reviewed the available long-range transportation plans of each state and metropolitan planning organization in our samples to determine whether these plans contained goals related to ecosystem conservation. To obtain the perspectives of state resource agency officials, we asked officials at each department of transportation in our sample to identify the official at the state resource agency who was most involved with the department of transportation during planning. We conducted telephone interviews with resource agency officials in 22 of our 24 sample states, asking these officials how they participate in considering ecosystem conservation in transportation planning, whether they collect ecological data and make these data available to transportation planners, the effects that they can attribute to considering ecosystem conservation, and the factors that encourage or discourage their participation. See appendix II for a complete listing of the questions that we asked resource agency officials. In analyzing our interview responses, we used content analysis and consensus agreement among four analysts to categorize similar responses, and grouped state and metropolitan planning organizations accordingly. To increase the reliability of our coding of responses, we used consensus agreement among the same four analysts.
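The quadrant-and-subgroup selection described earlier in this appendix amounts to a simple stratified random draw: four geographic quadrants crossed with three levels of reported consideration yield 12 strata, and one metropolitan planning organization is drawn from each. The sketch below illustrates that logic only; the organization names, quadrant labels, and survey answers are placeholders rather than the actual sampling frame, and the fixed seed is used solely to make the illustrative draw repeatable.

```python
# Illustrative sketch of a stratified random draw (placeholder data only).
import random
from collections import defaultdict

mpos = [
    # (name, geographic quadrant, reported level of consideration)
    ("MPO Alpha", "Northwest", "little/none or some"),
    ("MPO Bravo", "Northwest", "little/none or some"),
    ("MPO Charlie", "Northwest", "moderate"),
    ("MPO Delta", "Northwest", "great or very great"),
    ("MPO Echo", "Southeast", "moderate"),
    ("MPO Foxtrot", "Southeast", "moderate"),
    # ...entries for the remaining quadrants and subgroups would follow
]

# Group the organizations into strata keyed by (quadrant, level).
strata = defaultdict(list)
for name, quadrant, level in mpos:
    strata[(quadrant, level)].append(name)

# Draw one organization at random from each stratum.
random.seed(42)  # fixed seed so this illustrative draw is repeatable
sample = {stratum: random.choice(candidates) for stratum, candidates in strata.items()}

for (quadrant, level), name in sorted(sample.items()):
    print(f"{quadrant:10s} | {level:20s} | {name}")
```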
We did not verify the accuracy of the information that we obtained in our interviews or determine whether or how the consideration of ecosystem conservation that planners described affected transportation projects or ecosystems because it was not practical to do so. However, the variety of questions that we asked of transportation planners, combined with the perspectives of resource agency officials, mitigates the potential that our results portray more extensive consideration of ecosystem conservation in transportation planning than may actually exist. Although we requested planners’ and resource agency officials’ observations about the effects of considering ecosystem conservation in transportation planning, we did not evaluate the effectiveness of their efforts, or determine whether one agency’s efforts were more effective than another’s. The results of our work cannot be projected to all states and metropolitan planning organizations. In order to make reliable generalizations, we would have needed to randomly select a larger sample of states and metropolitan planning organizations than time allowed.
The nation's roads, highways, and bridges are essential to mobility but can have negative effects on plants, animals, and the habitats that support them (collectively called ecosystems in this report). Federally funded transportation projects progress through three planning phases: long range (20 or more years), short range (3 to 5 years), and early project development (collectively defined as planning in this report) before undergoing environmental review (which includes assessing air and water quality, ecosystems, and other impacts) required under the National Environmental Policy Act. Federal law requires planners to consider protecting and enhancing the environment in the first two phases, but does not specify how and does not require such consideration in the third phase. GAO reported on (1) the extent to which transportation planners consider ecosystem conservation in planning, (2) the effects of such consideration, and (3) the factors that encourage or discourage such consideration. GAO contacted 36 planning agencies (24 states and 12 of approximately 380 metropolitan planning organizations), as well as officials in 22 resource agencies that maintain ecological data and administer environmental laws. The Department of Transportation and U.S. Army Corps of Engineers had no comments on a draft of this report. The Department of the Interior generally agreed with the contents of our draft report. Of the 36 transportation planning agencies that GAO contacted, 31 considered ecosystem conservation in transportation planning, using a variety of methods. For example, Colorado conducts studies that incorporate ecosystem issues to guide future transportation decisions, uses advance planning to avoid or reduce impacts, and actively involves stakeholders. New Mexico uses planning studies to identify locations where wildlife are likely to cross highways and design underpasses to allow safe crossings. In the absence of specific requirements, federal agencies encourage ecosystem consideration in planning. Planners and state resource agency officials most frequently reported reduced ecosystem impacts and improved cost and schedule estimates as positive effects. For example, planners in New York changed a planned five-lane highway to a lower-impact two-lane boulevard after weighing the area's mobility needs and the project's impact on the surrounding habitat. In Massachusetts, resource agency officials said that addressing ecological requirements in planning improved schedule certainty during the federally required environmental review. Furthermore, planners and resource agency officials reported that working together has improved relationships between their agencies, thereby allowing ecosystem concerns to be resolved in a more timely and predictable manner. Officials also listed negative effects, such as higher project costs and more work for resource agencies. Constituent support from agency staff, political appointees, or the public was the most frequently reported factor (27 instances) that encouraged planners to consider ecosystem conservation. For example, New Mexico's "pro-environment" culture reportedly encourages planners to consider ecosystem conservation. The cost in time and resources of considering ecosystem conservation was most often cited as a discouraging factor (23 instances). For example, Colorado planners cited the significant amount of time needed to collect and maintain access to ecosystem data.
According to Postal Service figures, of the 177 billion pieces of mail it processed in 1994, over 118 billion pieces, or 67 percent, were categorized as bulk business mail. In fiscal year 1994, the Service recorded revenue from bulk business mail of $23.1 billion—48.4 percent of its total mail revenue. The Postal Service began offering postage discounts to mailers who presorted their mail in 1976, and in 1988 it began offering discounts for barcoding. Presort and barcode discounts are to compensate mailers for performing work that otherwise would have to be done by the Postal Service. In fiscal year 1994, about 34 percent of all First-Class mail and 92 percent of all third-class mail was discounted. According to Postal Service studies, the value of these discounts, during that year, totaled about $8 billion. Most bulk business mail is entered at Business Mail Entry Units (BMEU) and Detached Mail Units (DMU), located throughout the Postal Service’s 85 districts. DMUs are postal acceptance units located at mailers’ mail preparation facilities. BMEUs are often located in or adjacent to mail processing plants, which are postal facilities that process mail for distribution to both local and national destinations. Bulk mail is also entered at many of the 40,000 post offices located throughout the country. The Postal Service’s mail acceptance clerks are the gatekeepers for accepting bulk business mail into the mailstream. Their job is to ensure, before mail enters the Postal Service’s processing and distribution facilities, that mailers have prepared their mail in accordance with postal requirements and that discounts given for presorting and barcoding have, in fact, been earned. This is a difficult task given the time constraints and the wide variation in the way bulk business mail can be prepared and still meet Postal Service standards. If mail barcoded by a mailer is accepted by clerks but later fails to run on postal barcode sorters, the Postal Service incurs additional costs. This is because the Postal Service must rework the mail at its own expense even though it gave the mailer the barcoded rate to perform that work. Bulk mail acceptance clerks are to perform cursory verifications on all mailings and in-depth verifications on randomly selected mailings. For every in-depth verification completed, mail acceptance clerks are required to prepare a written verification report (Form 2866). Postal facilities that receive 100 or more bulk mailings during a 4-week accounting period are to prepare a consolidated bulk mail acceptance report (Form 2867) documenting the results of their in-depth verifications. Summary reports of Forms 2867 are to be used by postal managers at various times to monitor, among other things, mail volume and revenue generated through the bulk mail acceptance system. Mail acceptance supervisors are to regularly verify the work of the clerks and report the results to postal management. At Postal Service headquarters, management responsibility for the bulk business mail program resides with the Vice President of Marketing Systems, who reports to the Chief Marketing Officer and Senior Vice President. Area Vice Presidents and district managers are responsible for ensuring that bulk mail acceptance activities conform to prescribed standards within their geographic spans of control. Appendix I contains additional background information on the Service’s bulk mail acceptance system. 
Our objective in this report was to determine whether the current system of controls for accepting bulk business mailings reasonably assures the Postal Service that mailer-claimed discounts are granted only when earned. The scope of our review was limited primarily to the Service’s BMEUs and DMUs, which account for the majority of the bulk mail accepted by the Service. We did not review controls at other acceptance units, such as post offices and branches. To evaluate bulk mail acceptance controls, we (1) obtained and analyzed policies and procedures affecting bulk business mail acceptance; (2) visited 7 district offices located in 6 of 10 Postal Service area offices, and interviewed postal staff assigned to 17 business mail acceptance units in those districts; (3) collected and analyzed bulk mail acceptance reports that were available from 77 of 85 district offices for fiscal year 1994; and (4) interviewed various Postal Service managers and operations personnel at Postal Service headquarters and selected field locations. We selected field locations judgmentally, primarily on the basis of management reports submitted by acceptance units. We also interviewed officials from eight commercial bulk business mailers at the field locations visited. Additionally, we interviewed a Postal Service contractor who is studying the feasibility of using risk assessment as a means of targeting high-risk mailings, and we interviewed and obtained written information from IRS and Customs Service officials about verification methods employed by their respective agencies. We interviewed the Executive Director of the National Association of Presort Mailers to obtain information on the presort industry’s views regarding the Service’s bulk mail acceptance system. We obtained and analyzed documentation on proposed and ongoing Postal Service initiatives to improve bulk mail acceptance practices—although we did not evaluate the effectiveness of those initiatives because they are not yet fully implemented. Finally, we reviewed recent Postal Inspection Service audits on bulk mail operations and discussed ongoing work with cognizant postal inspectors. The work done for this report was part of our broader revenue protection survey that began in November 1993. In May 1994, as part of our revenue protection work, we reported on postage meter fraud. For the most part, our review of the Service’s bulk mail acceptance controls was done at Postal Service headquarters and selected field locations between February 1995 and February 1996. We did all of our work in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from the Postal Service. Its comments are discussed at the end of this letter and are reprinted as appendix III. The Postal Service also provided additional technical comments on the draft, which were incorporated where appropriate. It is inevitable that some revenue losses will occur in a program of this magnitude, and, as with any business enterprise, the risk of revenue losses must be weighed against the cost of establishing controls to prevent and detect such losses. The Postal Service, however, is hindered in its ability to make data-driven decisions about the adequacy of bulk mail acceptance controls. For example, the Service does not know the full extent of losses resulting from mailer preparation errors, and, furthermore, it has not sought to develop a means for identifying such losses. 
Rather, the Postal Service operates under the premise that because the Inspection Service and managers in charge of bulk mail acceptance have not reported large dollar losses, such losses must not have occurred. We did not attempt to estimate the extent to which revenue losses have occurred. However, we believe that sufficient evidence exists for the Postal Service to be concerned that substantial revenue losses may have occurred and gone undetected in the bulk business mail program. In 1989 and 1990, to address what it acknowledged to be a “seat of the pants” approach to bulk mail acceptance, the Postal Service developed and implemented new management guidelines and verification requirements designed to give it reasonable assurance that significant amounts of bulk mail revenue were not being lost. Those guidelines contained specific procedures and approaches for bulk mail acceptance and provided guidance to supervisors and managers for more analytical and effective management of acceptance employees. Available documentation shows that during fiscal year 1994, the bulk business mail control system identified mailer preparation errors totaling $168 million. However, the control system fell short of providing the Postal Service with the assurance it needs that significant amounts of revenue are not being lost in the bulk business mail program, as discussed below. Available Service documentation and our interviews with Service officials indicated that a large amount of bulk mail was accepted without proper verification. This occurred because clerks often skipped required in-depth verifications of bulk mail. Additionally, supervisors frequently failed to do required follow-up verifications of acceptance clerks’ work. The Postal Service’s failure to ensure that required verifications were done, and done properly, left it vulnerable to revenue losses. Postal Service figures show that during fiscal year 1994, the Service accepted over 16.2 million bulk business mailings of various sizes, classes, and levels of preparation nationwide. Typically, over 50,000 mailings were accepted daily, and the mailings averaged about 6,900 mail pieces. According to Postal Service requirements, all of the mailings should have received a cursory review, and between 2.3 and 2.9 million should have received an in-depth verification. The Postal Service estimated that, given the criteria for selecting mailings for in-depth verification, each acceptance location should have done in-depth verifications on 14 to 18 percent of the mailings received. However, available documentation shows that only about 1.7 million in-depth verifications were done—about 60 to 75 percent of the required verifications. The remaining verifications were either not done or not documented. Available documentation for fiscal year 1994 shows that about 30 percent (23 of 77) of the postal districts reporting the results of their in-depth verifications did in-depth verifications on less than the estimated minimum of 14 percent of mailings. Among the 77 districts, the percentage of mailings verified in depth ranged from less than 2 percent to more than 30 percent. Because acceptance procedures are not implemented uniformly throughout the United States, Postal Service managers and acceptance employees, as well as individuals in the business mail industry, said that some mailers “shop around” for the “best” acceptance unit. The Executive Director of the National Association of Presort Mailers cited inconsistencies among acceptance units as a concern of the mailing industry. 
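The verification-rate figures above can be reconstructed with simple arithmetic, as in the sketch below. The inputs are the rounded numbers reported for fiscal year 1994; the result matches the "about 60 to 75 percent" cited in this report once rounding is taken into account. This is an illustrative check, not a Postal Service analysis.

```python
# Illustrative check of the fiscal year 1994 verification-rate figures cited above.
total_mailings = 16_200_000                 # bulk business mailings accepted nationwide
required_rate_low, required_rate_high = 0.14, 0.18   # estimated share needing in-depth verification
completed_in_depth = 1_700_000              # in-depth verifications actually documented

required_low = total_mailings * required_rate_low    # roughly 2.3 million
required_high = total_mailings * required_rate_high  # roughly 2.9 million

lowest_share = completed_in_depth / required_high    # roughly 60 percent
highest_share = completed_in_depth / required_low    # roughly 75 percent

print(f"Required in-depth verifications: {required_low/1e6:.1f}M to {required_high/1e6:.1f}M")
print(f"Completed as a share of required: {lowest_share:.0%} to {highest_share:.0%}")
```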
At almost half of the locations we visited, officials said that heavy workloads and unscheduled leave were frequently the reasons that required in-depth verifications were not being performed. They also said that balancing the goals of doing required mail verifications and improving customer service further complicated the situation. Some acceptance unit managers we spoke with said that the verification function is secondary to the Postal Service’s goal of increasing the level of customer satisfaction. One said that this conflict makes it difficult to do all required verifications because mailings that fail verification are more likely to miss dispatch times and delivery schedules and are, therefore, likely to decrease customer satisfaction. Another reason why some of the required verifications were not done is that the Postal Service allows acceptance clerks to skip verifications without higher level approval. For example, certain mailings are designated by computer program software as requiring an in-depth verification. However, clerks can override the system and enter mail directly into the mailstream without performing the required verification. Two acceptance unit managers told us that such overrides frequently occur but that they do not keep records on the extent of this practice. They said the overrides can generally be attributed to time pressures to “keep the mail moving.” Test mailings initiated by headquarters program officials also raised questions about the adequacy of the verifications. To develop some baseline information on the quality of bulk mail presented at entry units, the Inspection Service, at the request of headquarters bulk mail acceptance program officials, agreed to submit 36 test mailings at selected bulk mail acceptance locations. Each test mailing was to be submitted as a first-time mailing and therefore required to undergo an in-depth verification. Each test mailing was to consist of about 11,000 to 12,000 pieces of third-class mail—about 25 sacks—and each sack was to contain mail preparation errors that the inspectors believed should have easily been caught by acceptance clerks. The Inspection Service completed only three test mailings before the project was discontinued at the direction of the headquarters bulk mail acceptance program officials. For the first test mailing, acceptance clerks at that location did not identify any of the errors. Moreover, although the erroneously prepared test pieces were presented as lower, third-class bulk rate mail, they were processed as First-Class mail—giving them priority over other third-class bulk business mailings. Test results were not any better for the next two mailings—the “planted” errors were not detected in the verification process. Bulk mail acceptance program officials explained that they discontinued the test mailings because they provided little useful information for improving bulk mail acceptance controls. They believed audits of mailings deposited by mailers would provide better data to assess the types of preparation errors that are slipping through the acceptance process. Accordingly, program officials replaced the test mailings with audits of mailer-deposited mailings. These audits were led by bulk mail acceptance program officials. In February 1996, bulk mail acceptance program officials said that they were still reviewing data from the audits. They said about 930 mailings were audited at 8 locations in November and December 1995. 
The results of those audits were not available at the time of our review. To help ensure that the required verifications are done, and done properly, the Postal Service requires that supervisors do at least four Quality Presort Verifications (QPV) every 4-week postal accounting period. The QPV entails a supervisor rechecking an in-depth verification performed by a clerk to assess performance and also identify training needs. Analysis of Postal Service data showed, however, that such verifications are frequently either not done or not reported. For example, 67 of 74 postal districts reported doing fewer than the required number of verifications—including 4 that reported doing none. The 74 districts should have done at least 111,000 QPVs but reported doing only 44,000—about 40 percent. The manager of one of the acceptance units we visited said QPVs are not being done because of a lack of supervisory staff and inadequate supervisory training on verification of mailing statements. The Service does not require supervisors directly responsible for BMEU and DMU activities to have any training relating to verification activities. In contrast, the Service requires that BMEU and DMU acceptance clerks receive 120 hours of classroom training. Our interviews at selected acceptance units showed that clerks had generally received the required training. According to Service officials, under the current system of controls, previously failed mailings can enter the mailstream at a later time or at a different BMEU or some other Service acceptance unit without the errors being corrected. Acceptance clerks do not have a reliable way of tracking failed mailings to ensure that when those mailings are resubmitted for entry into the mailstream, they can be identified and rechecked. The ability to identify and recheck previously failed mailings is necessary for clerks to verify that errors have been corrected. However, following a failed verification, mailings can lose their identity and be entered into the mailstream without the problems being identified, corrected, or additional postage being paid. To help guard against this, some acceptance locations were keeping informal records of failed mailings. Several bulk mail acceptance managers, however, believed that the effectiveness of such records, while better than nothing, is limited because the records are informal and not shared with other acceptance units. Officials from the bulk mail acceptance program office and Inspection Service provided us the following examples, which demonstrate several ways that failed mailings can be entered into the mailstream without problems being corrected or without additional postage being paid by the mailer. A mailing that failed verification at one location can enter the mailstream at another location. Mailers sometimes have permits to enter bulk mail at more than one location, which can work to their advantage since there is no exchange of information between locations concerning failed mailings. Overall, according to the bulk mail acceptance managers and postal inspectors we spoke with, the chance of a failed mailing being subjected to an in-depth verification at a second location is small, so the odds are heavily weighted in the mailer’s favor. A mailing that failed verification at one location during one shift can enter the mailstream at the same location during a different shift. Informal records of failed mailings may help prevent some of this, but not all acceptance locations we visited kept informal records of failed mailings. 
A failed mailing may be combined with another mailing, thus losing its original identity. It could then enter the mailstream without further verification. Since 1988, the Postal Service has granted postage discounts for mailer-barcoded mail. However, it has been slow to provide the tools necessary to ensure that when accepted, barcoded mail meets the Service’s standards for claimed discounts. Generally, the Service’s approach to ensuring accurate, machine-readable barcodes has been to work with bulk mailers to ensure that when the mail is prepared, it meets the Service’s standards and requirements. Nevertheless, acceptance clerks are responsible for verifying that barcoded mail meets Postal Service standards. With the volume of mailer-barcoded mail increasing yearly, the Postal Service recognized the need to try to ensure more standardization of mailer-applied barcodes. In the mid-1980s, the Postal Service developed the Coding Accuracy Support System (CASS) as a quality control measure that, among other things, is intended to help ensure that mailer-applied barcodes (1) are produced using current address information and (2) match the address printed on the mail piece. To encourage mailers to have their software CASS certified, in 1991 the Postal Service began allowing barcode rates only on mailings produced using CASS-certified software. While the purpose of CASS is to ensure that mailers apply barcodes that reflect the right addresses, it does not ensure that the barcodes meet the Postal Service’s technical standards for height, width, spacing, placement, and clarity and thus can be processed on the Service’s automated barcode sorters. Bulk mail acceptance clerks are to help ensure that mailer-applied barcodes meet the Postal Service’s technical standards and can be read by its sorters. However, because of the precision required of machine-readable barcodes, acceptance clerks need special equipment, such as electronic scanners that can read barcodes, so that they can objectively verify the readability of barcodes. Postal management recognized the need for such equipment 5 years ago. For example, in a memo to regional managers in 1990, a senior Postal Service headquarters management official acknowledged that the Postal Service had a problem because it was accepting discounted, barcoded mail even though it did “. . . not have the mechanisms or capability in the Bulk Mail Acceptance Units or Detached Mail Units to properly verify the accuracy and readability of customer applied barcodes. . .” Although the Postal Service has recognized the need for special equipment to verify barcodes, at the BMEUs and DMUs we visited, clerks and managers did not have such equipment. Officials at many of the BMEUs and DMUs we visited said they check barcode readability by visual inspection, which they sometimes referred to as “eyeballing.” Many said they supplement visual inspections with such equipment as eyepieces, templates, and gauges. However, a cognizant official at Postal Service headquarters told us that such procedures are very time consuming. Available data suggest that significant losses may be occurring because of unreadable barcodes. Through fiscal year 1992, the Postal Service systematically reported some data on the amount of barcoded mail that could not be read by its automated barcode sorters. The last report produced, which covered fiscal year 1992, showed that 7.4 percent of barcoded mail sent to its sorters could not be read. 
In fiscal year 1992, the Service accepted 25.9 billion pieces of First-Class and third-class mailer-barcoded letter mail. If the rejection percentage for fiscal year 1992 were applied to those mail pieces, the Service could have lost revenue ranging from $30.4 to $74.1 million on lower-rate First-Class and third-class barcoded mail that could not be sorted on the Service’s sorters—depending on the method (mechanized or manual) used for processing the rejected mail pieces. During fiscal year 1994, the Service processed about 47.6 billion pieces of First-Class and third-class letter mail with mailer-applied barcodes, compared to 25.9 billion pieces just 2 years earlier—an 84-percent increase. The volume of all classes of barcoded mail processed by the Service had increased to about 70 billion by fiscal year 1995 and is expected to increase to more than 100 billion letters by fiscal year 1997 as the Postal Service offers greater incentives for barcoded mail under its mail classification reform initiative. Some of the key data needed by Postal Service management to assess the adequacy of controls and related risks do not exist. The current acceptance system does not produce information on (1) the extent to which improperly prepared mailings—including those with mailer-applied barcodes that do not meet the Postal Service’s standards—are entering the mailstream and the related revenue losses; and (2) the amount of rework required for the Postal Service to correct improperly prepared mailings that enter the mailstream. Postal managers told us they had no way of producing historical estimates of mailer errors and related revenue losses or the rework time associated with such errors. Additionally, our work showed that reports that were to be prepared by bulk mail acceptance units and summarized for management were not always prepared or were missing key data, such as verification results. Managers at Postal Service headquarters and two district offices questioned the usefulness of the reports because of concerns about the completeness and accuracy of the data they contain. Information required in verification and acceptance reports, if properly gathered and used, could provide management at each level some measure of the effectiveness of bulk mail acceptance controls. A key element of the control system put in place in 1990 was the requirement for a revised Bulk Mail Acceptance Report (Form 2867), which was to summarize the bulk mail acceptance and verification activities of BMEUs and DMUs. This report was designed to provide management at local, regional, and Postal Service headquarters levels with consolidated information that could be used to assess the adequacy of controls over the bulk business mail acceptance system and to monitor related risks. For example, at the Postal Service headquarters level, a “critical factors report” was to be prepared to assess whether required verifications were being done and whether staffing of acceptance units was adequate, and to provide other necessary management information. During our review, management officials at several levels said that the 1992 Postal Service reorganization significantly altered postal employees’ views about bulk mail acceptance. Some district managers said they did not use information from the reports for decisionmaking purposes because the data had become unreliable. 
An area office official said that after the reorganization, the Postal Service ceased to regard bulk mail acceptance reports as mandatory. He stated that Postal Service headquarters did not drop the reporting requirements; rather, it never told the newly created district offices where to send the reports. Another area official said that following the 1992 restructuring, Postal Service headquarters conveyed to area offices that it no longer wanted to receive reports on bulk mail acceptance. Some area offices told their district offices that bulk mail management reports were no longer required. Postal Service headquarters program managers said that the information derived from reports that were received was of so little value that at one time they had considered eliminating them altogether. When we asked each of the Postal Service’s 85 district offices to provide us with all acceptance reports (Forms 2867) for fiscal year 1994, we found that 7 did not prepare consolidated acceptance reports for their districts. When we compared the bulk business mail revenue and volume reported on the reports with Postal Service headquarters’ estimates of total bulk business mail revenue and volume, we found that the volume and revenue reported on the acceptance reports represented only about one-half the revenue and volume estimated by Postal Service headquarters. Management was also not receiving other required information that would allow it to assess the adequacy of staffing and training at mail acceptance units. This missing information was to have been provided each quarter to management in Quality Presort Verification reports, which mail acceptance supervisors are required to fill out for consolidation and use at each successive management level, including Postal Service headquarters. Although the Postal Inspection Service has long considered bulk business mail acceptance to be a high-risk activity and has reported on a number of control weaknesses, top postal management has not provided sustained attention to ensuring that adequate controls exist for accepting bulk business mail. Required information about bulk mail acceptance that was to help management oversee the program has not been received at Postal Service headquarters or some area offices since the 1992 Postal Service reorganization. In the November 1995 issue of the Postal Bulletin, which is widely distributed to the mailing public and within the Postal Service, the Postmaster General announced that preventing revenue loss is a top priority of the Postal Service. He stated that “no business [including the Postal Service] can afford to lose thousands of dollars in uncollected revenue daily and expect to remain fiscally viable for very long.” He announced that “efforts are under way to make improvements in mail acceptance and revenue collection areas.” The Postmaster General’s sentiments, especially as they apply to bulk mail acceptance, were repeated to us by numerous postal officials, including inspectors with first-hand knowledge of the weaknesses in the bulk mail acceptance system. At the completion of our review, postal management was taking a number of actions that have the potential to significantly improve bulk mail acceptance. Postal officials told us that in October 1995, they notified all area and district offices that completing Forms 2867 was mandatory and that the forms were to be completed and forwarded to the Rates and Classification Center in Northern Virginia for summarization. 
In turn, summary reports are to be forwarded to Postal Service headquarters for information purposes. After the reports are reviewed, irregularities are to be referred back to the areas responsible for oversight. However, officials stated in February 1996 that even with the renewed emphasis on the Forms 2867, compliance has been spotty. They noted, for example, that for accounting period 4 (December 9, 1995, to January 5, 1996), only 51 of 85 districts submitted Forms 2867 as required—fewer than the number we obtained for fiscal year 1994. The officials suspected that compliance had been incomplete because many area and district officials came into their jobs following the 1992 reorganization and did not know or understand the significance of bulk mail reporting. Postal Service headquarters had not explained the significance. Postal officials attributed some of the problems now occurring with bulk mail acceptance to outdated manuals. Officials told us they have been working on a new manual to replace the old bulk mail acceptance manuals—DM102 and DM108. As an interim measure, officials told us that they planned to issue, in March 1996, laminated cards for bulk mail acceptance clerks that would include instructions on the changes to bulk mail acceptance procedures that the Postal Service was ready to make immediately. Additionally, the Postal Service has recently tested, and plans to soon deploy, what it believes to be a better tool for verifying barcodes—the Automated Barcode Evaluator (ABE). According to postal officials, ABE will assist acceptance clerks in evaluating barcoded mail pieces and objectively determining whether the barcodes meet Postal Service technical standards designed to ensure that the mail piece can be sorted on the Postal Service’s automated processing equipment. In February 1996, Postal officials said they were in the process of purchasing about 260 ABEs for deployment to units that accept the most barcoded mail, and officials said they would later assess the need for additional ABEs. The Postal Service was also testing equipment, called Barcoding, Addressing, Readability Quality Utilizing Electronic Systems Technology (BARQUEST), to help its customer service representatives identify bad barcodes and work with mailers to increase and improve their barcoding. BARQUEST is used to read and electronically store images of mail pieces rejected by the Postal Service’s automated equipment at mail processing centers. It is also supposed to allow better monitoring of rejected mail and enable the Postal Service to know if mailers’ barcoding problems have been resolved. As of February 1996, the Service had deployed BARQUEST to 55 sites. It expects to deploy BARQUEST to 77 more sites by September 1996 and to 55 more sites during fiscal year 1997. Postal Service officials stated that to address the problem of failed mailings being resubmitted and accepted without correction, the Service is modifying bulk mail control system computer software to capture information, by mailer, on failed mailings. They stated this change should enable the Service to identify mailings that have failed verification and were not later identified as such when resubmitted—a situation Service officials believed would suggest that the mailer could have reentered the mail without correcting the errors. 
In acknowledging the need for information on the extent of losses associated with accepting improperly prepared mailings, the Postal Service said in May 1996 that it would conduct an investigative review to determine what methodologies might be applied in identifying such losses. We recognize there are a number of methodologies that the Postal Service could use to determine the extent of revenue losses. We do not know of any one particular methodology that would work best. However, we believe there are a number of possibilities that could be used, including (1) statistical sampling, (2) ad-hoc studies, (3) cooperative studies with the Inspection Service, (4) a systematic method for documenting and reporting mailings that failed to meet Postal Service standards, and (5) various combinations of these methods. Other acceptable methodologies may also exist. Nevertheless, regardless of the methodology the Postal Service employs, emphasis on identifying losses resulting from accepting barcoded mail that does not meet the Service’s standards for automation compatibility is particularly important because, with the rate reclassification initiative that becomes effective in July 1996, the vast majority of discounts granted are to be for barcoded mail. Furthermore, producing such information should not be a daunting task for the Postal Service since, until the 1992 reorganization, it routinely captured and reported the amount of barcoded mail that it was unable to process on its automated equipment. Also, in late 1994, the Chief Financial Officer/Senior Vice President of the Postal Service chartered a new revenue assurance organization and charged it with ensuring that all revenue due the Postal Service is collected. This organization is to take a leadership role in the coordination and development of effective internal controls over mail acceptance and revenue collection. The organization, which includes a Postal Service headquarters manager, 4 staff, and 1 field coordinator from each of the Postal Service’s 10 areas, was given $10 million to identify and recover $100 million in potentially uncollected revenue by the end of fiscal year 1996. While the Postal Service may be able to gain reasonable assurance that all revenue from bulk business mail is being received by modifying and more closely following the requirements in its current acceptance system, a better long-term solution may lie with the adoption of a risk-based targeting system. The Postal Service’s primary procedure for selecting bulk business mailings for in-depth verification is to randomly sample 1 in 10 of each mailer’s statements. This selection procedure for in-depth verification applies to every mailer and does not differentiate the risk associated with certain types of mailers or mailings and does not selectively target high-risk mailers or mailings for closer scrutiny. As discussed earlier, acceptance clerks often have not done the in-depth verifications called for by the Service’s random sampling plan. They often disregarded the sampling plan and entered mail directly into the mailstream without doing the required in-depth verification. Other federal agencies that collect revenue and require employees to selectively verify financial data, such as IRS and the U.S. Customs Service, have dealt with large workloads by developing more selective, risk-based sampling plans. IRS and Customs are more selective than the Postal Service in their sampling approaches. 
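One of the options listed above, statistical sampling, could work roughly as sketched below: draw a random sample of accepted mailings, re-verify each one in depth, record any postage shortfall found, and project the sample mean to the full population with a confidence interval. The sample size, shortfall amounts, and population count here are illustrative assumptions, not Postal Service data.

```python
import math
import random
import statistics

def estimate_systemwide_loss(shortfalls, population_size, z=1.96):
    """Project per-mailing postage shortfalls from a simple random sample to the
    population of accepted mailings, with an approximate 95% confidence interval."""
    n = len(shortfalls)
    mean = statistics.mean(shortfalls)
    se = statistics.stdev(shortfalls) / math.sqrt(n)   # standard error of the sample mean
    point = population_size * mean
    half_width = z * population_size * se
    return point, (point - half_width, point + half_width)

# Hypothetical example: 400 sampled mailings out of 16.2 million accepted in a year,
# with about 1 in 10 showing a shortfall of $50 to $2,000.
random.seed(1)
sample = [0.0 if random.random() < 0.9 else random.uniform(50, 2000) for _ in range(400)]
point, (low, high) = estimate_systemwide_loss(sample, population_size=16_200_000)
print(f"Estimated annual loss: ${point:,.0f} (95% CI ${low:,.0f} to ${high:,.0f})")
```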
Both IRS and Customs place more emphasis on auditing those returns and inspecting imports that offer the highest potential for yielding the most significant results. IRS officials told us that prior to the early 1960s, income tax returns were chosen for audit through a costly process that relied on the agency’s most experienced revenue agents to manually “eyeball” returns to ensure taxpayers paid the correct amount of tax. Later, IRS refined this process by computerizing criteria used in the manual process. In the late 1960s, IRS began developing the system currently in use—discriminant function analysis (DIF). This multivariate statistical selection technique allows IRS to differentiate among tax returns on the basis of each return’s probability of containing errors. Instead of using a system that selects randomly from the entire universe, as the Postal Service does, IRS uses DIF to screen all individual income tax returns received annually and identify those more likely to result in a tax change. According to IRS, its system decreases the number of returns audited that produce no tax changes and reduces the amount of IRS staff and computer time needed to screen returns. IRS believes that the DIF system has significantly increased its efficiency by allowing it to concentrate its limited audit resources on those tax returns with a high probability of error, thereby helping ensure that taxpayers who might otherwise underpay, in fact, pay their fair share. Further, IRS does not have to inconvenience as many taxpayers with audits that produce no change in the tax due, which is a benefit that the Postal Service might also achieve because in-depth verifications can inconvenience mailers. Like the Postal Service and IRS, the U.S. Customs Service must balance the requirements of its mission with the expectation that enforcement will not disrupt the normal flow of business. Customs must determine whether goods entering the United States are properly classified and correctly valued. From 1842 to the early 1980s, Customs’ policy for enforcing import laws was to examine a portion of all cargo shipments, although most of those examinations were cursory. Recognizing in the early 1980s that it had to contend with increasing levels of imports, numerous demands, and limited resources, Customs shifted its trade enforcement efforts from a strategy of checking all imports to one of selecting and inspecting only high-risk imports. Customs said that it is continuing to refine and improve this system to meet present-day challenges. The Customs system focuses on compliance measurement, enhanced targeting, and trend analysis. According to the Customs Service, fiscal year 1995 marked the first year that Customs implemented a national compliance measurement program. According to Customs, it now has a compliance baseline across a multitude of importing areas, such as industry, importer, consignee, and country. Using this data, Customs said that it is targeting its fiscal year 1996 trade enforcement efforts toward the most important areas of noncompliance. Customs also is randomly selecting shipments to examine in order to monitor compliance rates and adjust its targeting of high-risk areas, as necessary. As a consequence, Customs said that it expects to increase its targeting efficiency, which will result in more productive use of its resources, and to reduce attention to areas of high compliance, thereby facilitating the flow of imports into the United States. 
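In the Postal Service's context, a selection model of the general kind IRS and Customs use might be sketched as follows: score each mailing on a few risk factors, verify the highest-scoring mailings in depth, and keep a small random sample to measure overall compliance. The factors, weights, and example mailings below are hypothetical illustrations, not the IRS discriminant function or any actual Postal Service model.

```python
import random
from dataclasses import dataclass

@dataclass
class Mailing:
    mailer_id: str
    postage_value: float      # dollars of postage at the claimed discount rate
    discount_claimed: float   # dollars of presort/barcode discount claimed
    past_error_rate: float    # share of this mailer's recent mailings that failed verification
    first_time_mailer: bool

def risk_score(m: Mailing) -> float:
    """Hypothetical linear risk score; higher means more worth verifying in depth."""
    return (0.5 * m.past_error_rate
            + 0.3 * min(m.discount_claimed / 10_000, 1.0)   # cap the dollar-value effect
            + 0.2 * (1.0 if m.first_time_mailer else 0.0))

def select_for_in_depth(mailings, targeted=50, random_share=0.02, seed=0):
    """Target the riskiest mailings, plus a small random sample to monitor compliance."""
    ranked = sorted(mailings, key=risk_score, reverse=True)[:targeted]
    rng = random.Random(seed)
    monitor = [m for m in mailings if rng.random() < random_share]
    return ranked, monitor

mailings = [
    Mailing("A", 12_000, 3_000, 0.20, False),
    Mailing("B", 800, 150, 0.00, True),
    Mailing("C", 40_000, 9_500, 0.05, False),
]
targets, monitor = select_for_in_depth(mailings, targeted=2)
print([m.mailer_id for m in targets])   # the two highest-risk mailings
```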
In March 1994, the Postal Service awarded a contract to a university professor to study the feasibility of using a risk assessment approach to sampling bulk mailer statements. The professor was to determine whether the Postal Service could identify and quantify factors that could be used to select mailings or types of mailings on the basis of the relative risk of mail preparation errors. Additionally, the contract called for the professor to explore other means of improving verification procedures; the study is scheduled to be completed in July 1996. Postal Service officials also stated that, as part of a benchmarking effort, they had contacted IRS and Customs in late 1995 regarding their methodology for targeting cases for audit/inspection. In February 1996, postal officials told us they expect to put in place by December 1996 a completely redesigned bulk mail acceptance system that incorporates a risk-based targeting system. In their comments on this report, they also said they plan to do a staffing requirements analysis as soon as design decisions are made on the new acceptance system. Additionally, they said they plan to issue a new bulk mail acceptance manual when the new acceptance system is put in place. In fiscal year 1994, the Postal Service derived 48 percent ($23 billion) of its total mail revenue from bulk business mail. Yet, weaknesses in the Postal Service’s controls for accepting bulk business mail prevent it from having reasonable assurance that all significant amounts of postage revenue due are received when mailers claim presort/barcode discounts. Postal Service headquarters recognized in the late 1980s that it needed to manage its bulk mail acceptance system more effectively and took steps to do so in 1989 and 1990. However, according to officials we spoke with, the system deteriorated after the 1992 reorganization. With an estimated $8 billion in discounts allowed in fiscal year 1994, and larger amounts expected as the Postal Service reclassifies its postage rates and moves closer to full automation in 1997, sustained top-level management attention is needed to establish and maintain adequate controls over bulk business mail acceptance. This attention can help ensure that required verifications of bulk mailings, including barcodes, are done and that any errors noted are corrected before bulk mail is accepted into the U.S. mail system. Recently, the Postal Service launched a number of initiatives to improve the bulk business mail acceptance system. It is too early to know whether these initiatives will eventually correct the internal control problems detailed in this report. However, because they do address many of the problems, we believe that if they are implemented as planned and monitored appropriately, the initiatives can improve bulk mail acceptance operations. Because it is too early for us or the Postal Service to reasonably predict the outcome of its many initiatives to improve bulk mail acceptance, we are making several recommendations. We recognize that the Service’s initiatives offer the promise of correcting many of the concerns raised in this report. However, we believe recommendations are warranted as a means of fostering sustained management attention until the bulk mail acceptance system is operating effectively and providing the Postal Service with reasonable assurance that all significant amounts of bulk mail revenues are being collected. 
Specifically, we recommend that the Postmaster General direct bulk mail acceptance program supervisors and managers to periodically report to appropriate Service levels on operation of the bulk mail acceptance system, initiatives, and the progress and effectiveness of related improvements, so that management can be reasonably assured that required mail verifications, including supervisory reviews, are done and that the results are documented as required; mailings resubmitted following a failed verification are reverified; acceptance clerks and supervisors are provided with adequate, up-to-date procedures, training, and tools necessary to make efficient and objective verification determinations; information on the extent and results of verifications, including supervisory reviews, is regularly reported to appropriate levels, including Postal Service headquarters, and that such information is used regularly to assess the adequacy of controls and staffing, training needs, and acceptance procedures; and risk becomes the prominent factor in determining mailings to be verified. Also, we recommend that the Postmaster General direct bulk mail acceptance program managers to develop methodologies that can be used to determine systemwide losses associated with accepting improperly prepared mailings. In its written comments on a draft of this report, the Service acknowledged that many long-standing problems exist with bulk mail acceptance, and it expressed confidence that the initiatives it has under way, which were cited in our report, will remedy acceptance weaknesses in the bulk mail program and address the report’s recommendations. The Postal Service said that almost all of the remedies will be in place later this year or in early 1997. The Service’s written comments are included as appendix III. Only after sufficient time has elapsed can we or the Postal Service tell if these initiatives will correct the problems. The initiatives cited by the Service appear to offer promise, but they can easily falter unless there is strong and continuing top-down commitment to improving bulk mail acceptance. In commenting on a draft of this report, the Postal Service said it is putting increased emphasis on management oversight of the bulk mail acceptance function at all levels of the organization. As arranged with the Subcommittee, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will distribute copies of the report to the Postmaster General and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix IV. If you have any questions about the report, please call me on (202) 512-8387. Of the 177 billion pieces of mail processed in 1994 by the Postal Service, over 118 billion pieces, or 67 percent, were categorized as bulk mail. This mail typically arrived at Postal Service mail entry units in sacks, in trays, or on pallets and was mostly business-generated. In fiscal year 1994, the Service recorded revenue from bulk business mail of $23.1 billion—48.4 percent of its total mail revenue. In 1976, the Postal Service began offering postage discounts to mailers who presorted their mail, and in 1988 it began offering discounts for barcoding. The presort and barcode discounts are to compensate mailers for performing work that otherwise would have to be done by the Postal Service. 
The amount of discount depends on the depth of work performed by the mailer, e.g., barcoded mail sorted in delivery point sequence receives a larger discount than nonbarcoded mail sorted to a 3-digit ZIP Code level. Over the years, the total dollar value of business mailer discounts for presorting and barcoding has grown, and is expected to continue growing, as the Postal Service moves closer to achieving its goal of having about 90 percent of all letter mail barcoded by the end of 1997. The Postal Service estimates that by 1997, 14,000 pieces of automated equipment costing about $5 billion will have been deployed to sort the mail. In fiscal year 1994, about 34 percent of all First-Class mail was discounted, and 92 percent of all third-class mail was discounted. According to Postal Service studies, the value of these discounts totaled about $8 billion. One of the Postal Service’s major long-term strategies is to fully automate mail processing by barcoding almost all letter mail and processing it on automated barcode sorting equipment. Processing letters using automation is more cost-effective than mechanized or manual processing. According to the Service, the comparative costs of processing letters are $3 per thousand using automation, $19 using mechanized letter-sorting machines, and $42 when done manually. Thus, if the Service receives a barcoded letter that must be sorted by mechanized or manual methods, its processing cost will be about 6 or 14 times the automated cost. Under a mail reclassification initiative, in which the Postal Rate Commission recommended in January 1996 new postage rates for certain mail, the discount for automation-compatible mail will increase and the discount for presort-only will decrease. For example, as recommended by the Commission, the discount for a First-Class barcoded letter sorted to a 5-digit ZIP Code level will increase from 6.2 to 8.2 cents, and the discount for a presorted-only letter will decrease from 4.6 to 2.5 cents. Similarly, the discount for a third-class barcoded letter sorted to a 5-digit ZIP Code level will increase from 15.4 to 16.5 cents, and the discount for a presorted-only letter will decrease from 13.2 to 11.1 cents. The Postal Service expects that adoption of this change, most of which was approved by the Board of Governors and will become effective July 1996, will increase the First-Class and third-class barcoded mail volumes by 7 and 12 percent, respectively. Most bulk business mail is entered at Business Mail Entry Units (BMEUs) and Detached Mail Units (DMUs), located throughout the Postal Service’s 85 districts. DMUs are postal acceptance units located at mailers’ mail preparation facilities. BMEUs are often located in or adjacent to large mail processing plants, which are postal facilities that process mail for distribution to both local and national destinations. Bulk mail is also entered at many of the 40,000 post offices located throughout the country. BMEUs typically include a parking/staging area for large trucks and other vehicles that transport mail from mailers to the BMEU facility. They also include a dock for unloading the mail; an area where acceptance clerks can inspect the mail; and a counter area where paperwork, such as mailing statements, can be examined and other business transactions can be completed. Once the mail has been accepted by a BMEU mail acceptance clerk, it moves inside the plant for processing. 
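The processing-cost figures above also make it possible to roughly reconstruct the fiscal year 1992 loss range cited earlier in this report ($30.4 to $74.1 million). The sketch below multiplies the rejected share of mailer-barcoded letters by the extra cost of mechanized or manual handling; it is an approximation under these rounded assumptions, and the published range presumably rests on more precise inputs.

```python
# Rough reconstruction of the fiscal year 1992 figures, using the rounded
# per-thousand processing costs cited above ($3 automated, $19 mechanized, $42 manual).
barcoded_pieces_fy92 = 25.9e9        # First-Class and third-class mailer-barcoded letters
reject_rate = 0.074                  # share that could not be read by barcode sorters
cost_auto, cost_mech, cost_manual = 3 / 1000, 19 / 1000, 42 / 1000   # dollars per piece

rejected = barcoded_pieces_fy92 * reject_rate
extra_if_mechanized = rejected * (cost_mech - cost_auto)   # roughly $31 million
extra_if_manual = rejected * (cost_manual - cost_auto)     # roughly $75 million

print(f"Cost multiple vs. automation: mechanized {cost_mech/cost_auto:.0f}x, manual {cost_manual/cost_auto:.0f}x")
print(f"Estimated extra processing cost: ${extra_if_mechanized/1e6:.0f}M to ${extra_if_manual/1e6:.0f}M")
```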
The Postal Service’s mail acceptance clerks are the gatekeepers for accepting bulk business mail into the mailstream. It is their job to ensure, before mail enters the Postal Service’s processing and distribution facilities, that mailers have prepared their mail in accordance with postal requirements and that discounts given for presorting and barcoding have, in fact, been earned. This is a difficult task given the time constraints and the wide variation in the way bulk business mail can be prepared and still meet Postal Service standards. If mail barcoded by a mailer is accepted by acceptance clerks but later fails to run on postal barcode sorters, then the Postal Service incurs additional costs. This is because the Postal Service must rework the mail at its own expense even though it gave the mailer the barcoded rate to perform that work. Additional processing costs incurred by the Postal Service are ultimately reflected in higher postage rates, unfairly penalizing those mailers who properly prepare their bulk business mailings. Verifications performed by mail acceptance clerks fall into two categories: (1) cursory reviews of all mailings, and (2) in-depth verifications of randomly selected mailings. In performing a cursory review, acceptance clerks are to randomly check some sacks, trays, or pallets to verify that (1) the mail is prepared as stated on the mailer’s mailing statement, (2) the number of mail pieces indicated on the mailing statement is accurate, and (3) the mailer applied the appropriate postage rates. In-depth reviews are to be performed on at least 1 in every 10 mailings submitted by each mailer. The mailing chosen for an in-depth review is to be selected at random, and, in most cases, three sacks, trays, or pallets are to be rigorously inspected to ensure that the mail was prepared correctly and that all discount qualifications were met. A mailing may fail verification for a number of reasons. For example: Mail pieces do not meet minimum or maximum size standards. Addresses are not in the Optical Character Reader’s scan area. Fonts cannot be read by the Postal Service’s automated equipment. Barcodes do not meet technical specifications. The contrast between paper and ink is insufficient. Fewer than three lines are used for the address block. The spacing between city, state, and ZIP Code is improper. The barcode/address can shift out of the viewing area in window envelopes. Presort mail is not labeled correctly. When verifying mailings, if the acceptance clerk determines that more than 5 percent of a mailing is not prepared correctly, then the mailing is failed. The mailer then has two options: (1) rework the mail so that it meets postal specifications and qualifies for the bulk postage rate applied for, or (2) pay the additional single-piece postage rate for that percentage of the entire mailing estimated to be in error. For every in-depth inspection completed, mail acceptance clerks are required to prepare a written verification report (Form 2866). This report is used to (1) document the results of the verification, (2) notify mailers of the types of errors found, and (3) aid supervisors in performing quality presort verifications (QPV). A QPV entails a supervisor rechecking an in-depth verification performed by a clerk. Postal facilities that receive 100 or more bulk mailings during a 4-week accounting period are to prepare a consolidated bulk mail acceptance report (Form 2867) documenting the results of their in-depth verifications. 
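The verification decision rule described above can be expressed compactly. In the sketch below, the 5-percent threshold follows the text, but the piece counts and postage rates are hypothetical, and the additional-postage calculation is one plausible reading of option (2), not an official Postal Service formula.

```python
def verification_outcome(pieces_sampled, pieces_in_error, total_pieces,
                         claimed_rate, single_piece_rate, threshold=0.05):
    """Apply the 5-percent rule and, if the mailing fails, estimate the
    additional postage that would be owed under option (2)."""
    error_rate = pieces_in_error / pieces_sampled
    if error_rate <= threshold:
        return {"passed": True, "additional_postage": 0.0}
    pieces_estimated_in_error = error_rate * total_pieces
    additional = pieces_estimated_in_error * (single_piece_rate - claimed_rate)
    return {"passed": False, "error_rate": error_rate, "additional_postage": round(additional, 2)}

# Hypothetical example: 12 of 150 sampled pieces are in error in a 6,900-piece mailing
# claimed at a 23.8-cent bulk rate versus a 32-cent single-piece rate.
print(verification_outcome(150, 12, 6_900, claimed_rate=0.238, single_piece_rate=0.32))
```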
At Postal Service headquarters, management responsibility for the bulk business mail program resides with the Vice President of Marketing Systems, who reports to the Chief Marketing Officer and Senior Vice President. Area Vice Presidents and district managers are responsible for ensuring that bulk mail acceptance activities conform to prescribed standards within their geographic span of control. During the late 1980s and early 1990s, the Postal Inspection Service reported to postal management, on several occasions, that existing bulk mail acceptance controls were inadequate for preventing revenue losses. In 1986, following a national audit of the Postal Service’s revenue protection program, the Inspection Service reported that procedures for mail acceptance, verification, and classification were not being effectively administered. It noted that few of the employees it interviewed felt that revenue protection was part of their job and that this lack of awareness and commitment was resulting in millions of dollars in postage not being collected. In November 1991, following a national operational audit of the bulk mail acceptance system, the Postal Inspection Service observed that bulk mailings posed a serious risk to Postal Service revenue. It concluded that Postal Service organizational changes in 1986 and 1990 had adversely affected the management oversight necessary to ensure that bulk mail acceptance programs operated as intended. The Inspection Service also concluded that this condition had increased the risk of revenue loss through noncollection of postage and unnecessary mail processing costs due to acceptance of improperly prepared bulk mailings. The Inspection Service found that internal controls at plant load operations had been allowed to deteriorate and become unreliable. It stated that this exposed the Postal Service to serious risk by allowing situations to exist where large mailings could enter the mailstream without payment of postage. In early 1993, the Inspection Service conducted a nationwide review of the Plant Verified Drop Shipment Program. The Inspection Service reported that internal controls were not effectively or consistently applied and that there was a significant risk that mail could be entered into the mailstream without payment of postage and that mailers could claim unearned discounts. Although losses were not the primary focus of its audits, the Inspection Service did document and report to management some losses during this period. For example, in fiscal year 1994, the Inspection Service documented losses totaling about $8 million. These losses, however, should not be considered all-inclusive because they were not identified in any systematic manner. Rather, they were identified as the Inspection Service was following through on customer complaints, anonymous tips, management requests, leads developed during financial audits, and leads provided by other sources. The losses resulted from mailers not paying full postage for reasons varying from understating the number of pieces being mailed to manipulating the computer software used for generating mailing statements so that the mailing statements misrepresented, in the mailer’s favor, the make-up of the mailing. 
In 1993, to gain a better understanding of the magnitude of the losses resulting from mailer preparation errors, the Inspection Service established a task force that is taking a more systematic long-term approach to identifying fraudulent mailings that have resulted in revenue losses in the bulk business mail program. According to Inspection Service officials, this approach is being taken in order to demonstrate to postal management the need to improve controls over bulk mail acceptance. Additionally, as of May 1996, the Inspection Service was conducting a National Coordination Audit on the topic of bulk business mail. The objectives of the audit are to (1) conduct a corporate-level review and evaluation of the alignment of the goals and objectives of bulk business mail acceptance with the CustomerPerfect! initiatives, and (2) provide an economic value added assessment of bulk business mail in relation to the corporate goals of the Postal Service. According to the Inspection Service, this audit will include and address the following issues: inconsistencies among acceptance units, balancing the goals of unit operations and improving customer service, conflicts between dispatch and delivery times with customer satisfaction, inability to do a “good job” due to time pressures, adequacy of training, understanding of national instructions at the local level, and identification of new initiatives affecting bulk business mail.
James S. Crigler, Evaluator-in-Charge; Robert W. Stewart, Evaluator
Pursuant to a congressional request, GAO reviewed the U.S. Postal Service's (USPS) controls over postage paid on presorted and barcoded mail, focusing on whether USPS controls ensure that mailer-claimed discounts are earned. GAO found that during fiscal year 1994: (1) 40 percent of required bulk mail verifications were not performed and postal supervisors did less than 50 percent of required follow-up verifications; (2) rejected mailings were resubmitted and accepted into the mail stream without proper corrections or postage; (3) mail acceptance clerks were not given adequate tools to determine whether increasing volumes of mailer-applied barcodes met USPS standards; and (4) postal management was unable to make informed decisions concerning the adequacy of bulk mail acceptance controls or determine the amount of revenue lost through improperly prepared mailings. GAO also found that: (1) USPS needs to determine the management strategy and financial investment necessary to minimize revenue loss; (2) the random method of selecting bulk business mailings for in-depth verification may not result in the best use of available staff; (3) USPS could better target its verification efforts based on risk by considering such factors as mailer histories and the postage value of mailings; (4) postal managers are developing a bulk business mail acceptance system, updating acceptance handbooks, acquiring barcoding verification equipment, and requesting field units to submit verification reports; and (5) USPS is also exploring a new risk-based approach for in-depth verification, improving revenue controls, and planning to install its new bulk mail system by 1997.
OJP, the grant-making arm of DOJ, provides grants to various organizations, including state and local governments, universities, and private foundations, that are intended to develop the nation’s capacity to prevent and control crime, administer justice, and assist crime victims. OJP’s Assistant Attorney General is responsible for the overall management and oversight of OJP, which includes setting policy, and for ensuring that OJP policies and programs reflect the priorities of the President, the Attorney General, and the Congress. The Assistant Attorney General promotes coordination among the various bureaus and offices within OJP, including BJA, one of the five bureaus within OJP, and VAWO, one of OJP’s seven program offices. In fulfilling its mission, BJA provides grants for site-based programs and for training and technical assistance to combat violent and drug-related crime and help improve the criminal justice system. VAWO administers grants to help prevent, detect, and stop violence against women, including domestic violence, sexual assault, and stalking. Since 1996, OJP’s budget has increased substantially, following the passage of the Violent Crime Control and Law Enforcement Act of 1994. Figure 1 shows changes to OJP’s budget from fiscal year 1990 through fiscal year 2000 and compares those changes with BJA’s budget over the same period and with VAWO’s budget since its inception in 1995. One of BJA’s major grant programs is the Edward Byrne Memorial State and Local Law Enforcement Assistance Program. Under the Byrne Discretionary Grants Program, BJA provides federal financial assistance to grantees for educational and training programs for criminal justice personnel; technical assistance to state and local units of government; and projects that are replicable in more than one jurisdiction nationwide. In fiscal year 2000, BJA awarded 99 Byrne discretionary grants worth about $69 million. VAWO was created in 1995 to carry out certain programs established under the Violence Against Women Act of 1994. The Victims of Trafficking and Violence Prevention Act of 2000 reauthorized most of the existing VAWO programs and added new programs as well. VAWO’s mission is to lead the national effort to end violence against women, including domestic violence, sexual assault, and stalking. VAWO programs seek to improve criminal justice system responses to these crimes by providing support for law enforcement, prosecution, courts, and victim advocacy programs across the country. In addition, programs are to enhance direct services for victims, including victim advocacy, emergency shelter, and legal services. VAWO also addresses violence against women issues internationally, including working to prevent trafficking in persons. In fiscal year 2000, VAWO awarded 425 discretionary grants worth about $125 million. Appendix I discusses the growth in OJP, BJA, and VAWO budgets and provides information on the number and amount of BJA and VAWO discretionary grants awarded from fiscal year 1990 to fiscal year 2000. To meet our objectives, we conducted our work at OJP, BJA, and VAWO headquarters in Washington, D.C. We reviewed applicable laws and regulations and OJP, BJA, and VAWO policies and procedures for awarding and managing grants, and we interviewed responsible OJP, BJA, and VAWO officials, including grant managers. As agreed with your offices, we focused on monitoring activities associated with the Byrne and VAWO discretionary grant programs. 
In particular, we focused on grant monitoring for grants that were active during fiscal years 1999 and 2000 and supported a program or theme, rather than technical assistance or training efforts. To address our first objective, concerning OJP’s process and requirements for discretionary grant monitoring, we reviewed applicable laws and regulations and OJP policies and procedures for grant administration and grant monitoring. We also interviewed OJP, Comptroller, BJA, and VAWO staff. We obtained information about the Comptroller’s Control Desk, which maintains the official grant files and is responsible for receiving, distributing, and tracking grant documents, including financial and progress reports. To address our second objective, regarding the extent to which BJA and VAWO documented their monitoring activities for discretionary grants, we reviewed representative samples of official grant files and grant manager files using a data collection instrument to record whether evidence of the required monitoring activity—progress reports, financial reports, and other required documents—was included in the files. For each grant, we also reviewed the most recent award covering 12 months to examine the documentation of specific monitoring requirements and activities, such as telephone calls and site visits. Specifically, we reviewed a random sample of 46 of 110 Byrne and 84 of 635 VAWO discretionary grants that had a program theme and were active throughout fiscal year 1999 or 2000. The results of the samples are representative of the populations from which they were drawn. We express our confidence in the precision of our sample results as a 95-percent confidence interval. Unless otherwise noted, all confidence intervals are less than or equal to plus or minus 10 percentage points. In regard to grantee financial and progress reports, we reviewed all available reports for the BJA and VAWO discretionary grants included in our sample from the initiation of the grant through December 31, 2000. We determined timeliness by comparing the dates recorded on the reports with the dates by which they were supposed to be received. Also, to determine whether BJA and VAWO closeout procedures were implemented in accordance with OJP policy, we reviewed those grant files in our sample for which the grant end date was between September 30, 1999, and August 31, 2000. For our review, we focused on required closeout documentation, such as precloseout contacts, closeout checklists, and final financial and progress reports. To address our third objective, regarding how BJA and VAWO determine compliance with OJP monitoring requirements, we requested information on any existing oversight and review processes relating to grant monitoring at BJA and VAWO and gathered and reviewed documentation that BJA and VAWO officials provided concerning their oversight processes. We also met with BJA and VAWO officials to obtain information on any new initiatives they had to address the oversight and management of their grants. To meet our fourth objective, on OJP’s efforts to identify and address grant management problems, we met with OJP officials to discuss their efforts, and we reviewed reports and documents they had prepared about the new Grant Management System (GMS), revisions to the OJP Handbook and associated decision documents, and other initiatives. 
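As a note on the precision figures cited above, the plus or minus 10 percentage point bound can be roughly reproduced from the sample and population sizes alone. The following sketch is illustrative only and is not part of our documented methodology; it assumes simple random sampling without replacement, a worst-case proportion of 50 percent, and a normal approximation with a finite-population correction.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Approximate 95-percent margin of error for an estimated proportion
    from a simple random sample of n drawn from a population of N."""
    fpc = math.sqrt((N - n) / (N - 1))        # finite-population correction
    se = math.sqrt(p * (1 - p) / n) * fpc     # standard error of the sample proportion
    return z * se

# Sample sizes reported above: 46 of 110 Byrne grants, 84 of 635 VAWO grants.
for label, n, N in [("Byrne", 46, 110), ("VAWO", 84, 635)]:
    print(f"{label}: about +/- {margin_of_error(n, N) * 100:.0f} percentage points")
# Byrne: about +/- 11 percentage points
# VAWO: about +/- 10 percentage points
```

Because these are worst-case figures (computed at a proportion of 50 percent), intervals around the specific estimates reported later in this letter are generally at or below the plus or minus 10 percentage point bound.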
In addition, we reviewed the DOJ Fiscal Year 2000 Performance Report and Fiscal Year 2002 Performance Plan and information on grant management developed by the DOJ’s Office of the Inspector General. Finally, to obtain information on the size and growth of BJA and VAWO grant programs within the context of OJP, we obtained and analyzed budget and resource data from OJP on grant funds and programs from fiscal years 1990 through 2000. We conducted our work between October 2000 and October 2001 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from DOJ on October 24, 2001. Its comments are discussed near the end of this letter and are reprinted as appendix III. DOJ also provided technical comments that were incorporated in the report. After BJA and VAWO award discretionary grants, OJP policies require them, in coordination with the Office of the Comptroller, to monitor grants and related activities and document the monitoring results. Monitoring is done to ensure compliance with relevant statutes, regulations, policies, and guidelines; responsible oversight of awarded funds; implementation of approved programs, goals, objectives, tasks, products, time lines, and schedules; identification of issues and problems that may impede grant implementation; and implementation of adjustments by the grantee as approved by BJA or VAWO. The OJP Handbook is the basic reference for OJP policies and procedures for the administration of OJP grants, including discretionary grant monitoring. According to the Handbook, each grant manager is to prepare a monitoring plan as part of a grant manager’s memorandum recommending initial or continuation funding. The level of monitoring is to be based on the stated monitoring plan in the grant manager’s memorandum. The plan is to contain information on who will conduct the monitoring, how it will be done, and when and what type of monitoring activities are planned. Monitoring information is to be collected using such techniques as on-site visits, telephone calls, and desk reviews, which are reviews to ensure that the grant files are complete and the grantee is in compliance with the program guidelines. In addition, grant managers are to review grantee program and financial progress reports. According to OJP’s Handbook, grantees are required to submit periodic progress reports that summarize project activities to aid program and grant managers in carrying out their responsibilities for grant-supported activities. Likewise, grantees are required to submit periodic financial status reports to update OJP on how grant funds are being spent. In addition, OJP requires that grant managers close out the grant when the project period ends to ensure that the agency has received all required financial, programmatic, and audit reports and that all federal funds have been accounted for. OJP bureaus and program offices, such as BJA and VAWO, are to carry out the program (nonfiscal) monitoring aspects of the grants they award. During fiscal year 2000, BJA had approximately 20 program managers responsible for the monitoring activities for most nonformula BJA grants. Generally, each BJA program manager had responsibility for monitoring 25 to 40 discretionary grants, including those under the Byrne program. VAWO had 13 program managers, each of whom had responsibility for the program monitoring aspects of about 60 VAWO grants, including VAWO discretionary grants. 
The Office of the Comptroller has primary responsibility for monitoring the fiscal aspects of all OJP-awarded grants, including those awarded by BJA and VAWO. To assess grantee financial records, the Comptroller’s Monitoring Division is to perform risk-based, on-site financial reviews for a sample of grantee organizations to monitor administrative and financial capability. The Monitoring Division is to review various program and financial documents contained in the official grant files to ensure that the files are complete and the documents are properly executed. For discretionary grants, the Monitoring Division conducted 36 desk reviews for Byrne and 38 for VAWO discretionary grants in fiscal year 1999 and 6 desk reviews for Byrne and 16 for VAWO discretionary grants in fiscal year 2000. In addition, the Control Desk is to maintain the official grant files and is responsible for tracking the receipt of all grantee documents. The Control Desk is to receive grantee progress and financial reports, log the date of receipt into a tracking system, file the original in the official grant file, and forward a copy to the cognizant program office. The Control Desk also is to generate a monthly report on delinquent progress and financial reports, which is distributed to responsible officials within bureaus and program offices. BJA and VAWO grant managers are responsible for ensuring that grantees submit timely progress and financial reports and are to contact the grantee if reports are delinquent. Appendix II discusses grant monitoring within the context of the grant award process throughout the life of a grant. OJP requires that grant managers document their monitoring of the grants. This documentation is to include both the grant manager’s written plan for monitoring the project and grantee progress and financial reports. In addition, OJP requires certain documentation at the time of grant closeouts from both the grantee and the grant manager. Byrne and VAWO grant files did not always contain monitoring plans, and when plans were present, grant managers were not consistently documenting the monitoring activities those plans called for. In fact, for those award files representing the most recent 12-month period of activity for each grant, few contained records to show that such activities as telephone contacts and site visits occurred. Furthermore, the progress and financial reports did not always cover the entire period covered by the award, a few grants were missing all progress reports, and progress and financial reports were often late. For those closed grants that we reviewed, key documents, which are to ensure a final accounting of federal funds and show whether the grantee met the programmatic goals of the grant, were sometimes missing. Our review of grant files showed that monitoring plans were not always in the files. Each grant may contain one or more individual grant awards, and for each award, OJP requires that grant managers prepare a monitoring plan containing information on, among other things, who will conduct the monitoring, how it will be done, and when and what type of monitoring activities and reports are planned. Our review of documentation on the awards in 46 Byrne grant files and 84 VAWO grant files showed that an estimated 29 percent of the Byrne awards and about 11 percent of the VAWO awards did not contain a grant manager’s monitoring plan. 
We also compared the planned monitoring activities in the monitoring plans for the most recent 12-month period of grant activity for each of the 46 Byrne and 84 VAWO grant files to actual monitoring documentation. Of those files that contained a monitoring plan, some had specific plans for monitoring, while others did not. When the file contained a plan that outlined the type of monitoring and its frequency, our review showed that the documentation in the files was inadequate to demonstrate whether or not BJA and VAWO grant managers were consistently following the monitoring plans. For Byrne files, there were not enough monitoring plans with specific planned activities to ensure that our assessment was representative of all the awards. However, our limited comparisons of planned to actual documentation regarding phone contacts, site visits, and desk reviews revealed the following: Phone Contacts. 13 of 46 Byrne files contained monitoring plans that specified the planned number and frequency (e.g., monthly or quarterly) of telephone contacts to be made. Of those 13 files, none contained documentation to show that all of the planned number of telephone contacts occurred. Furthermore, 4 of the 13 files had documentation that showed that some, but not all, of the planned telephone contacts had been made, while the remaining 9 files contained no documentation that any telephone contacts occurred. Site Visits. 25 of 46 Byrne files contained monitoring plans that specified the number of grantee site visits to be conducted, but only 4 of 25 files contained documentation that showed that the planned number of site visits occurred. Only 1 of the 25 files had documentation that some, but not all, of the site visits had been made, and the remaining 20 of the 25 files contained no documentation that any site visits occurred. Desk Reviews. 15 of 46 Byrne files contained monitoring plans that specified the frequency of desk reviews to be conducted. However, none of the files showed evidence that any desk reviews were conducted. Our assessment of the VAWO files provided enough cases to develop a representative sample of all available VAWO monitoring plans. Specifically, our review of planned to actual monitoring activities—phone contacts, site visits, and desk reviews—showed the following: Phone Contacts. 59 of 84 VAWO files contained monitoring plans that specified the planned number and frequency (e.g., monthly or quarterly) of telephone contacts to be made. However, none contained documentation to show that the planned number of telephone contacts occurred. Furthermore, we estimate that 56 percent of VAWO files had documentation that some, but not all, of the planned telephone contacts had been made, and 44 percent of VAWO files with monitoring plans containing criteria for a specific number of telephone contacts had no documentation that any telephone contacts occurred. Site Visits. 53 of 84 VAWO files contained monitoring plans that specified the number of grantee site visits to be conducted. We estimate that 10 percent of VAWO files contained documentation that the planned number of site visits occurred. Furthermore, only 2 percent of VAWO files had documentation that even some, but not all, of the planned site visits had been made, while the remaining 88 percent of VAWO files contained no documentation that any site visits occurred. Desk Reviews. 47 of 84 VAWO files contained monitoring plans that established specific criteria for the frequency of desk reviews. 
However, none of the VAWO files showed evidence that any desk reviews were conducted. OJP requires that grantees file semiannual progress reports and quarterly financial reports throughout the life of a grant. Progress reports are to supply information on the activities and accomplishments of the grantee during the previous reporting period. Financial reports are to show the actual expenditures and unliquidated obligations for the reporting period (calendar quarter) and cumulative for the award. We found Byrne and VAWO grant files in which progress reports and financial reports did not cover the entire period of the grant and a few files with no evidence of progress reports. We also found that progress and financial reports were often late. Combined, these factors resulted in unaccounted periods of time, where OJP had either no information or no up-to-date information about grantee progress or financial activities. For example, we compared the time periods progress and financial reports were supposed to cover with the time periods they actually covered for each of the Byrne and VAWO grant files we reviewed. Based on our analysis of the grant files, we estimate that 70 percent of the Byrne grants had periods of time during the grant period not covered by progress reports, and 41 percent of the Byrne grants had periods of time not covered by financial reports. Likewise, we estimate that 66 percent of the VAWO grants had periods of time during the grant period not covered by progress reports and 36 percent of the VAWO grants had periods of time not covered by financial reports. These gaps, cumulative over the life of the grant, ranged from as little as 2 weeks to over 3 years for progress reports and from 1 month to 1½ years for financial reports. With regard to progress reports, we estimate that 30 percent of Byrne grants and 18 percent of VAWO grants had more than 12 months that were not covered. With regard to financial reports, we estimate that only 4 percent of grants at both BJA and VAWO had more than 12 months that were not covered. Our analysis also showed that a few grant files did not contain any progress reports. For the files we reviewed, seven—three Byrne and four VAWO—contained no progress reports. We noted that OJP awarded one VAWO grantee two supplemental awards over the course of a 3-year period, even though there were no progress reports in the file. In total, these seven grant files without progress reports represent over $2 million in grantee funds awarded over a 5-year period. Finally, we noted examples of progress reports covering more than the required 6-month period; one VAWO progress report that was submitted to cover a 2½-year period was a half-page long. Given OJP’s requirement that progress reports are to be submitted for each 6-month period of the grant, the fact that reports covered periods well beyond the required 6 months raises questions about whether a grant manager has sufficient information to monitor progress and identify any potential grantee problems. We asked BJA and VAWO officials why these seven grant files did not contain any progress reports. BJA’s Acting Director at the time of our review said that reorganizations in BJA over the last 2 years have contributed to difficulty in ensuring complete and accurate grant files. 
VAWO officials explained that one of the four grant files involved a grantee that had some grant implementation problems that have since been corrected and said that, problems aside, the grantee had been unaware that progress reports were required for periods of grant inactivity. They said that another of the four grantees had been accustomed to the reporting requirements for formula grants, but was unaware of the different reporting requirements for discretionary grants. According to VAWO officials, this situation has since been corrected and the files have been made current. VAWO officials were unclear about why another grantee had not submitted progress reports. However, they said that the one grantee that had been awarded two supplemental grants is now scheduled for closeout because of the extended period of time the grantee has been delinquent in submitting progress reports. In addition, VAWO officials said that it was unacceptable for a grantee to submit a half-page progress report covering 2½ years. Our review of grant files also showed that late reports were common. OJP has specific guidance that spells out when progress and financial reports are to be filed by grantees. Progress reports are due 30 days after the close of each previous 6-month period, and financial reports are due 45 days after the end of the calendar quarter. We compared the dates progress and financial reports were supposed to be filed by grantees with the dates they were actually received by OJP’s Office of the Comptroller. We estimate that 68 percent of Byrne and 85 percent of VAWO progress reports were late. However, for both Byrne and VAWO, we estimate that about 40 percent of all reports were late by only about a month. Table 1 shows our estimates of the timeliness of Byrne and VAWO progress reports. Similarly, in the case of financial reports, our review showed an estimated 53 percent of Byrne and 54 percent of VAWO financial reports were submitted late. Similar to progress reports, about one-third of all reports were late by 30 days or less. Table 2 shows our estimates of the timeliness of Byrne and VAWO financial reports. OJP’s Office of the Comptroller also found problems similar to those we identified regarding financial reports. As mentioned earlier, the Office of the Comptroller conducts periodic financial reviews of the official grant file. We examined Comptroller records for 22 grants also covered in our review and found that they identified 8 Byrne and 9 VAWO files that were missing some financial reports and 5 Byrne and 8 VAWO files in which financial reports were late. We did not determine what was done to follow up on the late or missing financial reports found during these financial reviews. However, Comptroller procedures call for contacting the grant manager or grantee to request copies of documents that may be missing from the file and to ensure that all documentation related to the financial review is included in the official grant file before the review is closed. Our review also showed that BJA and VAWO grant managers did not always document key closeout activities for those files we examined. OJP requires that grants be closed in a timely manner and considers the process to be one of the most important aspects of grant administration. Closing out grants is the final step in a process by which OJP ensures that all required financial and progress reports and a final accounting of federal funds have been received. 
The timeframe for completion of closeout is no more than 180 days after the end of the grant project. According to OJP guidance, as part of the closeout process, the grant manager is to review the grant file and contact the grantee about the upcoming grant end date and final report submissions. The grant manager is to use the closeout checklist as a means of ensuring that all closeout requirements—the grantee’s submission of final progress and financial reports—are met. We identified 19 closed Byrne grants and 3 closed VAWO grants that ended between September 30, 1999, and August 31, 2000. There were not enough closed cases in our sample to ensure that our assessment was representative of all grant closeouts. However, our limited review showed that some grant files did not contain required closeout materials. For example, for the Byrne grants: 15 files did not contain documentation of the precloseout contact with the grantee; 9 files did not contain closeout checklists; 10 files did not contain the final narrative progress report; and 7 files did not contain the final financial report. For the VAWO grants: 3 files did not contain documentation of the precloseout contact with the grantee; 3 files did not contain closeout checklists; 2 files did not contain the final narrative progress report; and 1 file did not contain the final financial report. BJA and VAWO officials acknowledged that their file maintenance and documentation may not have always been in compliance with OJP monitoring requirements. Officials at both agencies stated that, in some instances, lack of documentation was because of an increased workload among grant managers. In addition, VAWO officials said that since VAWO’s inception, VAWO grant managers have had the responsibility to not only monitor an increasing volume of grants, but also to develop and implement several new grant programs. In commenting on a draft of this report, the Assistant Attorney General stated that BJA grant managers have had similar responsibilities. The Assistant Attorney General also commented that one reason for VAWO’s lack of documentation regarding site visits might be a past practice by which all VAWO grant managers’ memorandums included a standard monitoring plan that was developed in fiscal year 1995, when VAWO was responsible for monitoring one formula and one small discretionary grant program. Regardless, officials from both BJA and VAWO stated that the monitoring activities may still be taking place, even though they are not documented consistently. When asked how one would know whether a desk review had been done, one BJA official told us that desk reviews were not a specific process; rather, they were the type of activities that grant managers did on a day-to-day basis. He added that since BJA had no formalized process for conducting desk reviews, no documentation was required of the grant managers upon completion. Nonetheless, OJP’s guidance for the period covered by our review stated that a desk review form is to be prepared periodically to note, among other things, all contacts, reports, and product reviews. The form is also to include issues, accomplishments, and problems, noting recommended solutions. BJA’s Acting Director at the time of our review told us that oversight of grants can suffer, and has suffered, through changes at BJA, including reorganizations within BJA and the increased number of grants and greater workload for BJA grant managers. 
He said that there is no question that, as mentioned earlier, reorganizations contributed to the difficulty in ensuring complete and accurate grant files and cited transfers of grant files among grant managers as one reason why files were inaccurate or incomplete. He pointed out that BJA has made some changes, including the drafting of new policies and procedures, that are designed to assist grant managers in their grant responsibilities. For example, BJA’s draft procedures called for desk reviews to be performed by grant managers every 6 months or when files were transferred among grant managers. These reviews were to require that each grant manager fill out a checklist—covering such things as the completeness of grant paperwork and the timeliness of progress and financial reports—that, upon completion, was to be reviewed by a branch chief. According to the Acting Director at the time of our review, BJA’s guidance, which was undated, had been drafted sometime before July 2000, but had since been subsumed into OJP’s January 2001 update to its grant monitoring procedures. At the time of our review, it was unclear whether BJA had put any of these procedures into practice. In commenting on a draft of this report, the Assistant Attorney General noted that BJA has indicated that it expects to put these procedures into practice in fiscal year 2002. In those same comments, the Assistant Attorney General also stated that, like BJA, VAWO has been working on developing policies and procedures for monitoring grantees more effectively. For example, she stated that a VAWO Monitoring Working Group was formed in spring 2001. The group is working to develop a risk-based assessment tool to support the development of more realistic monitoring plans; pre-, post-, and on-site protocols for site visits; a standard site visit report form; and a comprehensive training program on monitoring for new employees. Also, according to the Assistant Attorney General, VAWO has developed a desk review checklist that will begin to be used in fiscal year 2002. OJP requires documentation of grant monitoring activities to provide assurance to OJP grant managers and supervisors that appropriate oversight of grant activities is taking place. It is possible that grant managers are conducting grant monitoring activities even if no documentation exists. However, without documentation, neither OJP, BJA, VAWO, nor we are positioned to tell with any certainty whether such monitoring occurred. The Comptroller General’s internal control standards require that all transactions and other significant events be clearly documented and that the documentation be readily available for examination. Appropriate documentation is an internal control activity to help ensure that management’s directives are carried out. Without such documentation, OJP, BJA, and VAWO have no assurance that grants are meeting their goals and funds are being used properly. In addition, such documentation is essential to systematically address grant performance problems. BJA and VAWO are not positioned to systematically determine grant managers’ compliance with monitoring requirements because documentation about monitoring activities is not readily available. BJA officials told us that they do not have a management information system to collect and analyze data that would help them oversee the monitoring process. Instead, they rely on staff meetings and informal discussions with staff to oversee grant monitoring activities and identify potential grantee problems. 
As discussed earlier, BJA’s Acting Director at the time of our review acknowledged that BJA had experienced some documentation problems and told us that, in addition to drafting the aforementioned BJA guidelines, BJA had begun to modify an officewide management information system to capture data on monitoring activities. According to BJA officials, the enhanced system is to enable grant managers to enter data on such activities as site visits and phone contacts so that they can be tracked. However, at the time of our field work, the system was still in the developmental stages. In commenting on a draft of this report, the Assistant Attorney General stated that the monitoring portion of this system was deployed in October 2001. Like BJA, VAWO does not have an overall management information system to track monitoring activities. VAWO officials said that they hold weekly staff meetings during which they rely on their grant managers to proactively identify and discuss any grant problems or monitoring issues. They added that VAWO has developed a computerized site visit tracking sheet that provides information about the details of grant managers’ on-site visits. A VAWO official said that information reported in site visit reports is shared at staff meetings and is accessible to all staff on VAWO’s internal computer system. VAWO officials also indicated that they are in the process of developing a management information system that will track, in addition to site visits, other monitoring activities such as the submission of progress and financial reports. However, like BJA, VAWO’s system was still in the developmental stages at the time of our review. BJA and VAWO also do not appear to be routinely using available OJP-wide data on late progress and financial reports that could help them identify potential grantee documentation problems. As mentioned earlier, the Office of the Comptroller has primary responsibility for carrying out the monitoring of the fiscal aspects of grants awarded by OJP bureaus and program offices, and the Control Desk issues monthly reports on whether grantee progress and financial reports are late. These reports are to be forwarded to the administrative officers in the bureaus and program offices. According to OJP’s Chief of Staff at the time of our review, once the monthly report is distributed, bureaus and program offices are responsible for determining what action to take regarding delinquent reports. BJA and VAWO program officials told us that they were aware of the monthly report, but the Acting Director of VAWO’s Program Management Division told us that VAWO probably uses the report every other month. A BJA supervisor indicated that he did not receive the reports, but he could get them, as needed, from the Office of the Comptroller. The lack of systematic data associated with program monitoring activities and the documentation problems we observed raise questions about whether BJA and VAWO are positioned to measure their performance consistent with the Government Performance and Results Act of 1993 (GPRA). 
For example, in its Fiscal Year 2000 Performance Report and Fiscal Year 2002 Performance Plan, DOJ articulated a strategic goal to “prevent and reduce crime and violence by assisting state, tribal, local and community-based programs.” DOJ’s performance report and plan list various annual goals and performance measures along with performance data needed to gauge performance, including three program targets and performance measures for VAWO formula and discretionary grant programs. To gauge annual performance for the three VAWO targets, the plan and report state that needed performance data will be obtained from grantee progress reports, on-site monitoring, and VAWO program office files; will be verified and validated through a review of grantee progress reports, telephone contacts with grantees, and on-site monitoring of grantee performance by grant program managers; and contain no known limitations. Although the DOJ performance report and plan state that there are no known data limitations, inconsistent documentation and the lack of systematic data could be a serious limitation that hinders VAWO’s ability to measure whether it is achieving its goals. VAWO officials told us that they are not satisfied with the current performance measures because they do not believe they are meaningful for measuring program outcomes. They said that they have begun to work with OJP’s Office of Budget and Management Services and an outside contractor to develop new measures. They added that their goal is to have these new measures available for the fiscal year 2003 GPRA performance plan. Over the last few years, we and others, including OJP, have identified various grant monitoring problems among OJP’s bureaus and offices. OJP has begun to work with its bureaus and offices, such as BJA and VAWO, to address these problems, but it is too early to tell whether its efforts will be enough to resolve many of the issues that we and others, including OJP, have identified. Since 1996, we have testified and issued reports that document grant monitoring problems among some of OJP’s bureaus and offices. In 1996, we testified on the operations of the Office of Juvenile Justice and Delinquency Prevention (OJJDP). We found that almost all of the official discretionary grant files we reviewed contained monitoring plans, but there was little evidence that monitoring had occurred. More recently, in an October 2001 report, we observed many of the same issues concerning OJJDP’s lack of documentation of its monitoring activities. Also, in 1999, we issued a report that, among other things, addressed how OJP’s Executive Office for Weed and Seed monitors local Weed and Seed sites to ensure that grant requirements are met. We found that, during fiscal year 1998, grantees had not submitted all of the required progress reports and grant managers had not always documented the results of their on-site monitoring visits. OJP has also identified problems with grant monitoring. In 1996, an OJP-wide working group, established at the request of the Assistant Attorney General, issued a report on various aspects of the grant process, including grant administration and monitoring. 
Among other things, the working group found that the administration of grants, including monitoring, was not standardized within OJP; given the monitoring resources available, monitoring plans were overly ambitious, the usual result being failure to attain the level of monitoring indicated in the plans; and an OJP-wide monitoring tracking system was needed to document all monitoring activities on an individual grant and facilitate control of the monitoring process. The working group recommended that OJP establish another working group to develop detailed operating procedures, giving special attention to the issue of grant monitoring. Almost 4 years later, in February 2000, Dougherty and Associates, under contract, delivered a report to OJP containing similar findings. The report stated that OJP lacks consistent procedures and practices for performing grant management functions, including grant monitoring, across the agency. For example, according to the report, no formal guidance had been provided to grant managers on how stringent or flexible they should be with grantees in enforcing deadlines, due dates, and other grant requirements. Also, the report stated that the official grant files were often not complete or reliable. To improve grant monitoring, the contractor recommended, among other things, that OJP develop an agencywide, coordinated, and integrated monitoring strategy; standardize guidelines and procedures for conducting site visits, product reviews, and other monitoring activities; and mandate the timely filing of monitoring reports. DOJ’s Office of the Inspector General has also identified and reported on OJP-wide grant management and monitoring problems. For example, in December 2000, the Inspector General identified grant management as one of the 10 major management challenges facing DOJ. The Inspector General stated that DOJ’s multibillion-dollar grant programs are a high risk for fraud, given the amount of money involved and the tens of thousands of grantees. Among other things, the Inspector General said that past reviews determined that many grantees did not submit the required progress and financial reports and that program officials’ on-site monitoring reviews did not consistently address all grant conditions. OJP has begun to work with bureaus and offices to resolve some of the problems it and others have identified, but it is much too early to tell how effective these efforts will be in resolving these issues. In its Fiscal Year 2000 Performance Report and Fiscal Year 2002 Performance Plan developed under GPRA, OJP established a goal to achieve the effective management of grants. Among other things, DOJ plans to achieve this goal by continued progress toward full implementation of a new Grant Management System as a way of standardizing and streamlining the grant process. According to the performance report and performance plan, GMS will assist OJP in setting priorities for program monitoring and facilitate timely program and financial reports from grantees. OJP’s Chief of Staff at the time of our review told us that, in his view, the only way to ensure that staff consistently document their monitoring activities is to require grant managers to enter information about their monitoring activities, when they occur, into an automated system, like GMS. 
He said that, currently, OJP runs the risk of losing or misplacing key documentation, especially since documents are kept in two files physically maintained by two organizations in different locations—one, the official file maintained by the Office of the Comptroller, and the other, the grant manager’s file maintained by the program office. He said that the new system currently covers grants for some OJP organizations up to the award stage, but that OJP’s goal is to include tracking information about all stages of the grant process from preaward through closeout. Although GMS may ultimately help OJP better manage the grant administration process, DOJ’s Fiscal Year 2000 Performance Report and Fiscal Year 2002 Performance Plan do not state when GMS will be expanded—either to all of the OJP components or to include the full range and scope of monitoring activities. Regarding the latter, OJP’s Director of the Office of Budget and Management Services indicated that it is unlikely that GMS will cover the full range of monitoring activities; instead, OJP would be more likely to develop a monitoring management information system to capture monitoring data that would link to GMS. The Director said that OJP has formed an OJP-wide working group to further study data issues related to monitoring activities, but the group is in its preliminary stages and has yet to develop a charter to define its activities. OJP has also been working on two key efforts to enhance its ability to better control grant administration. One of these initiatives, "Operation Closeout," was a pilot project announced in February 2000 by OJP’s Working Group on Grant Administration that was to, among other things, accelerate the grant closeout process through revised closeout guidelines and elevate the importance of the closeout function as a required procedure in the administration of grants. In November 2000, the working group announced that it had realized several of the Operation’s objectives and, working with the Office of the Comptroller, was able to reduce the backlog of grants, including some managed by BJA and VAWO, that were eligible to be closed but had not been closed. According to the Chairman of the working group, this operation closed out 4,136 outstanding grants over a 6-month period, resulting in over $30 million in deobligated funds. In September 2001, the Chairman said that OJP was going to initiate another closeout operation based on the success of the pilot. Another OJP initiative involved the development and issuance of new OJP-wide guidance for grant administration, including grant monitoring. As mentioned earlier, in January 2001, OJP released its Grant Management Policies and Procedures Manual to update and codify OJP’s current policies and procedures regarding its business practices. According to OJP officials, the new guidance was developed at the direction of the former Assistant Attorney General to address overall concerns about weaknesses in the 1992 version. The guidance was developed over a period of about 2 years, with the goal of reengineering the grant management process based on the best practices of bureaus and offices throughout OJP. For example, the changes include provisions pertaining to some of the aforementioned closeout activities: grantees are now given 120 days to submit final financial reports (instead of 90 days). Also, grant managers are given greater latitude to close out a grant if they have been unsuccessful in obtaining the final financial report from the grantee. 
OJP trained over 300 grant managers during summer 2001 and, according to the Chairman of the working group, intends to train supervisors about the new guidance in fall 2001. OJP also has drafted and plans to send a questionnaire to recently trained grant managers to identify any issues or problems with using the online manual and to identify potential training interest and topics. OJP plans to develop and send a similar questionnaire to supervisors once they are trained. However, the Chairman indicated that there are no plans to test or systematically monitor compliance with the new guidelines to ensure that grant managers are fulfilling their responsibilities. He said that OJP had not contemplated testing or systematic monitoring because of other initiatives currently under way. Because BJA and VAWO discretionary grant files were insufficiently documented, neither OJP, BJA, VAWO, nor we can determine the level of monitoring being performed by grant managers as required by OJP and the Comptroller General’s internal control standards. BJA and VAWO supervisors rely on staff meetings and discussions with staff to alert them to grantee problems or grant monitoring issues, but these activities are not sufficient to ensure that required monitoring is taking place or that the proper documentation is occurring. Furthermore, BJA and VAWO do not have readily available data on most monitoring activities that would help them determine grant managers’ compliance with OJP guidelines, and even when data are available, it is not clear that supervisors use the data to ensure that monitoring activities occurred. The lack of systematic data, combined with poor documentation, limits BJA and VAWO’s ability to manage the grant monitoring process so that they can determine whether grant managers are monitoring grantees and, if not, why not, or, if they are, why they are not documenting their activities. It also hinders BJA and VAWO’s ability to measure their performance consistent with GPRA, especially given that DOJ is relying on data collected through grant monitoring to measure the Department’s performance for many of its grant programs. Furthermore, it puts at risk BJA and VAWO’s ability to ensure that the millions of dollars in discretionary grant funds that they distribute are effectively and responsibly managed. Grant monitoring problems have been long-standing at DOJ, and although OJP has taken steps intended to resolve some of them, it is too early to tell whether these steps will effectively solve the types of documentation problems that we and others have identified. Automation of the grant management process, particularly in regard to grant monitoring, holds some promise if OJP takes steps to ensure that all monitoring activities are consistently recorded and maintained in a timely manner. However, current and future efforts will be futile unless OJP and its bureaus and offices, such as BJA and VAWO, periodically test grant manager compliance with OJP requirements and take corrective actions to enforce those requirements. To facilitate and improve the management of program monitoring, we recommend that the Attorney General direct BJA and VAWO to review whether the documentation problems we identified were an indication of grant monitoring requirements not being met or of a failure to document activities that did, in fact, take place. 
If monitoring requirements are not being met, we recommend that the Attorney General direct BJA and VAWO to determine why this is so and to take into account those reasons as they consider solutions for improving compliance with the requirements. If it is determined that required monitoring is taking place but is not being documented, we recommend that the Attorney General direct BJA and VAWO to periodically articulate and enforce clear expectations regarding documentation of monitoring activities. We also recommend that the Attorney General direct OJP to study and recommend ways to establish an OJP-wide approach for systematically testing or reviewing official and program files to ensure that the grant managers in its various bureaus and offices are consistently documenting their monitoring activities in accordance with OJP requirements and the Comptroller General’s internal control standards. Furthermore, we recommend that the Attorney General direct OJP to explore ways to electronically compile and maintain documentation of monitoring activities to facilitate (1) more consistent documentation among grant managers; (2) more accessible oversight by bureau and program office managers; and (3) sound performance measurement, consistent with GPRA. We provided a copy of a draft of this report to the Attorney General for review and comment. In a November 9, 2001, letter, the Assistant Attorney General commented on the draft. Her comments are summarized below and presented in their entirety in appendix III. The Assistant Attorney General said that, overall, the report provides useful information in highlighting management and monitoring activities in need of improvement. She noted that BJA and VAWO have already taken steps to address the recommendations for follow-up action included in the draft report. For example, the Assistant Attorney General said that BJA has taken steps to, among other things, expand its grants tracking system to include tracking of staff and grantee contacts and to institute a policy that a desk review be conducted twice per year for all grants. With regard to VAWO, the Assistant Attorney General said that, among other things, VAWO had established a monitoring working group tasked with developing policies and procedures for monitoring grantees more effectively, including more realistic monitoring plans and a standardized site visit reporting format. We acknowledge that BJA and VAWO appear to be taking steps in the right direction toward resolving some of the issues we identified. However, until these actions become operational, BJA and VAWO will not be able to determine whether the problems we identified constitute either a failure to carry out required monitoring activities or a failure to document monitoring activities. Once BJA and VAWO make this determination, they will be better positioned to consider what additional steps need to be taken, such as articulating and enforcing clear expectations regarding the documentation of monitoring activities. In her letter, the Assistant Attorney General did not address our recommendations that OJP (1) study and recommend ways to establish an OJP-wide approach for systematically testing or reviewing official and program grant files or (2) explore ways to electronically compile and maintain documentation of monitoring activities. 
Although the steps BJA and VAWO are taking may help them better understand and act upon problems associated with the documentation of monitoring activities, the steps discussed in the Assistant Attorney General’s letter appear to be actions specific to those organizations. Thus, it is unclear whether and to what extent those actions can be applied throughout OJP. Without a more focused and concerted effort to implement an OJP-wide approach for systematically testing or reviewing program grant files and an automated approach to compile and document monitoring data, OJP could continue to face the grant monitoring problems we and others, including OJP, have identified. In addition to the above comments, the Assistant Attorney General made a number of suggestions related to topics in this report. We have included the Assistant Attorney General's suggestions in the report, where appropriate. Also, the Assistant Attorney General provided other comments for which we did not make changes. See appendix III for a more detailed discussion of the Assistant Attorney General's comments. We are sending copies of this report to the Chairmen of the Senate Judiciary Committee and its Subcommittee on Youth Violence; Chairmen and Ranking Minority Members of the House Committee on Education and the Workforce; Attorney General; OJP Assistant Attorney General; BJA Administrator; VAWO Administrator; and Director, Office of Management and Budget. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact John F. Mortin or me at (202) 512-8777. Key contributors to this report are acknowledged in appendix IV. The Office of Justice Programs (OJP) and its bureaus and offices, including the Bureau of Justice Assistance (BJA) and the Violence Against Women Office (VAWO), experienced budget growth in the latter half of the 1990s, following the passage of the 1994 Crime Act. According to data we obtained from OJP, in the 1990s, the yearly number of BJA discretionary awards and total dollar amount of those awards fluctuated somewhat, but generally increased. The number of Byrne discretionary grant awards has decreased since a high point in fiscal year 1994; however, the total yearly dollar amount has increased overall. Furthermore, while the yearly number of Byrne awards and the dollar amount of those awards generally increased, there were some yearly decreases. Following the creation of VAWO in 1995, the yearly number of discretionary awards and total dollar amount of those awards increased overall. According to OJP data, from 1990 through 2000, OJP’s budget grew, in constant fiscal year 2000 dollars, by 323 percent, from about $916 million in fiscal year 1990 to nearly $3.9 billion in fiscal year 2000. Our analysis of the OJP data also showed that BJA’s budget grew during the 1990s, but to a lesser extent than OJP’s. BJA’s budget increased by 173 percent, from about $618 million in fiscal year 1990 to nearly $1.7 billion in fiscal year 2000. In fiscal year 1996, OJP’s and BJA’s budgets increased sharply after enactment of the 1994 Crime Act. BJA’s budget as a percentage of OJP’s budget decreased from about 67 percent in fiscal year 1990 to 57 percent in fiscal year 1996. Following its creation in 1995, VAWO’s budget increased by 42 percent, from $176 million in fiscal year 1996, its first full year of funding, to about $250 million in fiscal year 2000. 
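The budget growth rates above follow directly from the constant fiscal year 2000 dollar endpoints. The short sketch below is illustrative only; it recomputes the percentages from the rounded dollar figures quoted in this appendix, so the results can differ by a few percentage points from the reported rates, which reflect unrounded OJP data.

```python
def percent_growth(start, end):
    """Percent change from a starting amount to an ending amount."""
    return (end - start) / start * 100

# Rounded constant fiscal year 2000 dollar figures quoted above, in millions.
budgets = {
    "OJP (FY 1990-2000)":  (916, 3900),   # reported growth: 323 percent
    "BJA (FY 1990-2000)":  (618, 1700),   # reported growth: 173 percent
    "VAWO (FY 1996-2000)": (176, 250),    # reported growth: 42 percent
}
for office, (start, end) in budgets.items():
    print(f"{office}: about {percent_growth(start, end):.0f} percent")
# OJP (FY 1990-2000): about 326 percent
# BJA (FY 1990-2000): about 175 percent
# VAWO (FY 1996-2000): about 42 percent
```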
Figure 2 shows the growth in overall OJP, BJA, and VAWO budgets, illustrating how BJA and VAWO fit into the overall OJP budget from fiscal years 1990 through 2000. For this report, we analyzed the growth in the number and dollar amount of discretionary awards each year by BJA and VAWO from 1990 through 2000, based on data provided by OJP. From 1990 to 2000, the number of BJA Byrne discretionary awards increased by about 83 percent—from 54 in fiscal year 1990 to 99 in fiscal year 2000. Overall, the total number of BJA discretionary awards, including Byrne discretionary awards, increased by about 320 percent—from 65 in fiscal year 1990 to 273 in fiscal year 2000. As shown in figure 3, the overall increase in BJA discretionary awards for the 11-year period we analyzed was moderate, with some yearly decreases. Along with the increased number of discretionary awards by BJA, the yearly total dollar amount of those awards also increased. As illustrated in figure 4 below, the total dollar amount of Byrne discretionary awards increased, in constant fiscal year 2000 dollars, by 256 percent—from about $19 million in fiscal year 1990 to almost $69 million in fiscal year 2000. The total dollar amount of all BJA discretionary awards, including Byrne discretionary awards, increased even more—by about 422 percent—from just over $36 million in fiscal year 1990 to nearly $189 million in fiscal year 2000. The increase in the yearly total dollar amount of Byrne discretionary awards was moderate, with some yearly decreases. For all BJA discretionary awards, particularly non-Byrne discretionary awards, the total dollar amount increases were substantial in fiscal years 1998 through 2000. Figure 4 shows the dollar amount of BJA discretionary awards (Byrne and non-Byrne) from fiscal year 1990 to 2000. Data provided by OJP showed that at VAWO, since its inception in 1995, the yearly total number and dollar amount of its discretionary awards also increased. As shown in the figures below, the yearly number of VAWO discretionary awards increased by about 362 percent—from 92 in fiscal year 1996, the first full year of funding, to 425 in fiscal year 2000. In addition, the yearly dollar amount of VAWO discretionary awards increased by about 940 percent—from just over $12 million in fiscal year 1996, the first full year of funding, to about $125 million in fiscal year 2000. Figure 5 shows the number of VAWO discretionary awards for each fiscal year from 1995 to 2000. Figure 6 shows the dollar amount of VAWO discretionary awards for the same period. OJP awards two types of grants: formula and discretionary. OJP formula grants are awarded directly to state governments, which then make subawards to state and local units of government. Discretionary grants can be awarded to states, local units of government, Indian tribes and tribal organizations, individuals, educational institutions, private nonprofit organizations, and private commercial organizations. With some discretionary grant programs, OJP has flexibility in selecting topics as well as grantees. Some discretionary awards are competitive, while others are non-competitive, owing to the limited amount of funds available to a limited number of potential recipients. Figure 7 summarizes the life of a discretionary grant from application to closeout. Once the application for the discretionary grant is accepted, OJP guidance requires the grant manager to prepare a grant manager’s memorandum, which OJP reviews before the award is made. 
The memorandum is to consist of the following, among other things: an overview of the project; a detailed description of what type of activities the grantee plans to undertake; a discussion of the financial justification of the grant funds and of the cost-effectiveness evaluation of the application; and a discussion of past assessments, where applicable. As part of a grant manager’s memorandum, each grant manager is to prepare a monitoring plan. The plan is to contain information on who will conduct the monitoring, how it will be done, and when and what type of monitoring activities are planned. Monitoring information is to be collected using such techniques as on-site visits, telephone calls, and desk reviews, which are reviews to ensure that the grant files are complete and that the grantee is in compliance with the program guidelines. Also, OJP guidance requires the grantee to file specific reports with the Office of the Comptroller: semiannual progress reports that summarize project activities and quarterly financial reports that provide an accounting of grant expenditures. The Office of the Comptroller is to forward the reports to grant managers. OJP is to apply the same process to supplemental awards as it does to the original award of a grant. When the grantee requests an extension, thus requiring supplemental funding, the grantee must repeat the award application process, and more time and money are expended. At the end of the grant period, the grant manager is required to close out the grant according to OJP guidance. Closing out grants is the final step in a process by which OJP ensures that all required financial and progress reports and a final accounting of federal funds have been received. Figure 8 illustrates the grant process, including supplemental or extended funding. The following are GAO’s comments on the Department of Justice’s November 9, 2001, letter. 1. We report on the growth in the number and dollar amount of VAWO discretionary grants since fiscal year 1996 in appendix I (see pp. 34-35). 2. We have amended the Background section of the report to add this information (see p. 6). 3. We disagree. As we stated in the report (see p. 2), financial monitoring was not within the scope of our work. It is important to note that the scope of our work was based on agreements with our requesters and was not influenced by whether or not financial monitoring information is included in OJP’s annual financial statement audit. 4. According to the OJP Handbook: Policies and Procedures for the Administration of OJP Grants (Feb. 1992), official grant files kept by the Office of the Comptroller’s Control Desk are to contain documents relating to each grantee, including progress and financial reports and site visit reports. In addition, for documentation to be readily available for examination, as required by the Comptroller General’s internal control standards, keeping these documents in the official grant files seems appropriate. 5. As we reported, in our review of closeout procedures, we waited more than the required 180 days before reviewing grant files to allow sufficient time for BJA and VAWO to complete the grant closeout process (see footnote 24, p. 18). However, the files we reviewed did not contain the required closeout documents. 6. As we reported, BJA and VAWO officials told us that supervisors discuss monitoring activities with staff through informal discussions or meetings, which could include one-on-one meetings with staff. 
As we stated, it is possible that grant managers are conducting grant monitoring activities even if no documentation exists. However, without documentation, neither OJP, BJA, VAWO, nor we are positioned to tell with any certainty whether such monitoring occurred (see pp. 20-21). 7. We have amended the report to add some of this information (see p. 7). As discussed in comment 1, we report on the growth in the number and dollar amount of VAWO discretionary grants in appendix I (see pp. 34-35). 8. We disagree. Financial monitoring was not part of our review as clearly stated in our introduction. Therefore, we do not believe that the title of this section, as stated, implies that financial monitoring was part of our review. 9. We agree that the grant files did not always contain documentation and acknowledge that the lack of documentation does not necessarily indicate whether monitoring did or did not occur. As we stated in comment 6, without required progress reports and other documentation, neither OJP, BJA, VAWO, nor we are positioned to tell with any certainty whether such monitoring occurred. 10. We have amended the report to add most of this information (see p. 20). A discussion of the development of VAWO’s management information system can be found on page 21 of the report. 11. As discussed in comment 6 and as we reported, it is possible that grant managers are conducting grant monitoring activities even if no documentation exists. However, without documentation, neither OJP, BJA, VAWO, nor we are positioned to tell with any certainty whether such monitoring occurred. 12. We disagree. As we reported, the Comptroller General’s internal control standards require that all transactions and other significant events be clearly documented and that the documentation be readily available for examination. Appropriate documentation is an internal control activity to help ensure that management’s directives are carried out. Without such documentation, OJP, BJA, and VAWO have no assurance that grants are meeting their goals and funds are being used properly. 13. We have amended the report to incorporate the first of these two changes (see p. 31). However, as illustrated in figures 3 and 4, the annual number of Byrne awards and the dollar amount of those awards have generally increased, although there were some yearly decreases (see pp. 33-34). 14. We reported that, according to VAWO officials, VAWO grant managers have sometimes been responsible for additional duties beyond grant monitoring over the last few years (see p. 19). In addition to the above, Kristeen G. McLain, Samuel A. Caldrone, Dennise R. Stickley, Keith R. Wandtke, Jerome T. Sandau, Michele C. Fejfar, and Ann H. Finley made key contributions to this report.
GAO reviewed grant monitoring and evaluation efforts by the U.S. Department of Justice's (DOJ) Office of Justice Programs (OJP). This report discusses the monitoring of discretionary grants awarded by the Bureau of Justice Assistance's (BJA) Byrne Program and the Violence Against Women Office (VAWO) within OJP. In constant 2000 dollars, Byrne and VAWO discretionary grants grew about 85 percent--from $105 million to $194 million between fiscal years 1997 and 2000. These funds were awarded to state and local governments, either on a competitive basis or pursuant to legislation allocating funds through congressional earmarks. BJA and VAWO, together with OJP's Office of the Comptroller, are responsible for monitoring these grants to ensure they are implemented as intended, are responsive to grant goals and objectives, and comply with statutory regulations and policy guidelines. OJP's monitoring requirements include the development of monitoring plans that articulate who will conduct monitoring, the manner in which it will be done, and when and what type of monitoring activities are planned. Grant managers are to maintain documentation in grant files using such techniques as written reports of on-site reviews and telephone interview write-ups. GAO's review of 46 Byrne and 84 VAWO discretionary grants indicated that only 29 percent of Byrne and 11 percent of VAWO award files contained monitoring plans. In addition, for awards covering the most recent 12-month period, grant managers were not consistently documenting their monitoring activities. BJA and VAWO cannot systematically oversee grant managers' compliance with monitoring requirements because documentation is not readily available. Both BJA and VAWO rely on staff meetings and discussions to identify grant problems or monitoring issues, and neither has a management information system to compile and analyze data on monitoring activities. OJP has begun to work with its bureaus and offices to address grant management problems, but it is too early to tell whether OJP's efforts will be effective.
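The growth figures reported above reduce to a simple percent-change calculation applied to award counts and to dollar amounts expressed in constant fiscal year 2000 dollars. The short Python sketch below illustrates that arithmetic using the reported award counts; the deflator used to convert a nominal amount to constant dollars is a hypothetical placeholder for illustration, not the price index OJP or GAO actually applied.

```python
def percent_change(old, new):
    """Percent change from an earlier value to a later value."""
    return (new - old) / old * 100.0

# Award counts reported for fiscal years 1990 and 2000.
print(round(percent_change(54, 99)))    # Byrne discretionary awards: about 83 percent
print(round(percent_change(65, 273)))   # all BJA discretionary awards: about 320 percent

# Comparing dollar amounts in constant fiscal year 2000 dollars requires
# inflating the earlier nominal figure first. The deflator below is a
# hypothetical placeholder, not the index actually used.
HYPOTHETICAL_DEFLATOR_1997_TO_2000 = 1.05
nominal_fy1997_millions = 100.0  # hypothetical nominal amount
constant_fy1997_millions = nominal_fy1997_millions * HYPOTHETICAL_DEFLATOR_1997_TO_2000
print(round(percent_change(constant_fy1997_millions, 194)))  # about 85 percent
```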
Pursuant to Executive Order 13327, the administration has taken several key actions to strategically manage real property. FRPC was established in 2004 and subsequently created interagency committees to work toward developing and implementing a strategy to accomplish the executive order. FRPC developed a sample asset management plan and published Guidance for Improved Asset Management in December 2004. In addition, FRPC established asset management principles that form the basis for the strategic objectives and goals in the agencies’ asset management programs and also worked with GSA to develop and enhance an inventory system known as the Federal Real Property Profile (FRPP). FRPP was designed to meet the executive order’s requirement for a single database that includes all real property under the control of executive branch agencies. The FRPC, with the assistance of the GSA Office of Government-wide Policy, developed 23 mandatory data elements, which include four performance measures. The four performance measures are utilization, condition index, mission dependency, and annual operating and maintenance costs. In addition, a performance assessment tool has been developed, which is to be used by agencies to analyze the inventory’s performance measurement data in order to identify properties for disposal or rehabilitation. In June 2006, FRPC added a data element for disposition that included six major types of disposition, such as sale, demolition, and public benefit conveyance. Finally, to assist agencies in their data submissions for the FRPP database, FRPC provided standards and definitions for the data elements and performance measures through guidance issued on December 22, 2004, and a data dictionary issued by GSA in October 2005. The first governmentwide reporting of inventory data for FRPP took place in December 2005, and selected data were included in the fiscal year 2005 FRPP published by GSA, on behalf of FRPC, in June 2006. Data on the four performance measures were not included in the FRPP report. Adding real property asset management to the PMA has increased its visibility as a key management challenge and focused greater attention on real property issues across the government. OMB has identified goals related to the four performance measures in the inventory for agencies to achieve in right-sizing their real property portfolios, and it is the administration’s goal to reduce the size of the federal real property inventory by 5 percent, or $15 billion, by disposing of unneeded assets by 2015. In October 2006, the administration reported that $3.5 billion in unneeded federal real property had been disposed of since 2004. To achieve these goals and gauge an agency’s success in accurately accounting for, maintaining, and managing its real property assets so as to efficiently meet its goals and objectives, the administration established the real property scorecard in the third quarter of fiscal year 2004. The scorecard consists of 13 standards that agencies must meet to achieve green status, which is the highest status. These 13 standards include 8 standards needed to achieve yellow status, plus 5 additional standards. An agency reaches “green” or “yellow” status if it meets all of the standards for success listed in the corresponding column in figure 1 and “red” status if it has any of the shortcomings listed in the “red” column. OMB evaluates agencies quarterly on their progress, and agencies then have an opportunity to update OMB on their status toward achieving green. 
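As a rough illustration of the scorecard logic just described, the Python sketch below encodes the status rules: an agency rates green only if it meets all 13 standards, yellow if it meets the 8 yellow-status standards but not all 13, and red otherwise. The standard names are hypothetical placeholders rather than OMB's actual wording.

```python
# A minimal sketch of the scorecard status rules, assuming a simple
# set-based representation of the standards an agency has met.
YELLOW_STANDARDS = {f"yellow_standard_{i}" for i in range(1, 9)}     # 8 standards
GREEN_ONLY_STANDARDS = {f"green_standard_{i}" for i in range(1, 6)}  # 5 additional
ALL_STANDARDS = YELLOW_STANDARDS | GREEN_ONLY_STANDARDS              # 13 in total

def scorecard_status(standards_met):
    """Return 'green', 'yellow', or 'red' for the set of standards an agency meets."""
    if ALL_STANDARDS <= standards_met:
        return "green"
    if YELLOW_STANDARDS <= standards_met:
        return "yellow"
    return "red"

# Example: an agency meeting the 8 yellow-status standards plus only one of
# the additional green standards would be rated yellow.
example = YELLOW_STANDARDS | {"green_standard_1"}
print(scorecard_status(example))  # yellow
```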
According to PMA real property scorecards, for the second quarter of fiscal year 2007, the Department of Labor is the only real property-holding agency included in the real property initiative that failed to meet the standards for yellow status, as shown in figure 2. All of the other agencies have, at a minimum, met the standards for yellow status. Among the 15 agencies under the real property initiative, 5 agencies—GSA, NASA, Energy, State, and VA—have achieved green status. According to OMB, the agencies achieving green status have established 3-year timelines for meeting the goals identified in their asset management plans; provided evidence that they are implementing their asset management plans; used real property inventory information and performance measures in decision making; and managed their real property in accordance with their strategic plan, asset management plan, and performance measures. Once an agency has achieved green status, OMB continues to monitor its progress and results through PMA using deliverables identified in its 3-year timeline and quarterly scorecards. Each quarter, OMB also provides formal feedback to agencies through the scorecard process, along with informal feedback, and clarifies expectations. Yellow-status agencies still have various standards to meet before achieving green. In addition to addressing their real property initiative requirements, some agencies have taken steps toward addressing some of their long-standing problems, including excess and underutilized property and deteriorating facilities. Some agencies are implementing various tools to prioritize reinvestment and disposal decisions on the basis of agency needs, utilization, and costs. For example, GSA officials reported that GSA’s Portfolio Restructuring Strategy sets priorities for disposal and reinvestment based on agency missions and anticipated future need for holdings. In addition, GSA developed a methodology to analyze its leased inventory in fiscal year 2005. This approach values leases over their life, not just at the point of award; considers financial performance and the impact of market rental rates on current and future leasing actions; and categorizes leases by their risk and value. Additionally, some agencies are taking steps to make the condition of core assets a priority and address maintenance backlog challenges. For example, Energy officials reported establishing budget targets to align maintenance funding with industry standards as well as programs to reduce the maintenance backlogs associated with specific programs. In addition, Interior officials reported that the department has conducted condition assessments for 72,233 assets as of the fourth quarter of fiscal year 2006. As mentioned previously, Executive Order 13327 requires that OMB, along with landholding agencies, develop legislative initiatives to improve federal real property management and establish accountability for implementing effective and efficient real property management practices. Some individual agencies have obtained legislative authority in recent years to use certain real property management tools, but no comprehensive legislation has been enacted. Some agencies have received special real property management authorities, such as the authority to enter into EUL agreements. These agencies are also authorized to retain the proceeds of the lease and to use them for items specified by law, such as improvement of their real property assets. 
DOD, Energy, Interior, NASA, USPS, and VA are authorized to enter into EUL agreements and have authority to retain proceeds from the lease. These authorities vary from agency to agency, and in some cases, these authorities are limited. For example, NASA is authorized to enter into EUL agreements at two of its centers, and VA’s authority to enter into EUL agreements expires in 2011. In addition, VA was authorized in 2004 to transfer real property under its jurisdiction or control and to retain the proceeds from the transfer in a capital asset fund for property transfer costs, including demolition, environmental remediation, and maintenance and repair costs. VA officials noted that although VA is authorized to transfer real property under its jurisdiction or control and to retain the proceeds from such transfers, this authority places significant limitations on the use of any funds generated by such disposals. Additionally, GSA was given the authority to retain proceeds from disposal of its real property and to use the proceeds for its real property needs. Agencies with enhanced authorities believe that these authorities have greatly improved their ability to manage their real property portfolios and operate in a more businesslike manner. Overall, the administration’s efforts to raise the level of attention to real property as a key management challenge and to establish guidelines for improvement are noteworthy. The administrative tools, including asset management plans, inventories, and performance measures, were not in place to strategically manage real property before we updated our high-risk list in January 2005. The actions taken by major real property-holding agencies and the administration to establish such tools are clearly positive steps. However, these administrative tools and the real property initiative have not been fully implemented, and it is too early to determine if they will have a lasting impact. Implementation of these tools has the potential to produce results such as reductions in excess property, reduced maintenance and repair backlogs, less reliance on leasing, and an inventory that is shown to be reliable and valid. Although clear progress has been made toward strategically managing federal real property and addressing some long-standing problems, real property remains a high-risk area because the problems persist and obstacles remain. Agencies continue to face long-standing problems in the federal real property area, including excess and underutilized property, deteriorating facilities and maintenance and repair backlogs, reliance on costly leasing, and unreliable real property data. Federal agencies also continue to face many challenges securing real property. These problems are still pervasive at many of the major real property-holding agencies, despite agencies’ individual attempts to address them. Although the changes being made to strategically manage real property are positive and some realignment has taken place, agencies’ real property portfolios remain generally outmoded. As we have reported, this trend largely reflects a business model and the technological and transportation environment of the 1950s. Many of these assets and organizational structures are no longer needed; others are not effectively aligned with, or responsive to, agencies’ changing missions. 
While some major real property-holding agencies have had some success in attempting to realign their infrastructures in accordance with their changing missions, others still maintain a significant amount of excess and underutilized property. For example, officials with Energy, DHS, and NASA—which are three of the largest real property-holding agencies—reported that over 10 percent of the facilities in their inventories were excess or underutilized. The magnitude of the problem with underutilized or excess federal real property continues to put the government at risk for lost dollars and missed opportunities. Table 1 describes the status of excess and underutilized real property challenges at the nine major real property-holding agencies. Addressing the needs of aging and deteriorating federal facilities remains a problem for major real property-holding agencies. According to recent estimates, tens of billions of dollars will be needed to repair or restore these assets so that they are fully functional. Furthermore, much of the federal portfolio was constructed over 50 years ago, and these assets are reaching the end of their useful lives. Energy, NASA, GSA, Interior, State, and VA reported repair and maintenance backlogs for buildings and structures that total over $16 billion. In addition, DOD reported a $57 billion restoration and modernization backlog. We found that there was variation in how agencies reported data on their backlog. Some agencies reported deferred maintenance figures consistent with the definition used for data on deferred maintenance included in their financial statements. Others provided data that included major renovation or restoration needs. More specifically: For DOD, facilities restoration and modernization requirements total over $57 billion. Officials noted that the backlog does not reflect the impact of 2005 Base Realignment and Closures (BRAC) or related strategic rebasing decisions that will be implemented over the next several years. For Energy, the backlog in fiscal year 2005 for a portfolio valued at $85.2 billion was $3.6 billion. For Interior, officials reported an estimated maintenance backlog of over $3 billion for buildings and other structures. GSA’s current maintenance backlog is estimated at $6.6 billion. For State, the maintenance backlog is estimated at $132 million, which includes all of the deferred/unfunded maintenance and repair needs for prior fiscal years. For NASA, the restoration and repair backlog is estimated at over $2.05 billion as of the end of fiscal year 2006. For VA, the maintenance backlog for facilities with major repair needs is estimated at $5 billion, and according to VA officials, VA must address this aged infrastructure while patient loads are changing. Many of the major real property-holding agencies continue to rely on leased space to meet new space needs. As a general rule, building ownership options through construction or purchase are often the least expensive ways to meet agencies’ long-term requirements. Lease purchases—under which payments are spread out over time and ownership of the asset is eventually transferred to the government—are often more expensive than purchase or construction but are generally less costly than using ordinary operating leases to meet long-term space needs. 
For example, we testified in October 2005 that for the Patent and Trademark Office’s long-term requirements in northern Virginia, the cost of an operating lease was estimated to be $48 million more than construction and $38 million more than lease purchase. However, over the last decade we have reported that GSA—as the central leasing agent for most agencies—relies heavily on operating leases to meet new long-term needs because it lacks funds to pursue ownership. Operating leases have become an attractive option, in part because they generally “look cheaper” in any given year, even though they are often more costly over time. Under current budget scorekeeping rules, the budget generally should record the full cost of the government’s commitment. Operating leases were intended for short-term needs; thus, under the scorekeeping rules, for self-insuring entities, only the amount needed to cover the first-year lease payments plus cancellation costs needs to be recorded. However, the rules have been stretched to allow budget authority for some long-term needs being met with operating leases to be spread out over the term of the lease, thereby disguising the fact that over time, leasing will cost more than ownership. Resolving this problem has been difficult; however, change is needed because the current practice of relying on costly leasing to meet long-term space needs results in excessive costs to taxpayers and does not reflect a sensible or economically rational approach to capital asset management when ownership would be more cost-effective. Five of the nine largest real property-holding agencies—Energy, Interior, GSA, State, and VA—reported an increased reliance on operating leases to meet new space needs over the past 5 years. According to DHS officials, based on a review of GSA’s fiscal year 2005 and 2006 lease acquisition data, there has been no significant increase in GSA-acquired leased space for DHS. In addition, officials from NASA and USPS reported that their agencies’ use of operating leases has remained at about the same level over the past 5 years. We did not analyze whether the leasing activity at these agencies, either in the aggregate or for individual leases, resulted in longer-term costs than if these agencies had pursued ownership. For short-term needs, leasing likely makes economic sense for the government in many cases. However, our past work has shown that, generally speaking, for long-term space needs, leasing is often more costly over time than direct ownership of these assets. While the administration and agencies have made progress in collecting standardized data elements needed to strategically manage real property, the long-term benefits of the new real property inventory have not yet been realized, and this effort is still in the early stages. The federal government has made progress in revamping its governmentwide real property inventory since our 2003 high-risk designation. The first governmentwide reporting of inventory data for FRPP took place in December 2005, and GSA published the data on behalf of FRPC in June 2006. According to the 2005 FRPP report, the goals of the centralized database are to improve decision making with accurate and reliable data, provide the ability to benchmark federal real property assets, and consolidate governmentwide real property data collection into one system. 
According to FRPC, these improvements in real property and agency performance data will result in reduced operating costs, improved asset utilization, recovered asset values, and improved facility conditions, among other benefits. It is important to note that real property data contained in the financial statements of the U.S. government have also been problematic. The CFO Act, as expanded by the Government Management Reform Act, requires the annual preparation and audit of individual financial statements for the federal government’s 24 major agencies. The Department of the Treasury is also required to compile consolidated financial statements for the U.S. government annually, which we audit. In March 2007, we reported that—for the tenth consecutive year—certain material weaknesses in internal controls and in selected accounting and financial reporting practices resulted in conditions that continued to prevent us from being able to provide the Congress and the American people with an opinion as to whether the consolidated financial statements of the U.S. government were fairly stated in conformity with U.S. generally accepted accounting principles. Further, we reported that the federal government did not maintain effective internal control over financial reporting (including safeguarding assets) and compliance with significant laws and regulations as of September 30, 2006. While agencies have made significant progress in collecting the data elements from their real property inventory databases for the FRPP, data reliability is still a problem at some of the major real property-holding agencies, and agencies lack a standard framework for assessing the validity of data used to populate the FRPP. Quality governmentwide and agency-specific data are critical for addressing the wide range of problems facing the government in the real property area, including excess and unneeded property, deterioration, and security concerns. Despite the progress made by the administration and individual agencies in recent years, decision makers historically have not had access to complete, accurate, and timely data on what real property assets the government owns; their value; whether the assets are being used efficiently; and what overall costs are involved in preserving, protecting, and investing in them. Also, real property-holding agencies have not been able to easily identify excess or unneeded properties at other agencies that may suit their needs. For example, in April 2006, the DOD Inspector General (IG) reported weaknesses in the control environment and control activities that led to deficiencies in the areas of human capital assets, knowledge management, and compliance with policies and procedures related to real property management. As a result, the military departments’ real property databases were inaccurate, jeopardizing internal control over transactions reported in the financial statements. Compounding these issues is the difficulty each agency has in validating its real property inventory data that are submitted to FRPP. Validation of individual agencies’ data is important because the data are used to populate the FRPP. Because a reliable FRPP is needed to advance the administration’s real property initiative, ensuring the validity of data that agencies provide is critical. In general, we found that agencies’ efforts to validate the data for the FRPP are at the very early stages of development. 
For example, according to Interior officials, the department had designed and was to begin implementing a program of validating, monitoring, and improving the quality of data reported into FRPP in the last quarter of fiscal year 2006. Furthermore, according to OMB staff, there is no comprehensive review or validation of data once agencies submit their real property profile data to OMB. OMB staff reported that both OMB and GSA review agency data submissions for variances from the prior reporting period. However, agencies are required to validate their data prior to submission to the GSA-managed database. OMB staff reported that some agencies, as part of the PMA initiative, have provided OMB with plans for ensuring the quality of their inventory and performance data, although OMB has not, to date, requested these plans of all agencies. OMB staff also reported that agencies provide OMB with information that includes the frequency of data updates and any methods used for data validation. In addition, according to OMB staff, OMB relies on the quality assurance and quality control processes performed by individual agencies. Also, OMB staff noted that they rely on agency IGs, agency financial statements, and our reviews to establish the validity of the data. Furthermore, OMB staff indicated that a one-size-fits-all approach to data validation would be difficult to implement. Nonetheless, a general framework for data validation that could guide agencies in this area would be helpful, as agencies continue their efforts to populate the FRPP with data from their existing data systems. A framework for FRPP data validation approaches could be used in conjunction with the more ad hoc validation efforts OMB mentioned to suggest, at a minimum, standards for frequency of validation, validation methods, error tolerance, and reporting on reliability. Such a framework would promote a more comprehensive approach to FRPP data validation. In our recent report, we recommended that OMB, in conjunction with the FRPC, develop a framework that agencies can use to better ensure the validity and usefulness of key real property data in the FRPP. The threat of terrorism has increased the emphasis on physical security for federal real property assets. All of the nine agencies reported using risk-based approaches to some degree to prioritize facility security needs, as we have suggested, but some agencies cited challenges, including a lack of resources for security enhancements and issues associated with securing leased space. For example, DHS officials reported that the department is working to further develop a risk management approach that balances security requirements and the acquisition of real property and leverages limited resources for all its components. In many instances, available real property requires security enhancements before government agencies can occupy the space. Officials reported that these security upgrades require funding that is beyond the cost of acquiring the property, and, therefore, the acquisition of such space is largely dependent on the availability of sufficient resources. While some agencies have indicated that they have made progress in using risk-based approaches, some officials told us that they still face considerable challenges in balancing their security needs and other real property management needs with their limited resources. 
According to GSA officials, obtaining funding for security countermeasures, both security fixtures and equipment, is a challenge not only for GSA but for GSA’s tenant agencies as well. In addition, Interior and NASA officials reported that their agencies face budget and resource constraints in securing real property. Interior officials further noted that despite these limitations, incremental progress is made each year in security. Given their competing priorities and limited security resources, some of the major real property-holding agencies face considerable challenges in balancing their security and real property management needs. We have reported that agencies could benefit from specific performance measurement guidance and standards for facility protection to help them address the challenges they face and help ensure that their physical security efforts are achieving the desired results. Without a means of comparing the effectiveness of security measures across facilities, particularly program outcomes, the U.S. government is open to the risk of either spending more money for less effective physical security measures or investing in the wrong areas. Furthermore, performance measurement helps ensure accountability, since it enables decision makers to isolate certain activities that are hindering an agency’s ability to achieve its strategic goals. Performance measurement can also be used to prioritize security needs and justify investment decisions so that an agency can maximize available resources. Despite the magnitude of the security problem, we noted that this area is largely unaddressed in the real property initiative. Without formally addressing security, there is a risk that this challenge could continue to impede progress in other areas. The security problem has an impact on the other problems that have been discussed. For example, to the extent that funding will be needed for a sustained investment in security, the funding available for repair and restoration, preparing excess property for disposal, and improving real property data systems may be further constrained. Furthermore, security requires significant staff time and other human capital resources, and thus real property managers may have less time to manage other problems. In past high-risk reports, we called for a transformation strategy to address long-standing real property problems. While the administration’s current approach is generally consistent with what we envisioned and the administration’s central focus on real property management is a positive step, certain areas warrant further attention. Specifically, problems are exacerbated by underlying obstacles that include competing stakeholder interests and legal and budgetary limitations. For example, some agencies cited local interests as barriers to disposing of excess property. In addition, agencies’ limited ability to pursue ownership often leads them to lease property that they could more cost-effectively own over time. Another obstacle—the need for improved long-term capital planning—remains despite OMB efforts to enhance related guidance. Some major real property-holding agencies reported that competing local, state, and political interests often impede their ability to make real property management decisions, such as decisions about disposing of unneeded property and acquiring real property. 
For example, VA officials reported that disposal is often not an option for most properties because of political stakeholders and constituencies, including historic building advocates or local communities that want to maintain their relationship with VA. In addition, VA officials said that obtaining the funding to follow through on Capital Asset Realignment for Enhanced Services (CARES) decisions is a challenge because of competing priorities. Also, Interior officials reported that the department faces significant challenges in balancing the needs and concerns of local and state governments, historical preservation offices, political interests, and others, particularly when coupled with budget constraints. Other agencies cited similar challenges related to competing stakeholder interests. If the interests of competing stakeholders are not appropriately addressed early in the planning stage, they can adversely affect the cost, schedule, and scope of a project. Despite its significance, the obstacle of competing stakeholder interests has gone unaddressed in the real property initiative. It is important to note that there is precedent for lessening the impact of competing stakeholder interests. BRAC decisions, by design, are intended to be removed from the political process, and Congress approves BRAC decisions as a whole. OMB staff said they recognize the significance of the obstacle and told us that FRPC would begin to address the issue after the inventory is established and other reforms are initiated. Unless this issue is addressed, however, less-than-optimal decisions that are not based on what is best for the government as a whole may continue. As discussed earlier, budgetary limitations that hinder agencies’ ability to fund ownership lead agencies to rely on costly leased space to meet new space needs. Furthermore, the administrative complexity and costs of disposing of federal property continue to hamper some agencies’ efforts to address their excess and underutilized real property problems. Federal agencies are required by law to assess and pay for any environmental cleanup that may be needed before disposing of a property—a process that may require years of study and result in significant costs. As valuable as these legal requirements are, their administrative complexity and the associated costs of complying with them create disincentives to the disposal of excess property. For example, we reported that VA, like all federal agencies, must comply with federal laws and regulations governing property disposal that are intended, for example, to protect subsequent users of the property from environmental hazards and to preserve historically significant sites. We have reported that some VA managers have retained excess property because the administrative complexity and costs of complying with these requirements were disincentives to disposal. Additionally, some agencies reported that the costs of cleanup and demolition sometimes exceed the costs of continuing to maintain a property that has been shut down. In such cases, in the short run, it can be more beneficial economically to retain the asset in a shut-down status. Given that agencies are required to fund the costs of preparing property for disposal, the inability to retain any of the proceeds acts as an additional disincentive. 
It seems reasonable to allow agencies to retain enough of the proceeds to recoup the costs of disposal, and it may make sense to permit agencies to retain additional proceeds for reinvestment in real property where a need exists. However, in considering whether to allow federal agencies to retain proceeds from real property transactions, it is important for Congress to ensure that it maintains appropriate control and oversight over these funds, including the ability to redistribute the funds to accommodate changing needs. In our recent report, we recommended that OMB, in conjunction with the FRPC, develop an action plan for how the FRPC will address key problems, including the continued reliance on costly leasing in cases where ownership is more cost effective over the long term, the challenges of securing real property assets, and reducing the effect of competing stakeholder interests on businesslike outcomes in real property decisions. Over the years, we have reported that prudent capital planning can help agencies to make the most of limited resources, and that failure to make timely and effective capital acquisitions can result in acquisitions that cost more than anticipated, fall behind schedule, and fail to meet mission needs and goals. In addition, Congress and OMB have acknowledged the need to improve federal decision making regarding capital investment. A number of laws enacted in the 1990s placed increased emphasis on improving capital decision-making practices, and OMB’s Capital Programming Guide and its revisions to Circular A-11 have attempted to address the government’s shortcomings in this area. Our prior work assessing agencies’ implementation of the planning phase principles in OMB’s Capital Programming Guide and our Executive Guide found that some agencies’ practices did not fully conform to the OMB principles, and agencies’ implementation of capital planning principles was mixed. Specifically, while agencies’ capital planning processes generally linked to their strategic goals and objectives and most of the agencies we reviewed had formal processes for ranking and selecting proposed capital investments, the agencies have had limited success with using agencywide asset inventory systems and data on asset condition to identify performance gaps. In addition, we found that none of the agencies had developed a comprehensive, agencywide, long-term capital investment plan. The agency capital investment plan is intended to explain the background for capital decisions and should include a baseline assessment of agency needs that examines existing assets, identifies gaps, and helps define an agency’s long-term investment decisions. In January 2004, we recommended that OMB begin to require that agencies submit long-term capital plans to OMB. Since that report was issued, VA—which was one of our initial case study agencies—issued its first 5-year capital plan. However, the results of follow-up work in this area showed that although OMB now encourages such plans, it does not collect them, and the agencies that were included in our follow-up review do not have agencywide long-term capital investment plans. OMB agreed that there are benefits from OMB review of agency long-term capital plans but said that these plans should be shared with OMB on an as-needed basis, depending on the specific issue being addressed and the need to view supporting materials. Shortcomings in the capital planning and decision-making area have clear implications for the administration’s real property initiative. 
Real property is one of the major types of capital assets that agencies acquire. Other capital assets include information technology, major equipment, and intellectual property. OMB staff said that agency asset management plans are supposed to align with the capital plans but that OMB does not assess whether the plans are in alignment. We found that guidance for the asset management plans does not discuss how these plans should be linked with agencies’ broader capital planning efforts outlined in the Capital Programming Guide. In fact, OMB’s asset management plan sample, referred to as the “shelf document,” which agencies use to develop the asset management plans, makes no reference to the guide. Without a clear linkage or crosswalk between the guidance for the two documents, there is less assurance that agencies will link them. Furthermore, there could be uncertainty with regard to how real property goals specified in the asset management plans relate to longer-term capital plans. The executive order on real property management and the addition of real property to the PMA have provided a good foundation for strategically managing federal real property and addressing long-standing problems. These efforts directly address the concerns we raised in past high-risk reports about the lack of a governmentwide focus on real property management problems and generally constitute what we envisioned as a transformation strategy for this area. However, these efforts are in the early stages of implementation, and the problems that led to the high-risk designation—excess property, repair backlogs, data issues, reliance on costly leasing, and security challenges—still exist. As a result, this area remains high risk until agencies show significant results in eliminating the problems by, for example, reducing inventories of excess facilities and making headway in addressing the repair backlog. Furthermore, the current efforts lack an overall framework for helping agencies ensure the validity of real property data in FRPP and do not adequately address the costliness of long-term leases and security challenges. While the administration has taken several steps to overcome some obstacles in the real property area, the obstacle posed by competing stakeholder interests has gone largely unaddressed, and the linkage between the real property initiative and broader agency capital planning efforts is not clear. Focusing on these additional areas could help ensure that the problems and obstacles are addressed. We made three recommendations to OMB’s Deputy Director for Management in our April 2007 report on real property high-risk issues. OMB agreed with the report and concurred with its recommendations. We recommended that the Deputy Director, in conjunction with FRPC, develop a framework that agencies can use to better ensure the validity and usefulness of key real property data in the FRPP. At a minimum, the framework would suggest standards for frequency of validation, validation methods, error tolerance, and reporting on reliability. OMB agreed with our recommendation and reported that it will work with the FRPC to take steps to establish and implement a framework. 
For our second recommendation to develop an action plan for how the FRPC will address key problems, OMB said that the FRPC is currently drafting a strategic plan for addressing long-standing issues such as the continued reliance on costly leasing in cases where ownership is more cost effective over the long-term, the challenge of securing real property assets, and reducing the effect of competing stakeholder interests on businesslike outcomes in real property decisions. OMB agreed that it is important to build upon the substantial progress that has been realized by both the FRPC and the federal real property community in addressing the identified areas for improvement. OMB said that it will share the strategic plan with us once it is in place and will discuss strategies for ensuring successful implementation. For our third recommendation to establish a clearer link or crosswalk between agencies’ efforts under the real property initiative and broader capital planning guidance, OMB stated that as agencies update their asset management plans and incorporate updated guidance on capital planning, progressive improvement in this area will be realized. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information on this testimony, please contact Mark Goldstein on (202) 512-2834 or at goldsteinm@gao.gov. Key contributions to this testimony were made by Anne Izod, Susan Michal-Smith, and David Sausville. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In January 2003, GAO designated federal real property as a high-risk area due to long-standing problems with excess and underutilized property, deteriorating facilities, unreliable real property data, and costly space challenges. Federal agencies were also facing many challenges protecting their facilities due to the threat of terrorism. This testimony is based largely on GAO's April 2007 report on real property high-risk issues (GAO-07-349). The objectives of that report were to determine (1) what progress the administration and major real property-holding agencies had made in strategically managing real property and addressing long-standing problems and (2) what problems and obstacles, if any, remained to be addressed. The administration and real property-holding agencies have made progress toward strategically managing federal real property and addressing long-standing problems. In response to the President's Management Agenda real property initiative and a related executive order, agencies have, among other things, established asset management plans; standardized data reporting; and adopted performance measures. Also, the administration has created a Federal Real Property Council (FRPC) and plans to work with Congress to provide agencies with tools to better manage real property. These are positive steps, but underlying problems still exist. For example, the Departments of Energy (Energy) and Homeland Security (DHS) and the National Aeronautics and Space Administration (NASA) reported that over 10 percent of their facilities are excess or underutilized. Also, Energy, NASA, the General Services Administration (GSA), and the Departments of the Interior (Interior), State (State), and Veterans Affairs (VA) reported repair and maintenance backlogs for buildings and structures that total over $16 billion. The Department of Defense (DOD) reported a $57 billion restoration and modernization backlog. Also, Energy, Interior, GSA, State, and VA reported an increased reliance on leasing to meet space needs. While agencies have made progress in collecting and reporting standardized real property data, data reliability is still a challenge at DOD and other agencies, and agencies lack a standard framework for data validation. Finally, agencies reported using risk-based approaches to prioritize security needs, which GAO has suggested, but some cited obstacles such as a lack of resources for security enhancements. In past high-risk updates, GAO called for a transformation strategy to address the long-standing problems in this area. While the administration's approach is generally consistent with what GAO envisioned, certain areas warrant further attention. Specifically, problems are exacerbated by underlying obstacles that include competing stakeholder interests, legal and budgetary limitations, and the need for improved capital planning. For example, agencies cited local interests as barriers to disposing of excess property, and agencies' limited ability to pursue ownership leads them to lease property that may be more cost-effective to own over time.
By 1986, recruit quality was at historically high levels. All services had met or exceeded their overall enlistment objectives for percentages of recruits who held high school diplomas and scored in the top categories on the test taken to qualify for military service. Specifically, the percentage of recruits with high school diplomas increased from 72 percent during the 1964-73 draft period to 92 percent in 1986. Also, 64 percent of new recruits in 1986 scored in the upper 50th percentile of the Armed Forces Qualification Test, up from 38 percent in 1980. The services’ success in recruiting high quality enlistees continued through the 1980s and into the 1990s, with the percentage of high school graduates reaching a high of 99 percent in 1992 and the percentage of those scoring in the upper half of the Armed Forces Qualification Test peaking in 1991 at 75 percent. Studies of attrition have consistently shown that persons with high school diplomas and Armed Forces Qualification Test scores in the upper 50th percentile have lower first-term attrition rates. For example, for those who entered the services in fiscal year 1992 and had high school diplomas, the attrition rate was 33.1 percent. For persons with 3 or 4 years of high school and no diploma, the rate was 38.9 percent; and for those with General Education Development certificates, the attrition rate was 46.3 percent. Similarly, those who scored in the highest category, category I, of the Armed Forces Qualification Test had an attrition rate of 24.7 percent, and those in category IVA had a rate of 40.7 percent. Increases in the quality of DOD’s recruits since the 1970s, coupled with the lower attrition rates of those considered “high quality” recruits, logically should have resulted in lower first-term attrition rates throughout the services. However, factors other than education and Armed Forces Qualification Test scores appear to be influencing the early separation of recruits. First-term enlisted attrition has remained at 29 to 39 percent since 1974. For enlistees who entered the services in fiscal year 1992, first-term attrition was 33.2 percent. The Army’s attrition was the highest of all the services, at 35.9 percent, followed by the Marine Corps at 32.2 percent, the Navy at 32 percent, and the Air Force at 30 percent. The highest portion of attrition occurs during the early months of enlistees’ first terms. Of enlistees who entered the services in fiscal year 1992, 11.4 percent were separated in their first 6 months of service. Attrition was fairly evenly distributed over the remaining period of enlistees’ first terms. The rate was 3.4 percent for those with 7 to 12 months of service, 7.3 percent for those with 13 to 24 months of service, 6 percent for those with 25 to 36 months of service, and 5 percent for those with 37 to 48 months of service. On the basis of DOD-provided cost data, we estimated that in fiscal year 1996, DOD and the services spent about $390 million to enlist personnel who never made it to their first duty stations. Of this total cost, which includes the cost of DOD’s training and recruiting infrastructure, about $4,700 was spent to transport each recruit to basic training; to pay, feed, house, and provide medical care for the recruit while at basic training; and to transport the separated recruit home. We estimated that if the services could reduce their 6-month enlisted attrition by 10 percent, their short-term savings would be $12 million, and their long-term savings could be as high as $39 million. 
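The cost and savings estimates above rest on straightforward arithmetic: a per-recruit cost applied to the number of enlistees who separate within their first 6 months, with a 10 percent drop in that attrition treated as avoiding a proportional share of those costs. The Python sketch below illustrates the general form of such a calculation; the cohort size is a hypothetical input for illustration, and the result is not GAO's estimate, which also reflected training and recruiting infrastructure costs.

```python
# Illustrative sketch of an attrition cost calculation, using the reported
# per-recruit figure of about $4,700 with hypothetical inputs for the rest.
PER_RECRUIT_COST = 4_700           # reported cost to process and separate one recruit
hypothetical_cohort = 190_000      # hypothetical number of new enlistees in a year
six_month_attrition_rate = 0.114   # early-separation rate reported for FY 1992 entrants

early_separations = hypothetical_cohort * six_month_attrition_rate
direct_cost = early_separations * PER_RECRUIT_COST
print(f"Early separations: {early_separations:,.0f}")
print(f"Direct per-recruit costs: ${direct_cost / 1e6:,.1f} million")

# A 10 percent reduction in 6-month attrition avoids 10 percent of those costs.
savings = 0.10 * direct_cost
print(f"Savings from a 10 percent reduction: ${savings / 1e6:,.1f} million")
```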
DOD and the services need a better understanding of the reasons for early attrition to identify opportunities for reducing it. Currently, available data on attrition does not permit DOD to pinpoint the precise reasons that enlistees are departing before completing their training. While the data indicates general categories of enlisted separations based on the official reasons for discharge, it does not provide DOD and the services with a full understanding of the factors contributing to the attrition. For example, of the 25,430 enlistees who entered the services in fiscal year 1994 and were discharged in their first 6 months, the data showed 7,248 (or 29 percent) had failed to meet minimum performance criteria, 6,819 (or 27 percent) were found medically unqualified for military service, 3,643 (or 14 percent) had character or behavior disorders, and 3,519 (or 14 percent) had fraudulently entered the military. These figures were based on data maintained by the Defense Manpower Data Center and collected from servicemembers’ DD-214 forms, which are their official certificates of release or discharge from active duty. Because the services interpret the separation codes that appear on the forms differently and because only the official reason for the discharge is listed, the Data Center’s statistics can be used only to indicate general categories of separation. Therefore, DOD does not have enough specific information to fully assess trends in attrition. In an attempt to standardize the services’ use of these codes, DOD issued a list of the codes with their definitions. However, it has not issued implementing guidance for interpreting these definitions, and the services’ own implementing guidance differs on several points. For example, if an enlistee intentionally withholds medical information that would disqualify him or her and is then separated for the same medical condition, the enlistee is discharged from the Air Force and the Marine Corps for a fraudulent enlistment. The Army categorizes this separation as a failure to meet medical/physical standards unless it can prove that the enlistee withheld medical information with the intent of gaining benefits. The Air Force and the Marine Corps do not require this proof of intent. The Navy categorizes this separation as an erroneous enlistment, which indicates no fault on the part of the enlistee. To enable DOD and the services to more completely analyze the reasons for attrition and to set appropriate targets for reducing it, we recommended that DOD issue implementing guidance for how the services should apply separation codes to provide a reliable database on reasons for attrition. In the absence of complete data on why first-term attrition is occurring, we examined the various pre-enlistment screening processes that correspond to the types of separations that were occurring frequently. For example, because a significant number of enlistees were being separated for medical problems and for fraudulent entry, we focused our work on recruiting and medical examining processes that were intended to detect problems before applicants are enlisted. These processes involve many different military personnel. Recruiters, staff members at the Military Entrance Processing Stations, drill instructors at basic training, instructors at follow-on technical training schools, and duty-station supervisors are all involved in transforming civilians into productive servicemembers. 
The process begins when the services first identify and select personnel to serve as recruiters. It continues when recruiters send applicants to receive their mental and physical examinations at the Military Entrance Processing Stations, through the period of up to 1 year while recruits remain in the Delayed Entry Program, and through the time recruits receive their basic and follow-on training and begin work in their first assignments. Reexamining the roles of all persons involved in this continuous process is in keeping with the intent of the Government Performance and Results Act of 1993, which requires agencies to clearly define their missions, to set goals, and to link activities and resources to those goals. Recruiting and retaining well-qualified military personnel is among the goals included in DOD’s strategic plan required under this act. As a part of this reexamination, we have found that recruiters did not have adequate incentives to ensure that their recruits were qualified and that the medical screening processes did not always identify persons with preexisting medical conditions. We believe that the services should not measure recruiting success simply by the number of recruits who sign enlistment papers stating their intention to join a military service but also by the number of new recruits who go on to complete basic training. We also believe that the services’ mechanisms for medically screening military applicants could be improved. We found that recruiters did not have adequate incentives to ensure that their recruits were qualified. Accordingly, we have identified practices in each service that we believe would enhance recruiters’ performance and could be expanded to other services. Specifically, in our 1998 report on military recruiting, we reported that the services were not optimizing the performance of their recruiters for the following reasons: The Air Force was the only service that required personnel experienced in recruiting to interview candidates for recruiter positions. In contrast, many Army and some Marine recruiting candidates were interviewed by personnel in their chain of command who did not necessarily have recruiting experience. The Navy was just beginning to change its recruiter selection procedures to resemble those of the Air Force. The Air Force was the only service that critically evaluated the potential of candidates to be successful recruiters by judging their ability to communicate effectively and by using a screening test. The Army, the Marine Corps, and the Navy tended to focus more on candidates’ past performance in nonrecruiting positions. Only the Marine Corps provided recruiter trainees with opportunities to interact with drill instructors and separating recruits to gain insight into ways to motivate recruits in the Delayed Entry Program. This interaction was facilitated by the Marine Corps’ collocation of the recruiter school with one of its basic training locations. Only the Marine Corps conducted regular physical fitness training for recruits who were waiting to go to basic training, though all of the services gave recruits in the Delayed Entry Program access to their physical fitness facilities and encouraged recruits to become or stay physically fit. Only the Marine Corps required all recruits to take a physical fitness test before reporting to basic training, though it is well known that recruits who are not physically fit are less likely to complete basic training. 
Only the Marine Corps’ and the Navy’s incentive systems rewarded recruiters when their recruits successfully completed basic training. The Army and the Air Force focused primarily on the number of recruits enlisted or the number who reported to basic training. Recruiters in all of the services generally worked long hours, were able to take very little leave, and were under almost constant pressure to achieve their assigned monthly goals. A 1996 DOD recruiter satisfaction survey indicated that recruiter success was at an all-time low, even though the number of working hours had increased to the highest point since 1989. For example, only 42 percent of the services’ recruiters who responded to the survey said that they had met assigned goals for 9 or more months in the previous 12-month period. To improve the selection of recruiters and enhance the retention of recruits, we recommended that the services (1) use experienced field recruiters to personally interview all potential recruiters, use communication skills as a key recruiter selection criterion, and develop or procure personality screening tests that can aid in the selection of recruiters; (2) emphasize the recruiter’s role in reducing attrition by providing opportunities for recruiter trainees to interact with drill instructors and separating recruits; (3) encourage the services to incorporate more structured physical fitness training for recruits into their Delayed Entry Programs; (4) conduct physical fitness tests before recruits report to basic training; (5) link recruiter rewards more closely to recruits’ successful completion of basic training; and (6) encourage the use of quarterly floating recruitment goals as an alternative to the services’ current systems of monthly goals. We have also found areas in which the medical screening of enlistees could be improved. Specifically, DOD’s medical screening processes did not always identify persons with preexisting medical conditions, and DOD and the services did not have empirical data on the cost-effectiveness of waivers or medical screening tests. In summary, the services did not have adequate mechanisms in place to increase the likelihood that the past medical histories of prospective recruits would be accurately reported; DOD’s system of capturing information on medical diagnoses did not allow it to track the success of recruits who received medical waivers; the responsibility for reviewing medical separation cases to determine whether medical conditions should have been detected at the Military Entrance Processing Stations resided with the Military Entrance Processing Command, the organization responsible for the medical examinations; and the Navy and the Marine Corps did not test applicants for drugs at the Military Entrance Processing Stations but waited until they arrived at basic training. 
To improve the medical screening process, we recommended that DOD (1) require all applicants for enlistment to provide the names of their medical insurers and providers and sign a release form allowing the services to obtain past medical information; (2) direct the services to revise their medical screening forms to ensure that medical questions for applicants are specific, unambiguous, and tied directly to the types of medical separations most common for recruits during basic and follow-on training; (3) use a newly proposed DOD database of medical diagnostic codes to determine whether adding medical screening tests to the examinations given at the Military Entrance Processing Stations and/or providing more thorough medical examinations to selected groups of applicants could cost-effectively reduce attrition at basic training; (4) place the responsibility for reviewing medical separation files, which resided with the Military Entrance Processing Command, with an organization completely outside the screening process; and (5) direct all services to test applicants for drugs at the Military Entrance Processing Stations. In its National Defense Authorization Act for Fiscal Year 1998 (P.L. 105-85), the Congress adopted all recommendations contained in our 1997 report on basic training attrition, except for our recommendation that all the services test applicants for drug use at the Military Entrance Processing Stations, which the services had already begun to do. Specifically, the act directed DOD to, among other things, (1) strengthen recruiter incentive systems to thoroughly prescreen candidates for recruitment, (2) include as a measurement of recruiter performance the percentage of persons enlisted by a recruiter who complete initial combat training or basic training, (3) improve medical prescreening forms, (4) require an outside agency or contractor to annually assess the effectiveness of the Military Entrance Processing Command in identifying medical conditions in recruits, (5) take steps to encourage enlistees to participate in physical fitness activities while they are in the Delayed Entry Program, and (6) develop a database for analyzing attrition. The act also required the Secretary of Defense to (1) improve the system of pre-enlistment waivers and assess trends in the number and use of these waivers between 1991 and 1997; (2) ensure the prompt separation of recruits who are unable to successfully complete basic training; and (3) evaluate whether partnerships between recruiters and reserve components, or other innovative arrangements, could provide a pool of qualified personnel to assist in the conduct of physical training programs for new recruits in the Delayed Entry Program. DOD and the services have taken many actions in response to our recommendations and the requirements in the Fiscal Year 1998 Defense Authorization Act. However, we believe that it will be some time before DOD sees a corresponding drop in enlisted attrition rates, and we may not be able to precisely measure the effect of each particular action. While we believe that DOD’s and the services’ actions combined will result in better screening of incoming recruits, we also believe that further action is needed. 
As of January 1998, DOD reported that the following changes have been made in response to the recommendations in our 1997 report: (1) the Military Entrance Processing Command is formulating procedures to comply with the new requirement to obtain from military applicants the names of their medical insurers and health care providers; (2) the Accession Medical Standards Working Group has created a team to evaluate the Applicant Medical Prescreening Form (DD Form 2246); (3) DOD has adopted the policy of using codes from the International Classification of Diseases on all medical waivers and separations and plans to collect this information in a database that will permit a review of medical screening policies; (4) DOD plans to form a team made up of officials from the Office of the Assistant Secretary of Defense (Health Affairs) and the Office of Accession Policy to conduct semiannual reviews of medical separations; and (5) all services are now testing applicants for drugs at the Military Entrance Processing Stations. We believe that these actions should help to improve the medical screening of potential recruits and result in fewer medical separations during basic training. In its response to our 1998 report on recruiting, DOD stated that it concurred with our recommendations and would take action to (1) develop or procure assessment tests to aid in the selection of recruiters and (2) link recruiter rewards more closely to recruits' successful completion of basic training. The Office of the Assistant Secretary of Defense for Force Management Policy is planning to work with the services to evaluate different assessment screening tests. This office will also ensure that all services incorporate recruits' success in basic training into recruiter incentive systems. We understand that DOD plans to form a joint service working group to address the legislative requirements enacted in the National Defense Authorization Act for Fiscal Year 1998. Specifically, the working group will be tasked with devising a plan to satisfy the legislative requirements for DOD and the services to (1) improve the system of separation codes, (2) develop a reliable database for analyzing reasons for attrition, (3) adopt or strengthen incentives for recruiters to prescreen applicants, (4) assess recruiters' performance in terms of the percentage of their enlistees who complete initial combat training or basic training, (5) assess trends in the number and use of waivers, and (6) implement policies and procedures to ensure the prompt separation of recruits who are unable to complete basic training. We believe that the steps DOD and the services have taken thus far could do much to reduce attrition. It appears that the soon-to-be-formed joint service working group can do more. As the group begins its work, we believe that it needs to address the following six areas in which further action is needed. First, we believe that DOD's development of a database on medical separations is a necessary step to understanding the most prevalent reasons for attrition. However, we believe that DOD needs to develop a similar database on other types of separations. Until DOD has uniform and complete information on why recruits are being separated early, it will have no basis for determining how much it can reduce attrition. Also, in the absence of the standardized use of separation codes, cross-service comparisons cannot be made to identify beneficial practices in one service that might be adopted by other services.
Second, we believe that all the services need to increase emphasis on the use of experienced recruiters to personally interview all potential recruiters or explore other options that would produce similar results. DOD agreed with the general intent of this recommendation but stated that it is not feasible in the Army due to the large number of men and women who are selected annually for recruiting duty and to the geographic diversity in their assignments. While it may be difficult for the Army to use field recruiters to interview 100 percent of its prospective recruiters, we continue to believe that senior, experienced recruiters have a better understanding of what is required for recruiting duty than operational commanders. Third, we believe that an ongoing dialogue between recruiters and drill instructors is critical to enhancing recruiters' understanding of problems that lead to early attrition. DOD concurred with our recommendation to have recruiter trainees meet with drill instructors and recruits being separated or held back due to poor physical conditioning. However, the Air Force has no plans to change its policy of devoting only 1 hour of its recruiter training curriculum to a tour of its basic training facilities. We believe this limited training falls short of the intent of our recommendation. Fourth, we believe that the services should incorporate more structured physical fitness training into their Delayed Entry Programs. All the services are encouraging their recruits to become physically fit, but there are concerns about the services' liability should recruits be injured while they are awaiting basic training. DOD is currently investigating the extent to which medical care can be provided for recruits who are injured while in the Delayed Entry Program. Fifth, we believe that, like the Marine Corps, the other services should administer a physical fitness test to recruits before they are sent to basic training. DOD concurred with this recommendation, and the Army is in the process of implementing it. The Navy and the Air Force, however, do not yet have plans to administer a physical fitness test to recruits in the Delayed Entry Program. Finally, we continue to believe that the services need to use quarterly floating goals for their recruiters. DOD did not fully concur with our recommendation on quarterly floating goals. DOD believes that floating quarterly goals would reduce the services' ability to make corrections to recruiting difficulties before they become unmanageable. We believe, however, that using floating quarterly goals would not prevent the services from managing their accessions. The floating quarterly goals we propose would not be static. Each recruiter's goals would simply be calculated based on a moving 3-month period. This floating goal would continue to provide recruiting commands with the ability to identify recruiting shortfalls in the first month that they occur and to control the flow of new recruits into the system on a monthly basis. At the same time, such a system has the potential of providing recruiters with some relief from the problems that were identified in the most recent recruiter satisfaction survey. Mr. Chairman, this concludes my prepared statement. We would be happy to respond to any questions that you or the other Members of the Subcommittee may have.
GAO discussed its work on the attrition and recruiting of the military services' enlisted personnel, focusing on: (1) the historical problem of attrition and its costs; (2) the Department of Defense's (DOD) lack of complete data on why enlistees are being separated early; (3) GAO's recommendations on ways to improve the screening of recruiters and recruits; and (4) DOD's actions thus far to respond to GAO's recommendations. GAO noted that: (1) despite increases in the quality of DOD's enlistees, about one-third of all new recruits continue to leave the military service before they fulfill their first term of enlistment; (2) this attrition rate is costly in that the services must maintain infrastructure to recruit and train around 200,000 persons per year; (3) solving the problem of attrition will not be simple in large part because DOD does not have complete data on why enlisted personnel are being separated; (4) in GAO's work, it has concentrated on what it has found to be major categories of separation, such as medical problems and fraudulent enlistments; (5) because these types of separations involve services' entire screening processes, GAO has reexamined these processes from the time recruiters are selected, through the time that applicants are prescreened by recruiters, through the medical examinations applicants undergo, and through physical preparation of recruits for basic training; (6) the process of attracting quality recruits and retaining them involves many service entities and many processes; (7) GAO has recommended ways to improve the: (a) data DOD collects to analyze reasons for attrition; (b) services' criteria for selecting recruiters; (c) incentive systems for recruiters to enlist persons who will complete basic training; and (d) services' mechanisms for identifying medical problems before recruits are enlisted; (8) many of these recommendations have been incorporated into the National Defense Authorization Act for Fiscal Year 1998; (9) DOD and the services have already taken some positive steps in response to GAO's recommendations and the National Defense Authorization Act; and (10) however, GAO believes that DOD needs to take further action to change the criteria by which recruiters are selected, provide recruiters with more opportunities to interact with drill instructors, and revise recruiters' incentive systems to improve their quality of life.
Behavioral health conditions—including those related to mental health and substance use—affect a substantial number of adults in the United States. The Substance Abuse and Mental Health Services Administration (SAMHSA) estimated that in 2015 about 43 million adults (18 percent) had a mental health condition—including about 10 million adults (4 percent) with a serious mental illness—and about 20 million (8 percent) had a substance use condition. Examples of common mental health conditions include anxiety disorders, such as phobias and post-traumatic stress disorder, and mood disorders, such as depression and bipolar disorder. Examples of common substance use conditions include alcohol use disorder and opioid use disorder. There is substantial overlap between individuals with mental health and substance use conditions; about 8 million adults had both types of conditions, also referred to as co- occurring conditions. Individuals with behavioral health conditions also experience higher rates of physical health conditions. Low-income individuals, such as those enrolled in Medicaid, are at greater risk for developing behavioral health conditions. In 2015, a greater percentage of individuals covered by Medicaid experienced mental health conditions and co-occurring conditions than individuals with private insurance. Treatment for behavioral health conditions can help individuals reduce their symptoms, improve their ability to function, and avoid the potential consequences of untreated conditions, such as worsening health, reduced educational attainment, loss of employment, and involvement with the justice system. Treatment for behavioral health conditions can include behavioral health services, prescription drugs, or a combination of both. Behavioral health services include diagnostic services, which involve the collection and evaluation of information to determine the nature and extent of behavioral health problems, and psychosocial therapies, such as psychotherapy. Psychotherapy—also referred to as counseling or “talk therapy”—typically involves regular visits with a provider focused on helping individuals understand, reduce, and manage their symptoms. Prescription drugs may also be used to treat both mental health and substance use conditions. SAMHSA estimated that in 2015, more adults used a mental health medication (12 percent) than received outpatient mental health treatment (7 percent). A common type of drug used to treat mental health conditions is antidepressants, which treat depression as well as other conditions, such as anxiety. For certain substance use conditions, individuals may receive medication-assisted treatment (MAT), which involves the use of medications in conjunction with behavioral health services, such as psychotherapy. According to SAMHSA, the use of medications like methadone, buprenorphine, and naltrexone for individuals with opioid use disorders can help them more fully engage in their recovery. One potential barrier to accessing treatment is a shortage of qualified behavioral health professionals, particularly in rural areas. According to the Health Resources and Services Administration, there were more than 4,500 mental health professional shortage areas in the United States as of April 2017, containing about a third of the American population (about 109 million people). Over half of these shortage areas were in rural or partially rural locations. 
We previously reported that states were taking a number of steps to address behavioral health workforce shortages, such as providing Medicaid reimbursement for telehealth services. Telehealth services allow a patient in a rural location to interact with a medical provider through interactive video conferencing. Research has suggested that telehealth services are particularly effective for specialties such as mental health that involve mostly verbal interaction rather than physical examination. CMS and states jointly fund and administer the Medicaid program, and states have flexibility within broad federal parameters for designing and implementing their Medicaid programs. For example, state Medicaid programs must cover certain mandatory populations and benefits, but states may choose to also cover other optional populations and benefits. Traditionally, Medicaid did not require states to include behavioral health services in their Medicaid programs; however, all state Medicaid programs provided some behavioral health services. Likewise, states were not required to include coverage for prescription drugs in their Medicaid programs, but all states did. Under PPACA, most expansion enrollees must be covered under an alternative benefit plan, which must cover 10 essential health benefits categories. Mental health and substance use services, including behavioral health treatment, and prescription drugs are 2 of the 10 essential health benefits categories. Medicaid is the largest source of funding for behavioral health treatment in the nation, with spending estimated at about $53 billion for 2014. Prior to 2014—when states had the option to expand Medicaid to all adults up to 138 percent of the FPL—states had varying levels of coverage available for low-income, uninsured adults. For example, the four states we selected had the following coverage available. Iowa had coverage available for low-income adults up to 200 percent of the FPL under a Medicaid waiver, but coverage did not include behavioral health treatment. New York provided Medicaid benefits to low-income, childless adults up to 100 percent of the FPL. Enrollees with incomes up to about 78 percent of the FPL were served through traditional Medicaid. Enrollees above this income level and up to 100 percent of the FPL were covered under New York’s Family Health Plus program, which was implemented in 2001 through a Medicaid waiver. Washington expanded Medicaid as of January 3, 2011, as part of PPACA’s early expansion option. Although the state covered enrollees up to 138 percent of the FPL, enrollment was limited to around 41,000 individuals who were previously enrolled in Basic Health, a state-funded health coverage program for adults up to 200 percent of the FPL with capped enrollment. West Virginia did not have Medicaid coverage for low-income, childless adults prior to its Medicaid expansion in 2014. As a result, the extent to which expansion enrollees were newly eligible for Medicaid coverage in 2014 varied among our selected states. For example, while most Medicaid expansion enrollees in New York were previously eligible for coverage under the state’s pre-PPACA Medicaid program, all expansion enrollees in West Virginia were newly eligible. See figure 1 for information on the size of each state’s Medicaid expansion population, the percent who were newly eligible, as well as other state characteristics. 
States that expanded Medicaid could choose different delivery systems to provide benefits to expansion enrollees, such as fee-for-service or managed care. Under a fee-for-service model, states pay providers for each covered service for which the providers bill the state. Under a managed care model, states contract with managed care organizations to provide or arrange for medical services, and prospectively pay the plans a fixed monthly fee per enrollee. States that provide Medicaid benefits through managed care may contract with separate companies to manage medical and behavioral health benefits, often referred to as “carving out” behavioral health benefits. See table 1 for information on Medicaid coverage of physical and behavioral health benefits for expansion enrollees in our selected states for 2014. Although Medicaid is the largest source of funding for behavioral health treatment in the nation, states have historically also had a large role in funding behavioral health services through programs other than Medicaid, especially for low-income, uninsured adults. In addition, states may use SAMHSA-administered mental health and substance use block grants to design and support a variety of treatments for individuals with behavioral health conditions. As we previously reported, some states that did not expand Medicaid provided behavioral health treatment to priority populations to focus care on adults with the most serious conditions and used waitlists for those with more modest behavioral health needs. We also reported that the Medicaid expansion states we examined generally reported an increase in the availability of behavioral health treatment for previously uninsured low-income adults who enrolled in Medicaid, particularly in states that had no prior coverage available for this population. Across our four selected states in 2014, from 17 to 25 percent of expansion enrollees were diagnosed with a behavioral health condition. Diagnoses of mental health conditions were more common than diagnoses of substance use conditions. The distribution of expansion enrollees with a behavioral health diagnosis by gender and age was generally similar across states. Behavioral health diagnoses among expansion enrollees ranged from 17 to 25 percent across our selected states in 2014, with mental health conditions being more common than substance use conditions. From 11 to 20 percent of expansion enrollees were diagnosed with a mental health condition, compared with 6 to 8 percent diagnosed with a substance use condition. (See table 2.) However, patterns of specific mental health and substance use diagnoses were similar across selected states. The most common mental health condition categories were mood disorders, such as depression, and anxiety disorders, such as panic disorder. Among expansion enrollees diagnosed with a substance use condition, a greater percentage were diagnosed with a substance-related condition, such as cocaine dependence, compared with alcohol-related conditions. From 1 to 3 percent of all expansion enrollees were diagnosed with opioid abuse or dependence, a subset of substance-related conditions. (See app. III for more information on this group of enrollees.) The distribution of expansion enrollees with diagnosed behavioral health conditions by age and gender was generally similar across the selected states. Expansion enrollees with diagnosed behavioral health conditions were fairly evenly divided among age groups across all selected states. (See fig. 2.) 
Women accounted for a larger percentage of enrollees with diagnosed behavioral health conditions in three of the four selected states—Iowa, Washington, and West Virginia. In New York, men accounted for 58 percent of expansion enrollees with diagnosed behavioral health conditions. Geographic location of enrollees in selected states varied, with some states having a greater proportion of rural enrollees. The geographic location of expansion enrollees diagnosed with behavioral health conditions was consistent with the more general urban/rural distribution of residents in these states. State officials discussed efforts to meet the behavioral health needs of rural residents, who may have difficulty accessing care, because of the need to travel long distances to access relatively fewer providers. For example, officials in Iowa, Washington, and West Virginia discussed the important role of telehealth services in allowing rural residents to access care for behavioral health conditions. Officials in Iowa also noted that the state has provided funding to help rural communities establish the infrastructure needed to host psychiatric telehealth appointments. In West Virginia, as of July 1, 2014, 85 percent of procedure codes in the Medicaid program were eligible for reimbursement when provided via telehealth. West Virginia officials also emphasized the role of Federally Qualified Health Centers, which can provide a “one-stop shop” for both medical and behavioral health treatment for residents in rural areas who have to travel long distances to access care. Use of behavioral health treatment—services and drugs to address mental health and substance use conditions—ranged from 20 to 34 percent in selected states in 2014. Among expansion enrollees who used a behavioral health service, the two most commonly used service categories were psychotherapy services and diagnostic services. Antidepressants were the most commonly used category among expansion enrollees who used a behavioral health drug. Use of behavioral health treatment ranged from 20 percent of expansion enrollees in New York to 34 percent in Iowa. (See table 3.) These rates exceeded the rates of diagnosed conditions presented above, in part, because prescription drugs are not recorded with diagnosis codes. Thus, enrollees who only used behavioral health prescription drugs—and no outpatient services—were not counted in the diagnosis totals. Rates of behavioral health prescription drug use were higher than the use of services across the four selected states. The higher rates of prescription drug use suggest that some enrollees received drugs without also receiving behavioral health services. Officials from one state commented that this may be appropriate for some conditions, such as mild depression, where a prescription drug may be adequate without accompanying counseling. In addition, some enrollees may have received evaluation and management services, which may have included treatment for behavioral health conditions, but which are not included in our measure of behavioral health treatment. Officials we spoke with from three of the selected states told us that expansion enrollees likely had greater access to behavioral health treatment after enrolling in Medicaid. Iowa officials noted that some county-based mental health agencies, which were responsible for serving uninsured residents as of 2014, had waiting lists for mental health services prior to the state expanding Medicaid. 
Washington officials said that Medicaid expansion had resulted in a significant increase in access to services for enrollees, particularly for less acute, community-based services for people who needed ongoing therapy. Officials explained that uninsured residents not eligible for Medicaid would generally rely on the state's Regional Support Networks—managed care entities responsible for providing mental health services for uninsured residents—which generally provided crisis services, or services for individuals with serious and persistent mental illnesses. Officials also noted that the expansion had resulted in more consistent access to behavioral health prescription drugs, because Medicaid covers such prescriptions with no copayment. Uninsured residents, according to the officials, would have been limited to charity programs from drug manufacturers or block-grant-funded prescriptions, neither of which consistently funds medications for everyone who needs them. Officials noted that consistent access to medications can make a big difference for individuals whose conditions are stable on medications, but unstable off medications. West Virginia officials said that access to behavioral health prescription drugs, particularly MAT for substance use conditions, increased for Medicaid expansion enrollees. West Virginia's charity care program for uninsured residents does not pay for behavioral health prescription drugs. Officials said that some uninsured residents may have relied on family members or may have sold personal belongings to afford their medications prior to Medicaid expansion. By contrast, there was less of a change for expansion enrollees in New York. Due to New York's Medicaid waiver program, which covered low-income, childless adults up to 100 percent of the FPL, most Medicaid expansion enrollees in New York were already eligible for Medicaid prior to 2014. One state official said that these enrollees would not have experienced a change in access to treatment, because New York's expansion coverage was modeled on its existing Medicaid coverage. The official said that newly eligible enrollees who were previously uninsured would have had access to state-licensed and funded behavioral health programs prior to enrollment. However, the official said that New York did not generally pay for behavioral health prescription drugs for uninsured individuals. Among the 9 to 16 percent of expansion enrollees who used a behavioral health service in the four selected states, the two most commonly used service categories were psychotherapy—regular visits with a provider to help a patient understand, reduce, and manage symptoms—and diagnostic services. Diagnostic services involve sessions with a provider designed to collect information to determine whether a patient has a behavioral health condition and to make a diagnosis, if appropriate. (See fig. 3.) In New York, substance-use-specific services were almost as common as diagnostic services. New York officials noted that the state has a more extensive array of specialty substance use services available than other states. For example, New York covers methadone administration through Medicaid, whereas West Virginia does not. Diagnostic services may have been less used in New York than in other states, because most of the expansion enrollees were not newly eligible; consequently, enrollees with behavioral health needs may have already been seen by a Medicaid provider and received a diagnosis prior to 2014.
We also examined use of evaluation and management services—more general medical visits with a physician or other medical provider—and found that 8 to 17 percent of expansion enrollees used this type of service. Although, by definition, evaluation and management services may address a wide range of physical or behavioral health conditions, we examined these services because some individuals may have received behavioral health treatment during these visits, including services provided by a psychiatrist. An evaluation and management visit with a psychiatrist, for example, may include prescribing or monitoring the effects of behavioral health prescription drugs. Evaluation and management services also encompass services provided by primary care physicians, who are often the first point of contact for individuals with conditions like depression. Our examination of emergency room use, which involved comparing rates of use among expansion enrollees with and without behavioral health diagnoses, found that up to 3 times as many enrollees with a behavioral health diagnosis had an emergency room visit compared to enrollees without such a diagnosis. From 42 to 57 percent of individuals with a behavioral health condition had an emergency room visit, compared with 13 to 32 percent of individuals without a behavioral health condition. (See table 4.) Most emergency room visits among enrollees with behavioral health conditions were not primarily for a behavioral health condition (81 to 92 percent across selected states). Our finding that emergency room use is more common among enrollees with behavioral health conditions is consistent with previous research, including research showing that Medicaid enrollees with a behavioral health condition typically have more complex health needs, including comorbid physical health conditions. Among expansion enrollees who used a behavioral health drug, antidepressants were the most commonly used category, and patterns of use by drug category were similar across our four selected states. From 67 to 78 percent of expansion enrollees who used a behavioral health drug took an antidepressant. Anti-anxiety medications, anticonvulsants, and antipsychotics were the next most commonly used categories, respectively, in three of the four selected states. (See fig. 4.) Together, these four drug categories accounted for upwards of 80 percent of total prescriptions in each state. The fifth most common drug category varied by state and included sedative/hypnotic medications in Iowa, smoking cessation medications in New York, and attention-deficit/hyperactivity disorder (ADHD) medications in West Virginia. Use of behavioral health prescription drugs was greater among women and enrollees aged 30 and over. (See table 5.) In all four selected states, a greater percentage of women received behavioral health prescription drugs than did men, ranging from 2 percentage points greater in New York to 13 percentage points greater in West Virginia. Use of behavioral health drugs was 8 to 12 percentage points greater among enrollees aged 30 and above compared with enrollees aged 19 to 29 across the four states. Clinical experts from QuintilesIMS noted that women in general have greater rates of prescription drug use, including non- behavioral-health drugs, and are also generally more likely to seek medical care than men. 
They also noted that although many behavioral health conditions first occur in late adolescence and early adulthood, there is a time lag between development of symptoms and treatment that may partially explain why more individuals aged 30 and over received drug treatment. Patterns of behavioral health prescription drug use by category also varied by gender and age in selected states. Gender: Among the drug categories with the biggest gender differences were antidepressants, used by more women than men, and antipsychotics, used by more men than women. Clinical experts from QuintilesIMS noted that the prevalence of depression is significantly higher in women than in men; in men there are more concerns about agitation and behavior management, which can be treated with antipsychotics. Age: Among the drug categories with the biggest age differences were ADHD medications, which had greater use among 19 to 29 year olds compared with enrollees aged 30 and older, and sedative/hypnotic medications, used by more enrollees aged 30 and over compared with enrollees aged 19 to 29. QuintilesIMS clinical experts said that the prevalence of ADHD declines rapidly in the late teen years, which may explain the lower use of these drugs in the older age group. The sedative/hypnotic drug category includes drugs that address insomnia, which is a condition that increases with age; this may partially explain the higher use of these drugs in the 30 and over age group. Over a quarter of expansion enrollees in the selected states who took a behavioral health prescription drug in 2014 took drugs from three or more different drug categories. From 25 to 31 percent of enrollees in selected states who took a behavioral health drug took drugs from three or more drug categories, and 4 to 6 percent took drugs from five or more categories. These enrollees may have used drugs from multiple categories at the same time to treat their conditions (concomitant use), or they may have filled prescriptions for these drugs at different points in time during 2014. QuintilesIMS clinical experts noted that a common example of concomitant use is taking both an antidepressant and a sleep medication, which are often prescribed together during initial treatment for depression. Regarding the use of different categories of drugs over time, experts said that sometimes when a patient does not improve after taking a drug, a physician may switch the patient to a drug from a different category. For example, a patient with bipolar disorder who is initially treated with an anticonvulsant may be switched to an antipsychotic if symptoms do not adequately resolve. We also found that more enrollees aged 30 and over used drugs from three or more categories than enrollees aged 19 to 29, which our clinical experts said could be partly because some behavioral health conditions become more difficult to treat with age, which may result in more drugs being prescribed. We provided a draft of this report to the Department of Health and Human Services (HHS) for review. HHS provided technical comments, which we incorporated as appropriate. As discussed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. To describe the population of Medicaid expansion enrollees with behavioral health diagnoses and their use of behavioral health treatment in 2014 in selected states, we analyzed enrollment, service utilization, and prescription drug data from the Medicaid Statistical Information System (MSIS) for calendar year 2014 for selected states. Our analysis consisted of the following three steps: (1) state selection, including assessing the reliability and usability of MSIS data; (2) enrollee, service, and drug identification; and (3) utilization analysis. We selected four states: Iowa, New York, Washington, and West Virginia. These were the only states that met the following criteria as of January 2016. The four selected states 1. were among the 25 states that expanded Medicaid as allowed under the Patient Protection and Affordable Care Act (PPACA) as of January 1, 2014; 2. had enrollment and utilization data for expansion enrollees in MSIS for all of calendar year 2014 that were sufficiently reliable for the purposes of our reporting objectives; and 3. had available information and documentation on Medicaid behavioral health benefits, and on how enrollment, service utilization, and prescription drug data were recorded for expansion enrollees. There were eight expansion states with data for all of calendar year 2014 in MSIS as of January 2016. In addition to our selected states, we reviewed information from Arkansas, Connecticut, New Jersey, and Vermont, but ultimately did not select these states for review for the following reasons. Arkansas was not selected because it implemented its expansion through premium assistance, also known as the private option, whereby the state pays premiums to purchase private insurance for enrollees through a state or federal exchange. According to the Centers for Medicare & Medicaid Services (CMS), states are not required to submit data on the utilization of services or drugs for enrollees who receive premium assistance; consequently, no data were available for our utilization analyses. Connecticut was not selected because the enrollment data in MSIS for 2014 were not reliable enough for the purpose of identifying its population of expansion enrollees. CMS officials told us that Connecticut had entered enrollment data incorrectly for the expansion population in 2014. New Jersey was not selected due to the lack of available information on behavioral health benefits and how its data were recorded in MSIS. Vermont was not selected because its Medicaid program had coverage for adults with incomes up to 150 percent of the federal poverty level (FPL) that pre-dated the enactment of PPACA. Iowa and New York were also missing key information in MSIS that we needed to identify all expansion enrollees, but we conducted our analyses after receiving the necessary data from the states directly. We limited our analysis of Iowa’s data to individuals with incomes at or below 100 percent of the FPL, because individuals with higher incomes were served through premium assistance as of 2014; consequently, there were no utilization data available for them. Our selected states are not representative of all expansion states and their Medicaid programs. 
In addition, a number of state-specific factors—such as differences in population health status and provider supply—could contribute to variation across our selected states, but attributing this variation to such factors was beyond the scope of this study. We assessed the reliability and usability of MSIS data for our purposes by interviewing knowledgeable federal and state officials; reviewing related documentation, such as studies that assessed the reliability of Medicaid data; comparing the results of our analysis of expansion enrollment to published enrollment figures from CMS; and testing the data for logical errors and missing information. Based on our assessment, we excluded data from Iowa for months in which expansion enrollees were served under comprehensive managed care, because the results of our reliability testing suggested missing data. This resulted in the exclusion of data from about 17 percent of expansion enrollees. We excluded data from Washington for months in which expansion enrollees were served under fee-for-service arrangements, because of missing diagnosis codes, which were needed for our analysis. This resulted in the exclusion of data from about 2 percent of expansion enrollees. Following these exclusions, we determined the data were sufficiently reliable for the purposes of our reporting objectives. Based on enrollment information in MSIS, supplemented by state-provided information from Iowa and New York, we restricted our analysis to nonpregnant adults aged 19-64 who were not eligible for Medicare and whose income did not exceed 138 percent of the FPL, i.e., the "new adult group" under Section 1902(a)(10)(A)(i)(VIII) of the Social Security Act. We included both newly eligible and not newly eligible expansion enrollees who were enrolled for at least one month in calendar year 2014 (i.e., ever-enrolled). We excluded the following individuals, who represented 3 percent or less of expansion enrollees across the four states: 1. Individuals who did not appear to be eligible for Medicaid expansion, because they were recorded as dually eligible for Medicare and Medicaid; were younger than 19 years of age as of December 31, 2014; or were older than 64 years of age as of January 1, 2014; and 2. Individuals with multiple dates of birth, multiple values for gender, or multiple values for MSIS identification number.
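The enrollee-identification step can be made concrete with a short sketch. The following is a minimal pandas example, not the program used for the analysis; the MSIS extract, column names (msis_id, eligibility_group, dual_eligible, and so on), and values are assumptions made for illustration.

```python
import pandas as pd

# Hypothetical MSIS enrollment extract; all column names and values are illustrative.
enrollment = pd.DataFrame({
    "msis_id":           ["A1", "A1", "B2", "C3", "D4"],
    "eligibility_group": ["new_adult"] * 5,
    "dual_eligible":     [False, False, False, True, False],
    "date_of_birth":     ["1980-05-01", "1980-05-01", "1996-02-10", "1970-01-01", "1949-06-30"],
    "gender":            ["F", "F", "M", "F", "M"],
    "enroll_month":      ["2014-03", "2014-04", "2014-07", "2014-01", "2014-09"],
})
enrollment["date_of_birth"] = pd.to_datetime(enrollment["date_of_birth"])

# Keep "new adult group" (expansion) enrollees with at least one month of 2014 enrollment.
expansion = enrollment[enrollment["eligibility_group"] == "new_adult"].copy()

# Exclusion 1: apparent ineligibility -- dually eligible for Medicare and Medicaid,
# younger than 19 as of December 31, 2014, or older than 64 as of January 1, 2014.
too_young = expansion["date_of_birth"] > pd.Timestamp("1995-12-31")
too_old = expansion["date_of_birth"] <= pd.Timestamp("1949-01-01")
expansion = expansion[~(expansion["dual_eligible"] | too_young | too_old)]

# Exclusion 2: inconsistent records -- multiple dates of birth or genders for one MSIS ID.
checks = expansion.groupby("msis_id").agg(
    n_dob=("date_of_birth", "nunique"), n_gender=("gender", "nunique"))
bad_ids = checks[(checks["n_dob"] > 1) | (checks["n_gender"] > 1)].index
expansion = expansion[~expansion["msis_id"].isin(bad_ids)]

# One row per ever-enrolled expansion enrollee.
cohort = expansion.drop_duplicates("msis_id")
print(cohort[["msis_id", "gender"]])
```

In practice, the same inclusion and exclusion rules would be applied to the full MSIS extract for each state before any utilization analysis.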
To describe the population of Medicaid expansion enrollees with behavioral health diagnoses in selected states in 2014, we analyzed enrollment and service utilization data in MSIS for each state. We considered an enrollee to have a diagnosed behavioral health condition if that enrollee received any outpatient services with a recorded diagnosis code for a behavioral health condition in 2014. The presence of such diagnosis codes on claims does not necessarily indicate that a clinical interview was conducted. In addition, because we measured behavioral health conditions based on outpatient service utilization data, our estimates do not include individuals with conditions who did not use outpatient services during 2014—such as individuals who used no services or who only used inpatient services—or those who used only behavioral health prescription drugs. Using the Agency for Healthcare Research and Quality's Clinical Classifications Software groupings, we further categorized mental health conditions into 11 categories: 1. adjustment disorders; 2. anxiety disorders; 3. attention-deficit, conduct, and disruptive behavior disorders; 4. delirium, dementia, and amnestic and other cognitive disorders; 5. developmental disorders; 6. disorders usually diagnosed in infancy, childhood, or adolescence; 7. impulse control disorders not elsewhere classified; 8. mood disorders; 9. personality disorders; 10. schizophrenia and other psychotic disorders; and 11. miscellaneous mental health disorders. We further categorized substance use disorders into substance-related (i.e., addiction to drugs like cocaine or heroin) and alcohol-related disorders. Among the substance-related conditions, we also identified opioid abuse and dependence as a unique category, and we selected these codes based on prior research on opioid treatment use in Medicaid. We considered enrollees to have a behavioral health condition if they had any diagnosis code within our selected range. Substance use conditions were all diagnosis codes within the Agency for Healthcare Research and Quality's substance-related and alcohol-related disorders categories, including opioid abuse and dependence, but excluding tobacco use disorder. While we considered tobacco use disorder to be a behavioral health condition, we did not consider it to be a substance use condition, which is consistent with how the Substance Abuse and Mental Health Services Administration collects and reports data on substance use. For the group of enrollees with a behavioral health condition, we used enrollment data to describe their characteristics; specifically, we examined age, gender, and geographic location. We measured enrollees' age based on date of birth and latest month of enrollment in 2014. Gender was determined as recorded in the relevant MSIS data field. We defined geographic location based on enrollees' zip code of residence using the most recent available rural-urban commuting area codes from the Department of Health and Human Services and the Department of Agriculture. The set of rural-urban commuting area codes has 10 tiers along the spectrum of rurality, each of which is further broken down into secondary codes. We used the four-tiered data consolidation recommended for analysis by the Washington State Department of Health.
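A rough sketch of how the diagnosis flags and enrollee characteristics described above might be derived is shown below. The diagnosis codes, category mapping, age bands, and RUCA rollup are illustrative placeholders rather than the actual code sets and crosswalks used in the analysis.

```python
import pandas as pd

# Toy outpatient claims; diagnosis codes and the category mapping stand in for the
# AHRQ Clinical Classifications Software groupings and are not the actual code sets.
claims = pd.DataFrame({
    "msis_id":      ["A1", "A1", "D4", "E5"],
    "dx_code":      ["296.20", "300.00", "304.00", "401.9"],
    "service_date": ["2014-02-11", "2014-06-03", "2014-09-15", "2014-05-20"],
})
dx_to_category = {
    "296.20": "mood disorders",
    "300.00": "anxiety disorders",
    "304.00": "substance-related disorders",
}
claims["bh_category"] = claims["dx_code"].map(dx_to_category)

# An enrollee counts as having a diagnosed behavioral health condition if any 2014
# outpatient claim carries a behavioral health diagnosis code.
bh_enrollees = (claims.dropna(subset=["bh_category"])
                      .groupby("msis_id")["bh_category"]
                      .agg(lambda s: sorted(set(s))))
print(bh_enrollees)

# Characteristics: age group as of the latest 2014 enrollment month and a simplified
# four-tier rollup of rural-urban commuting area (RUCA) codes by zip code of residence.
cohort = pd.DataFrame({"msis_id": ["A1", "D4"], "age": [34, 64], "ruca": [1.0, 10.0]})
cohort["age_group"] = pd.cut(cohort["age"], bins=[18, 29, 39, 49, 64],
                             labels=["19-29", "30-39", "40-49", "50-64"])
ruca_tiers = {1.0: "urban", 4.0: "large rural", 7.0: "small rural", 10.0: "isolated rural"}
cohort["geography"] = cohort["ruca"].map(ruca_tiers)  # illustrative subset of RUCA codes
print(cohort)
```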
To describe the use of behavioral health services among Medicaid expansion enrollees in selected states in 2014, we selected a set of behavioral health services for each state based on each state's coverage and utilization of services in 2014. We defined behavioral health services as outpatient screening, assessment, diagnostic, treatment, rehabilitation, and habilitation services used primarily or exclusively to evaluate and address the needs of individuals with behavioral health conditions. To identify behavioral health services, we reviewed the following for each state: (1) Medicaid provider manuals and other coverage documentation that contained the procedure codes and descriptions for covered services, and (2) a list of all services that were provided to expansion enrollees in calendar year 2014 that were recorded with a primary diagnosis of a behavioral health condition. We selected codes from these two sources that we determined to be behavioral-health-specific based on their descriptions. We gave each state the opportunity to review and comment on the list of services selected for analysis, and made revisions as appropriate based on their input. To further examine behavioral health service use by category, we reviewed the top 25 most-used services (by number of enrollees who used the service at least once) for each state. Based on service descriptions, we divided them into the following service categories: diagnostic services, psychotherapy services, rehabilitation and habilitation services, substance-use-specific services, and other services. To more fully examine service utilization patterns among expansion enrollees, we also examined evaluation and management services—more general medical visits with a physician or other medical provider—because some individuals may have received behavioral health treatment during these visits, including services provided by a psychiatrist. We limited our analysis of evaluation and management services to those visits recorded with a primary diagnosis of a behavioral health condition. However, because of the uncertainty of the extent to which behavioral health treatment was provided as part of evaluation and management services, we do not count them as behavioral health services, or include them in our overall definition of behavioral health treatment. We also examined outpatient emergency room visits—for any condition, not just a behavioral health condition—among expansion enrollees with and without a behavioral health diagnosis. Emergency room visits were of interest, because prior research has suggested that individuals who have a behavioral health condition may access emergency care more frequently than those without such conditions. For both behavioral health and evaluation and management services, we accounted for the possibility of duplicate claims or encounters by restricting our analysis to a single claim or encounter for the same service for the same patient on the same day, and by counting services with add-on codes as a single service. We excluded all inpatient and laboratory services from our analysis. In addition, because our analysis was limited to Medicaid claims and encounters, our results do not reflect the use of services not paid for by Medicaid, such as state- or grant-funded services.
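The de-duplication rule for claims and encounters can be illustrated in a few lines. The sketch below uses illustrative procedure codes and an assumed add-on flag; it is not the production logic.

```python
import pandas as pd

# Toy service records: one psychotherapy code billed twice for the same enrollee on the
# same day, plus an add-on code for the same visit. Codes and the add-on flag are illustrative.
services = pd.DataFrame({
    "msis_id":      ["A1", "A1", "A1", "D4"],
    "procedure":    ["90834", "90834", "90785", "90791"],
    "service_date": ["2014-06-03", "2014-06-03", "2014-06-03", "2014-09-15"],
    "is_addon":     [False, False, True, False],
})

# Count a single claim or encounter per enrollee, service, and day ...
deduped = services.drop_duplicates(subset=["msis_id", "procedure", "service_date"])

# ... and do not count add-on codes as separate services.
base_services = deduped[~deduped["is_addon"]]

# Number of enrollees who used each service at least once (the basis for reviewing the
# top 25 most-used services described above).
users_per_service = base_services.groupby("procedure")["msis_id"].nunique()
print(users_per_service)
```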
To examine behavioral health prescription drug use, we examined both filled prescriptions and services that included physician administration of drugs. We defined behavioral health prescription drugs as Food and Drug Administration (FDA)-approved drugs used, on- or off-label, to treat adults with behavioral health conditions in the United States as of 2014. We excluded drugs used to treat the side effects of other behavioral health prescription drugs, such as drugs for diabetes that may be used to address the metabolic effects of antipsychotic drugs. To identify behavioral health prescription drugs, we worked with a contractor—QuintilesIMS—that developed a list of behavioral health drugs based on drug reference information (i.e., how drugs are classified), survey data on prescribing patterns, and expert clinical opinion. From among drugs that were classified as psychotherapeutic or were identified as being prescribed to treat a behavioral health condition based on survey data, QuintilesIMS identified those drugs and drug categories that are primarily or exclusively used to treat behavioral health conditions. We counted these drugs as behavioral health prescription drugs whenever an expansion enrollee filled a prescription for them. QuintilesIMS also identified drugs that may be used for behavioral health purposes, but are also used to treat non-behavioral-health conditions. For this group of drugs, we used information about the characteristics of the drug, such as dose, form, or route of administration, if applicable, to identify whether they were likely to have been used for a behavioral health purpose. For drugs without characteristics that distinguished their use, we counted them as a behavioral health drug only when an individual had an outpatient service some time in 2014 that was recorded with a related behavioral health diagnosis (i.e., a behavioral condition that survey data identified as one a prescriber intended to treat by prescribing that drug). As a result, our analysis does not account for the use of these drugs by individuals who did not have an outpatient service in 2014 that included a relevant diagnosis code. See appendix I for the list of behavioral health drugs we included in our analyses. Based on our prior work and consultation with the contractor, we categorized the 126 behavioral health drugs on our list into 12 categories, including antidepressant combination medications, attention-deficit/hyperactivity disorder medications, sexual function disorder medications, smoking cessation medications, and substance use disorder medications. As part of our analysis of individuals with opioid abuse and dependence, we looked at the use of drugs used for medication-assisted treatment (MAT). We defined MAT drugs as drugs that are FDA-approved to treat opioid use disorder: methadone, buprenorphine, buprenorphine/naloxone, and naltrexone. For the purposes of our analysis, we counted methadone administration as a service rather than a prescription drug. We conducted interviews with officials from our four selected states to discuss behavioral health benefits for Medicaid expansion enrollees; how enrollment, service utilization, and prescription drug data were recorded in MSIS; officials' perspectives on the results of our analysis; and whether Medicaid expansion affected the availability of behavioral health treatment for expansion enrollees, relative to what was available for low-income, uninsured adults prior to the first year of expansion in 2014. We also interviewed a physician group specializing in addiction medicine and consulted with clinical experts from QuintilesIMS for additional perspectives on our results. We conducted our performance audit from November 2015 through June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In light of increased interest among policy makers in addressing the negative effects of opioid addiction, we focused a portion of our analyses on the subset of expansion enrollees diagnosed with opioid abuse or dependence. We analyzed Medicaid enrollment, service utilization, and prescription drug data for 2014 to determine the characteristics of expansion enrollees with diagnosed opioid abuse or dependence, including gender, age, and geographic location, as well as the extent to which these enrollees accessed medication-assisted treatment (MAT) or received outpatient services. We also considered the extent to which expansion enrollees diagnosed with opioid abuse or dependence received prescriptions for opioid pain medications following their diagnosis.
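One simple way to operationalize the "prescriptions following diagnosis" comparison is to compare each enrollee's first opioid abuse or dependence diagnosis date with the dates of later opioid pain medication fills. The sketch below is a hypothetical illustration of that kind of temporal check, not the analysis code itself.

```python
import pandas as pd

# Illustrative inputs: each enrollee's first opioid abuse/dependence diagnosis date and
# opioid pain medication fills. IDs and dates are hypothetical.
first_dx = pd.DataFrame({
    "msis_id":       ["A1", "B2"],
    "first_dx_date": pd.to_datetime(["2014-03-10", "2014-08-01"]),
})
opioid_fills = pd.DataFrame({
    "msis_id":   ["A1", "A1", "B2", "C3"],
    "fill_date": pd.to_datetime(["2014-02-01", "2014-05-20", "2014-07-15", "2014-04-02"]),
})

# Keep only fills on or after the enrollee's first recorded diagnosis; enrollees without
# a diagnosis (for example, C3) drop out in the inner merge.
merged = opioid_fills.merge(first_dx, on="msis_id", how="inner")
after_dx = merged[merged["fill_date"] >= merged["first_dx_date"]]

# Enrollees diagnosed with opioid abuse or dependence who filled an opioid pain
# medication prescription following that diagnosis.
print(sorted(after_dx["msis_id"].unique()))   # ['A1']
```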
While opioid pain medications can constitute proper medical care for enrollees suffering from painful conditions, their use among enrollees with previously diagnosed opioid abuse or dependence also raises concerns about potential inappropriate prescribing. Characteristics of expansion enrollees diagnosed with opioid abuse or dependence were generally similar across our four selected states. Across all the selected states, men represented a greater proportion of enrollees diagnosed with opioid abuse or dependence than women. (See fig. 5.) This was especially the case in New York, where 71 percent of expansion enrollees diagnosed with opioid abuse or dependence were men. Across all four selected states, those aged 19-29 and 30-39 accounted for a greater proportion of enrollees diagnosed with opioid abuse or dependence compared with older enrollees. The geographic location of enrollees diagnosed with opioid abuse or dependence varied based on state demographics more generally. Large proportions of expansion enrollees diagnosed with opioid abuse or dependence utilized outpatient services, while use of MAT was lower and varied greatly across our selected states. Outpatient services in our analysis include behavioral health services such as diagnostic services and psychotherapy, as well as evaluation and management services recorded with a primary behavioral health diagnosis. From 62 to 81 percent of expansion enrollees in the selected states diagnosed with opioid abuse or dependence received an outpatient service for a behavioral health condition in 2014. (See table 6.) Expansion enrollees with diagnosed opioid abuse or dependence received MAT at rates that varied widely across the selected states, from 11 to 41 percent. A physician group we interviewed noted that while not every enrollee diagnosed with opioid addiction is a candidate for MAT, its representatives would like to see all patients diagnosed with opioid addiction offered MAT as an option. We previously reported that factors that affect patients' access to MAT for opioid addiction include laws and regulations, the availability of qualified practitioners and their capacity to meet patient demand for MAT, and perceptions of MAT and its value among patients, practitioners, and institutions. Recent federal guidance and state actions seek to connect Medicaid enrollees diagnosed with opioid abuse or dependence to treatment. In January 2016, the Centers for Medicare & Medicaid Services released an informational bulletin outlining best practices for addressing prescription opioid overdose, misuse, and addiction, which recommends expanding the use of MAT. In interviews, state officials discussed ongoing efforts to ensure that enrollees diagnosed with opioid abuse or dependence receive appropriate treatment. Many of these efforts seek to increase the number of providers that can prescribe drugs for MAT. For example, Iowa is using a Certified Community Behavioral Health Clinics planning grant from the Substance Abuse and Mental Health Services Administration to train providers and increase the number of providers that can prescribe buprenorphine, a drug used for MAT. In Washington, officials discussed efforts to recruit primary care physicians to prescribe buprenorphine, which would make MAT more accessible to enrollees living in rural areas who might have to travel great distances to receive MAT from a clinic.
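The measure reported in the next paragraph, the share of diagnosed enrollees who were prescribed opioid pain medication following their diagnosis, involves comparing each enrollee's fill dates against the date of the earliest recorded opioid abuse or dependence diagnosis. The following is a minimal sketch of one way such a comparison could be computed, using hypothetical field names rather than the actual MSIS variables.

```python
import pandas as pd

# Hypothetical extracts; field names and dates are illustrative, not actual MSIS variables.
opioid_dx_services = pd.DataFrame({
    "enrollee_id":  ["A1", "A1", "B2"],
    "service_date": pd.to_datetime(["2014-02-10", "2014-07-01", "2014-05-20"]),
})
opioid_pain_fills = pd.DataFrame({
    "enrollee_id": ["A1", "B2"],
    "fill_date":   pd.to_datetime(["2014-03-15", "2014-04-01"]),
})

# Earliest opioid abuse or dependence diagnosis per enrollee.
first_dx = (opioid_dx_services
            .groupby("enrollee_id", as_index=False)["service_date"].min()
            .rename(columns={"service_date": "first_dx_date"}))

# Keep only opioid pain medication fills that occurred on or after the first diagnosis.
merged = opioid_pain_fills.merge(first_dx, on="enrollee_id")
after_dx = merged[merged["fill_date"] >= merged["first_dx_date"]]

share = after_dx["enrollee_id"].nunique() / first_dx["enrollee_id"].nunique()
print(f"{share:.0%} of diagnosed enrollees filled an opioid pain medication after diagnosis")
```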
The use of opioid pain medications was generally higher among the 1 to 3 percent of expansion enrollees diagnosed with opioid abuse or dependence compared with all other expansion enrollees. From 24 to 48 percent of expansion enrollees diagnosed with opioid abuse or dependence in selected states were prescribed opioid pain medication following their diagnosis, compared with 14 to 35 percent of all other expansion enrollees. (See table 7.) Authors of previous research on the use of opioid pain medication among individuals with opioid abuse or dependence have suggested that such use could reflect a lack of coordination between specialists providing addiction treatment and those treating patients for pain. However, representatives from a physician group we interviewed noted that patients diagnosed with opioid addiction suffer from the same issues that would result in an opioid prescription for patients without an addiction, such as recovery after surgery. In addition, those with opioid addiction face medical issues resulting from their addiction that may warrant treatment with pain medication. However, these representatives also emphasized that the prescription of opioids for someone diagnosed with opioid abuse or dependence is nonetheless a "medical crisis" that requires a high level of attention. In addition, representatives from this physician group advised providers to ensure that there is no better treatment option available and to follow up with patients regularly. Recent state actions seek to address and prevent opioid abuse and ensure appropriate prescribing of opioid pain medication. All four selected states have implemented some form of prescription drug monitoring program, although the extent to which providers are required to participate varies by state. According to state officials in New York and West Virginia, use of the prescription drug monitoring database is mandated for providers in those states. Iowa officials said that providers must review the state's prescription drug monitoring database before obtaining prior authorization from Medicaid to prescribe certain opioid pain medications, and Washington encourages, but does not require, providers to enroll in and use the state's database. The selected states have also initiated provider education campaigns. For example, officials in Iowa said the state has worked to educate providers and pharmacists by providing them with individual patient profiles and narcotics reports for patients with three or more prescribers, while West Virginia has provided feedback to providers about their prescribing patterns. In addition to the contact named above, William Black (Assistant Director), Hannah Locke (Analyst-in-Charge), Britt Carlson, Giselle Hicks, Drew Long, Diona Martyn, Sean Miskell, Vikki Porter, and Emily Wilson made key contributions to this report.
Behavioral health conditions disproportionately affect low-income populations. Treatment can improve individuals' symptoms and help avoid negative outcomes. The expansion of Medicaid to cover low-income adults in some states—authorized by the Patient Protection and Affordable Care Act (PPACA)—may have increased the demand for such treatment. However, little is known about the extent to which Medicaid expansion enrollees experienced behavioral health conditions or utilized treatment during the first year of expansion in 2014. GAO was asked to provide information about the utilization of behavioral health treatment among Medicaid expansion enrollees during the first year of expansion in 2014. For selected states in 2014, this report describes (1) the population of Medicaid expansion enrollees with behavioral health diagnoses, and (2) the use of behavioral health treatment among Medicaid expansion enrollees. GAO selected four expansion states—Iowa, New York, Washington, and West Virginia—based on, among other criteria, availability and reliability of Medicaid enrollment and utilization data. GAO analyzed Medicaid data on behavioral health diagnoses and treatment use for expansion enrollees for 2014, the most recent year available. GAO also reviewed documents and interviewed Medicaid officials from all selected states to understand how data were recorded, and how treatment for expansion enrollees compared with what was available prior to expansion. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate. In four selected states, from 17 to 25 percent of enrollees who were covered by state expansions of Medicaid—authorized by PPACA—had diagnosed behavioral health conditions (mental health and substance use conditions) in 2014. Mental health conditions were more common than substance use conditions; from 11 to 20 percent of expansion enrollees were diagnosed with a mental health condition, compared with 6 to 8 percent diagnosed with a substance use condition. The most common mental health condition category was mood disorders, such as depression. For substance use, substance-related conditions (e.g., addiction to drugs like opioids) were more prevalent than alcohol-related conditions. From 20 to 34 percent of expansion enrollees in the four selected states received behavioral health treatment in 2014, which includes outpatient services, such as psychotherapy, or prescription drugs. Treatment rates exceeded rates of diagnosed conditions, in part, because prescription drugs are not recorded with diagnosis codes. Thus, enrollees who only used behavioral health prescription drugs—and no outpatient services—were not counted in the diagnosis totals. The two most commonly used behavioral health service categories were psychotherapy services (visits with a provider aimed at reducing and managing symptoms) and diagnostic services, such as diagnostic evaluations. Antidepressants were the most commonly used behavioral health prescription drug category; over two-thirds of expansion enrollees who used a behavioral health drug took an antidepressant. Officials in three of the four selected states said that expansion enrollees likely had greater access to behavioral health treatment after enrolling in Medicaid.
Officials from Iowa, Washington, and West Virginia reported that, compared to being uninsured, expansion enrollees could more easily access treatment, such as community-based mental health services and behavioral health prescription drugs. Officials in New York said expansion enrollees experienced less of a change, because most of its enrollees were previously eligible for Medicaid.
While NARA's fiscal year 2011 expenditure plan meets four of the six legislative conditions, the lack of critical capital planning and oversight steps—including documentation demonstrating approval of significant changes to a recent ERA increment, post-implementation reviews of deployed capabilities, and OMB's approval of the expenditure plan—limits NARA's ability to ensure that the system is being implemented at an acceptable cost and within expected time frames and contributes to observable improvements in mission performance. These issues are further exacerbated by the agency's partial implementation of several open GAO recommendations, such as those related to improving investment oversight and earned value processes. With significant weaknesses in many basic oversight and management processes, as well as continued delays in completing Increment 3, NARA's ability to make significant development progress in the remainder of the fiscal year will be challenged. In addition, without a reliable ERA expenditure plan, NARA has not provided adequate information to assist congressional oversight and informed decision making related to the use of appropriated funds. When these weaknesses are combined with the lack of prioritization of the remaining requirements under negotiation for fiscal year 2011, Congress has little assurance that additional funds allocated to ERA will result in significant benefits to potential users. With OMB's direction to stop development after 2011, it is unclear whether NARA will be able to effectively address the full range of weaknesses we identified and still have adequate time to complete significant development efforts. The identified deficiencies in NARA's expenditure plan and management of the ERA acquisition make it unclear whether NARA can make substantial progress in delivering additional ERA system capabilities that justify its planned investment by the end of fiscal year 2011. As such, we suggest that Congress consider employing an accountability mechanism that limits NARA's ability to use funds appropriated for ERA development until NARA implements an adequate capital planning and investment control process, updates its expenditure plan to clearly describe what system capabilities and benefits are to be delivered in fiscal year 2011, and establishes an associated set of prioritized system requirements and adequate earned value reporting. We are recommending that the Archivist of the United States immediately take the following two actions while the current system development contract is active: Report to Congress on the specific outcomes to be achieved with the balance of any previous multiyear funds in fiscal year 2011. Ensure that the ERA requirements planned for fiscal year 2011 are fully prioritized so that those most critical to NARA's customers and other stakeholders are addressed. To ensure that any future efforts are completed within reasonable funding and time constraints, we are recommending that the Archivist of the United States take the following four actions: Ensure that significant changes to the ERA program's cost, schedule, and scope are approved through NARA's investment review process. Conduct post-implementation reviews of deployed ERA capabilities to validate estimated benefits and costs. Submit ERA expenditure plans to OMB for review and approval prior to submitting to Congress. Update the ERA Requirements Management Plan and related guidance to mandate requirements prioritization throughout the project's life-cycle.
In written comments on a draft of this report, which are reprinted in appendix II, the Archivist of the United States concurred with our six recommendations. Specifically, he stated that NARA has sufficiently addressed the first two recommendations. He further stated that NARA would be unable to address the final four recommendations in a near-term action plan since those were specific to a future ERA development effort. The Archivist also noted that NARA is developing an addendum to the fiscal year 2011 expenditure plan to provide updated information on ERA requirements, costs, and the schedule of software releases. We are sending copies of this report to the Archivist of the United States. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-9286 or by e-mail at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. On October 1, 2010, the agency submitted its fiscal year 2011 expenditure plan to the relevant House and Senate appropriations committees to support its request for $85.5 million in ERA funding. Of this amount, $61.8 million consists of multiyear funds allocated to ERA. In the expenditure plan, NARA also included support for requests at two alternative funding levels—$72.0 million and $61.4 million—based on congressional direction. Subsequently, on October 19, 2010, NARA submitted a summary of its expenditure plan to the appropriations committees that included revised requests at $72 million and $65 million funding levels. According to NARA, both the expenditure plan and the summary reflect fiscal year 2011 as the final year of ERA development. In addition to the individual named above, key contributions to this report were made by James R. Sweetman, Jr., Assistant Director; Monica Perez-Nelson; Eric Costello; Lee McCracken; Tarunkant Mithani; Karl Seifert; Jonathan Ticehurst; and Adam Vodraska.
Since 2001, the National Archives and Records Administration (NARA) has been working to develop an Electronic Records Archive (ERA) to preserve and provide access to massive volumes of all types of electronic records. NARA originally planned to complete the system in 2012, but has repeatedly revised the program schedule and estimated cost and is now planning to deploy an ERA system with reduced functionality by the end of fiscal year 2011. As required by the Consolidated Appropriations Act, 2010, and the Continuing Appropriations Act, 2011, NARA submitted an expenditure plan to Congress to support its request for fiscal year 2011 ERA funding. The legislation also requires that this plan meet six conditions, including review by GAO. GAO's objectives in reviewing the fiscal year 2011 plan were to (1) determine whether the plan satisfies legislative conditions, (2) determine the extent to which NARA has implemented prior GAO recommendations, and (3) provide any other observations on the plan or the ERA acquisition. To do this, GAO reviewed the expenditure plan and other agency documents and interviewed NARA officials. NARA's fiscal year 2011 expenditure plan satisfies four of the six legislative conditions and partially satisfies two. Specifically, it partially satisfies the condition that NARA meet requirements for reviewing the progress of capital investments, such as ERA. While NARA has held regular meetings with senior-level agency management to review ERA progress, these groups did not document approval of important schedule and scope changes, and NARA did not validate the estimated benefits and costs of deployed ERA capabilities. Further, NARA partially satisfies the condition that the expenditure plan be approved by NARA and the Office of Management and Budget (OMB). NARA approved the expenditure plan in October 2010, but the plan was not approved by OMB. Without approval from OMB, Congress will have limited assurance of the plan's reliability and accuracy. NARA has fully implemented one of GAO's four prior recommendations and partially implemented three. It implemented a recommendation to ensure that ERA's requirements are managed using a disciplined process by, for example, developing a process to keep requirements current. NARA partially implemented three other recommendations. First, to improve its executive-level oversight, NARA documented meetings to review ERA progress, but did not document approval of important changes to a recent phase, or increment, of the system. Second, NARA added information in its expenditure plan on ERA cost, schedule, and performance as recommended, but the plan lacks other key information, such as the estimated costs of an ongoing increment. Third, NARA documented a plan to strengthen its processes for measuring program progress, but continues to have weaknesses in this area, including not accurately portraying ERA program status. GAO has three observations on the expenditure plan and ERA acquisition: (1) The fiscal year 2011 expenditure plan does not provide a reliable basis for informed investment decision making. For example, NARA's cost estimates do not reliably reflect the work to be completed because of weaknesses in its supporting methodology, and the plan does not clearly show what functionality is planned to be delivered in the final year of development, by when, and at what cost. (2) NARA's expenditure plan does not address how remaining multiyear funds from fiscal year 2010 will be allocated. 
Specifically, NARA's plans for using the remaining $20.1 million are not discussed in the plan. (3) Although NARA recently updated the ERA requirements, the agency has not yet determined which of the requirements would be addressed before the end of development in fiscal year 2011 and has not fully prioritized the requirements to ensure that critical stakeholder needs will be met. Without a reliable expenditure plan and adequate management of the ERA acquisition, it is unclear whether NARA can make substantial progress in delivering additional system capabilities by the end of fiscal year 2011 to justify its planned investment. Congress should consider limiting funding of further ERA development until NARA addresses weaknesses in its oversight and management of the acquisition. GAO is also recommending actions for NARA to take to address these weaknesses. NARA concurred with GAO's recommendations.
ATF is the chief enforcer of explosives laws and regulations in the United States and is responsible for licensing and regulating explosives manufacturers, importers, dealers, and users. ATF is also responsible for regulating most, but not all, explosives storage facilities. ATF’s regulatory authority over explosives stems from the Organized Crime Control Act of 1970. This statute imposed controls over the importation, manufacture, distribution, and storage of explosives, and was the basis for giving ATF enforcement responsibilities for these controls. The Safe Explosives Act expanded ATF’s authority to generally require licenses for persons who purchase or receive explosives and background checks on licensees and their employees who handle explosives. Under federal explosives regulations, a license is required for persons who manufacture, import, or deal in explosives and, with some exceptions, for persons who intend to acquire explosives for use. No license is required solely to operate an explosives storage facility. However, all persons who store explosive materials (including state and local agencies) must conform with applicable ATF storage regulations, irrespective of whether they are required to obtain an explosives license for other purposes. State and local agencies are not required to obtain an explosives license to use and store explosives. Similarly, federal government agencies, the U.S. military, and other federally owned or operated establishments are exempt from compliance with both the licensing and the storage regulations. According to ATF data, as of February 2005 there were 12,028 federal explosives licensees in the United States. Roughly 7,500 of these had some kind of explosives storage facility, consisting of 22,791 permanent or mobile storage magazines. ATF storage regulations include requirements relating to the safety and security of explosives storage magazines—that is, any building or structure (other than an explosives manufacturing building) used for storage of explosive materials. Regarding safety, the storage regulations include requirements related to location, construction, capacity, housekeeping, interior lighting, and magazine repairs, as well as a requirement that the local fire safety authority be notified of the location of each storage magazine. Regarding security, the ATF storage regulations include the following requirements: Explosives handling. All explosive materials must be kept in locked magazines unless they are in the process of manufacture, being physically handled in the operating process of a licensee or user, being used, or being transported to a place of storage or use. Explosives are not to be left unattended when in portable storage magazines. Magazine construction. Storage magazines must be theft-resistant and must meet specific requirements dealing with such things as mobility, exterior construction, door hinges and hasps, and locks. Magazine inspection. Storage magazines must be inspected at least every 7 days. This inspection need not be an inventory, but it must be sufficient to determine if there has been an unauthorized entry or attempted entry into the magazines, or unauthorized removal of the magazine contents. Magazine inventory. Within the magazine, containers of explosive materials are to be stored so that marks are visible. Stocks of explosive materials are to be stored so they can be easily counted and checked. 
Notwithstanding the security requirements described above, ATF storage regulations do not require explosives storage facilities to have any of the following physical security features—fences, restricted property access, exterior lighting, alarm systems, or electronic surveillance. Also, while ATF licensing regulations require explosives licensees to conduct a physical inventory at least annually, there is no similar inventory requirement in the storage regulations applicable to other persons who store explosives. According to ATF data, the number of reported state and local government thefts is relatively small when compared with the total number of thefts that have occurred nationwide. For example, during a recent 3-year period (January 2002—February 2005), 9 thefts involving state and local government storage facilities were reported. Of these, 5 involved state and local law enforcement agencies (1 was later determined to be the possible result of training explosives that had been mistakenly discarded), 3 others involved state government entities (all universities), and the remaining incident took place at a county highway department. Two of the 9 incidents occurred in California (including last year's theft that was mentioned previously), and no other state reported more than one incident. By comparison, during this same period, ATF received reports of 205 explosives thefts nationwide from all sources. Three states—California, Texas, and Pennsylvania—accounted for about one-quarter (53) of the total reported thefts nationwide. According to ATF officials, this may be due to the larger numbers of explosives licensees and storage magazines located in these three states. The amounts of explosives reported stolen or missing from state and local government facilities in each reported incident of theft are also relatively small when compared with the total amounts of stolen and missing explosives nationwide. For example, during a recent 10-month period for which data were available (March 2003 through December 2003), there were a total of 76 theft incidents nationwide reported to ATF, amounting to a loss of about 3,600 pounds of high explosives, 3,100 pounds of blasting agents, 1,400 detonators, and 2,400 feet of detonating cord and safety fuse. By comparison, over an entire 10-year period (January 1995—December 2004), ATF received only 14 reports of theft from state and local law enforcement storage magazines. In 10 of these incidents, less than 50 pounds of explosives was reported stolen or missing, while in 3 of the incidents, more than 50 pounds was stolen or missing. In 6 of these 14 cases, ATF data indicate that most or all of the explosives were recovered; in the other 8 cases, none of the explosives have been recovered. While the ATF theft data indicate that thefts from state and local facilities make up only a small part of the overall thefts nationwide, these reports may be understated by an unknown amount. There are two federal reporting requirements relating to the theft of explosives. One is specific to all federal explosives licensees (and permittees) and requires any licensee to report any theft or loss of explosives to ATF within 24 hours of discovery. The second reporting requirement generally requires any other “person” who has knowledge of the theft or loss of any explosive materials from his stock to report to ATF within 24 hours.
Although the term “person” as defined in law and regulation does not specifically include state and local government agencies, ATF has historically interpreted this requirement as applying to nonlicensed state and local government explosives storage facilities. With respect to the second reporting requirement, according to ATF, the legislative history of the Organized Crime Control Act of 1970 indicates that Congress believed visibility over all incidents of stolen explosives was necessary to effectively enforce any federal explosives regulatory statute—primarily because of the special problems presented by stolen explosive materials and the persons possessing them. While ATF has interpreted the theft reporting requirement as applying to state and local government explosives storage facilities, ATF officials acknowledged that state and local government entities could be unsure as to their coverage under the theft reporting requirements. As a result, some state and local government entities may not know they are required to report such incidents to ATF, and this lack of information could impair ATF’s ability to monitor these incidents and take appropriate investigative action. Indeed, during our site visits and other state and local contacts, we identified five state and local government entities that had previously experienced an incident of theft or reported missing explosives—two involving local law enforcement agencies, two involving state universities, and one involving a state department of transportation. However, one of these five incidents did not appear in ATF’s nationwide database of reported thefts and missing explosives. Based on these findings, the actual number of thefts occurring at state and local government storage facilities nationwide could be more than the number identified by ATF data. With certain exceptions (such as for federal agencies), federal explosives law requires all persons who store explosives to conform to applicable regulations. However, there is no ATF oversight mechanism in place to ensure that state and local government facilities have complied with the regulations. With respect to private sector entities, ATF’s authority to oversee explosives storage facilities is primarily a function of its licensing process. However, the licensing requirements described in the law and regulations above do not apply to the transportation, shipment, receipt, or importation of explosive materials to any state or its political subdivision (such as a city or county). That is, government entities, such as state and local law enforcement agencies, are not required to obtain a federal license to use and store explosives. In addition, ATF does not have specific statutory authority to conduct regulatory inspections at state and local explosives storage facilities. As a result, these facilities are not subject to mandatory oversight under ATF’s licensing authority or any ATF regulatory inspection authority apart from the licensing process. ATF may inspect state and local government storage facilities under certain circumstances—for example, if the operator of the facility voluntarily requests ATF to conduct an inspection. Since January 2002, ATF has conducted 77 voluntary inspections at state and local storage facilities—34 inspections at facilities that ATF shares with state and local agencies and 43 inspections at other state and local facilities. 
These inspections basically involve checking for compliance with federal storage regulations, including verifying proper construction of the storage magazine and verifying that the amount of explosives stored is consistent with the approved table of distances. In addition to conducting voluntary inspections, ATF also conducts inspections of state and local explosives storage magazines that are shared by ATF and a state or local agency. ATF currently shares space in 52 storage magazines, including 33 that are owned or leased by state and local agencies. Shared magazines are subject to mandatory inspections by ATF, and the inspection procedures are basically the same as those described above for voluntary inspections. According to ATF officials, no significant or systemic safety or security problems have been found during inspections of state and local storage magazines. However, regarding those state and local government facilities that ATF does not inspect, ATF officials acknowledged they had no way of knowing the extent to which these facilities are complying with federal storage regulations. By comparison with state and local government entities, private sector licensees are subject to mandatory ATF oversight and inspection. Under provisions of the Safe Explosives Act, ATF is generally required to physically inspect a license applicant’s storage facility prior to issuing a federal explosives license—which effectively means at least one inspection every 3 years. This inspection is intended to verify that the applicant’s storage facility complies with federal regulations regarding safety and security, and the inspection requirement applies to original license applications, as well as renewals (with certain exceptions). In addition, the regulations allow ATF to inspect licensees at any time during business hours, for the purpose of inspecting or examining any records or documents required to be maintained and any explosive materials kept or stored at the premises. ATF officials stated that if the agency were to be required to conduct mandatory inspections of state and local government storage facilities, they would likely need additional resources to conduct these inspections because they are already challenged to keep up with inspections that are mandated as part of the explosives licensing requirements. One factor that affects ATF’s ability to meet inspection goals is that inspectors have to conduct inspections of licensed firearms dealers, manufacturers, and importers, as well as explosives licensees. As noted above, ATF must physically inspect explosives licensees at least once every 3 years—or about one-third (4,000) of the roughly 12,000 licensees each year. According to ATF officials, because license applications and renewals are not evenly distributed over this 3-year cycle, some years there may actually be more or less than 4,000 inspections per year. ATF currently has 723 field inspectors, 620 of whom regularly conduct explosives and firearms inspections (the others are in supervisory or administrative positions). About 20 percent of the inspection time is spent on explosives inspections; the remainder is spent on firearms. In July 2004, DOJ’s Office of the Inspector General (OIG) reported that ATF’s inspections program was being affected by staffing shortages. The OIG noted that in response to passage of the Safe Explosives Act, ATF had to divert resources from firearms inspections to conduct explosives inspections required under the act. 
The OIG report further stated that ATF had calculated (and reported to Congress) that it needed almost 1,800 inspectors—including 540 just for explosives inspections—to manage its existing inspection workload at that time. To help ATF carry out its explosives responsibilities, the conferees on the DOJ appropriations act for fiscal year 2005 directed funding increases in fiscal year 2005 for the hiring of an additional 31 explosives inspectors, for purposes of explosives investigations and regulatory compliance. In addition, the House Committee on Appropriations has recommended additional funding in ATF's fiscal year 2006 appropriation, for the hiring of another 50 explosives inspectors. Despite these increases, giving ATF additional responsibility to oversee state and local government storage facilities could further tax the agency's inspection resources. According to ATF officials, because of the legislative mandate to physically inspect explosives licensees, the effect of additional state and local government explosives responsibilities (without similar increases in inspector resources) could be to reduce the number of firearms inspections that ATF could conduct. According to ATF officials, ATF does not collect nationwide information on the number and location of state and local government explosives storage facilities, nor does the agency know the types and amounts of explosives being stored in these facilities. With respect to private sector licensees, ATF collects descriptive information concerning explosive storage facilities as part of the licensing process. However, state and local government explosive storage facilities are not required to obtain a license from ATF, and ATF does not have specific statutory authority to conduct regulatory inspections of such facilities. As a result, no systematic information about these facilities is collected. For those state and local government facilities that ATF does inspect—either voluntary inspections of state and local magazines or mandatory inspections of shared magazines—some information is collected by ATF. During these inspections, ATF collects information about the location of the storage magazines, the types and amounts of explosives stored, and whether the magazines are in compliance with federal storage regulations. According to ATF officials, the information obtained from these inspections—along with the results from inspections of licensees—is maintained in ATF's N-Spect nationwide inspection database. While mandatory annual inspections are required by ATF at each of the 33 state and local magazines where ATF shares storage space, there have been only 77 voluntary inspections of state and local storage magazines since January 2002. ATF also has some ability to monitor state and local storage facilities at locations where ATF maintains its own storage magazine. ATF headquarters and field offices currently have 118 storage magazines colocated at facilities with state and local storage magazines. For these facilities, ATF collects information about the location of the facility and the inspection status of any state and local magazines on site. Of the 77 voluntary inspections discussed above, 34 were at these colocated facilities. By comparison, ATF collects a variety of information about private sector explosives storage facilities, primarily under its authority to issue explosives licenses.
For example, ATF license application forms require applicants for an explosives license to submit information about their storage capabilities. Specific information applicants are required to submit to ATF includes the type of storage magazine, the location of the magazine, the type of security in place, the capacity of the magazine, and the class of explosives that will be stored. ATF also collects information about private sector storage facilities during mandatory licensee inspections. As noted previously, prior to issuing or renewing an explosives license, ATF must generally verify by on-site inspection that the applicant has a storage facility that meets the standards of public safety and security against theft as prescribed in the regulations. Thereafter, ATF may also inspect a licensee at any time during business hours—including inspection of storage magazines, examination of explosives inventory and sales records, and verification of compliance with ATF administrative rules. Because state and local government storage facilities are exempt from the licensing process described above, they are not required to submit licensing-related information about their storage facility to ATF and they are not subject to licensing-related mandatory ATF inspections. In addition, ATF does not have specific statutory authority to perform regulatory oversight inspections of such facilities apart from the licensing process. As a result, ATF is unable to collect complete nationwide information about where these facilities are or the types and amount of explosives they store. During the course of our audit work, we compiled data on state and local law enforcement bomb squads that would be likely to use and store explosives. At the 13 state and local law enforcement bomb squads we visited, we identified 16 storage facilities and 30 storage magazines. At these locations, the number of storage facilities ranged from 1 to 2, and the number of storage magazines ranged from 1 to 4. According to FBI data, there are currently 452 state and local law enforcement bomb squads nationwide. The total number of state and local government storage facilities and magazines nationwide, however, encompasses other entities in addition to law enforcement bomb squads—including other law enforcement agencies, public universities, and departments of transportation. The precise number of storage facilities and magazines at these locations is currently unknown. And because of the limited nature of our fieldwork, we cannot generalize about the extent of security and oversight these entities may have at their own explosives storage facilities. We found that security oversight measures varied at the 14 selected state and local government entities we visited. These 14 entities maintained a total of 18 storage facilities. With regard to physical security, 13 of the 18 storage facilities restricted vehicle access to the storage area. Six of the 18 storage facilities also had a barrier immediately surrounding the storage containers preventing human access. Official personnel at all 18 facilities said they patrolled or inspected the storage facility on a regular basis. Regarding electronic security, 4 of the 18 facilities had either an alarm or video monitoring system in place. Inventory and other oversight activities at all 14 of the state and local entities included regular, periodic inventories of the contents of their explosives storage facilities. 
In addition, a federal, state, or local government authority had performed inspections for 9 of the 14 entities. State government agencies in Colorado and Pennsylvania required 5 state and local entities we visited in these states to obtain licenses to operate their explosives storage facilities. However, 2 of these 5 entities did not have the required licenses in place at the time of our visit. Two of the 14 entities had experienced prior thefts or losses from their storage facilities, and we observed storage practices at four facilities that may not be in compliance with federal regulations. The following sections describe our observations of the explosives storage magazines and types of security measures in place at the 18 state and local storage facilities we visited. Our criteria for identifying the type of security measures in place included existing federal explosives storage laws and regulations (27 C.F.R., Part 555, Subpart K) and security guidelines issued by the explosives industry (the Institute of Makers of Explosives). Most of these security measures are not currently required under federal storage regulations (perimeter fencing, for example). However, we are presenting this information in order to demonstrate the wide range of security measures actually in place at the time of our visits. As shown in table 1, the 14 state and local government entities we visited included 11 city or county bomb squads (including police departments and sheriffs’ offices), 2 state bomb squads, and one public university. Four of the 14 state and local entities had two separate storage areas encompassing a total of 18 explosives storage facilities among the 14 entities. As further shown in table 1, 17 of these explosives storage facilities were located on state, city, county, or police property. These included 3 that were located on state property (such as state law enforcement or state university), 7 that were located at police training facilities, and 7 that were located on city or county government-owned property (such as correctional or water facilities). For example, one local entity we visited in Texas had a storage facility located on the grounds of a city-owned nature preserve. Also, 11 of the 18 explosives storage facilities we visited contained multiple magazines for the storage of explosives. As a result, the 18 facilities housed a total of 34 magazines divided into various types as shown in table 1. A Pennsylvania storage facility had 4 magazines, which was the largest number among the facilities we visited. Figures 1 through 3 depict different types of explosives storage magazines. All of the 18 facilities contained a variety of high explosives, including C-4 plastic explosive, TNT, binary explosives, and detonators. Officials from 13 of the state and local entities provided us with estimates of the quantities of explosives they were storing, and they reported amounts ranging from 10 to 1,000 pounds, with the majority of the entities (9) indicating they had 200 pounds or less. At the 18 storage facilities we visited, we looked for the presence of exterior and interior fencing, other barriers to restrict vehicle or pedestrian access, and security personnel. Federal explosives storage regulations do not require any of these physical security attributes; rather, the regulations generally require theft-resistant magazine construction (including locks) and weekly inspections of magazines. 
As shown in table 2, 13 of the 18 storage facilities restricted vehicle access to the facility grounds by way of a locked security gate or by virtue of being an indoor facility. Five of the 13 facilities restricted vehicle access after normal working hours (nights or nights and weekends). Officials at 7 other facilities said that vehicle access to the facilities was restricted at all times, including the indoor facility in Pennsylvania that was located in the basement of a municipal building. While these outdoor facilities had barriers to vehicle access via roadway, not all of the facilities were completely surrounded by fencing or some other perimeter barrier, nor do federal storage regulations require them to have such a barrier or fencing. Also, as shown in table 2, 6 of 18 storage facilities had an interior barrier immediately surrounding their storage magazines to prevent direct access by persons on foot. At each of these 6 facilities, the barriers consisted of a chain-link fence with a locked gate barring entry by unauthorized personnel (figure 4 reflects one of these facilities). At 1 other facility (the indoor facility in Pennsylvania), the storage magazines were in the basement of the municipal building, and multiple locked doors were used to prevent access by unauthorized personnel. Conversely, at 1 facility in Texas, the storage magazine could be reached on foot or by vehicle at any time because it did not have fencing or vehicle barriers to deter unauthorized access. Officials at all of the 18 storage facilities detailed in table 2 told us that official personnel patrolled or inspected the storage facility on a regular basis. For example, bomb squad officers said they regularly visited the facilities to check on their condition, in addition to visiting the facilities to retrieve or place explosive materials in them. In addition to bomb squad personnel, officials at 14 of the 18 facilities said that police officers from local police departments patrolled the facilities to check for any obvious signs of problems such as forced entry. However, these police patrols typically did not include actual entry into the storage magazines to inspect the explosives themselves. As further shown in table 2, officials at 9 of the 18 storage facilities we visited said that state or local government employees maintained a 24-hour presence at the facilities. Four of these storage facilities were located on the grounds of police training centers, where either trainees or facility personnel were present at all times. Two storage facilities–one each in Tennessee and Texas—were on the grounds of jail facilities where local correctional personnel worked 24 hours a day, 7 days a week. Two other storage facilities were located on the grounds of city/county water or sewage treatment plants, and 1 storage facility was located in the basement of a municipal building. One additional facility in Pennsylvania was located on the grounds of the police academy, but we were unable to determine whether there was 24-hour, on-site presence at that facility. At the 18 storage facilities we visited, we looked for the presence of a monitored alarm or video surveillance system. Although no electronic security is required under federal regulations, 4 of the 18 explosives storage facilities had either an alarm or a video monitoring system in place. 
Specifically, one entity in Texas had 2 facilities with monitored alarm systems in place, and two entities in Pennsylvania and Tennessee had video monitoring of their explosives storage facilities. The Texas entity had alarm systems in place at 2 of its storage facilities. At 1 facility, two small detonator magazines—while not alarmed themselves—were located inside a building protected by an alarm system. At a second facility, the door to an underground storage magazine was alarmed. Regarding the 2 facilities with video monitoring, the Tennessee facility—which was located on the grounds of a local correctional facility—took advantage of a video surveillance system already in place to monitor prisoners. The Pennsylvania facility with video monitoring was located inside a municipal building and was part of that building's overall video security system. The remaining 14 of the 18 storage facilities did not have video or alarm systems in place at the time of our visit. Officials at 4 facilities told us they had alarm systems planned (funding not yet approved), and officials at 3 facilities said they had alarm systems pending (funding approved and awaiting installation). Officials at 2 facilities also told us they planned to install video monitoring. For example, 1 facility in Pennsylvania had its video monitoring system destroyed by lightning and was planning to replace it. However, the local government authority had yet to approve funding for the replacement. Several officials commented on the feasibility of installing alarm or video monitoring systems at explosives storage facilities. At 4 of the state and local entities we visited, officials noted that these storage facilities are often located in remote areas without easy access to sources of electricity. The officials added that this lack of necessary electrical infrastructure could be a cost-prohibitive barrier if they were required to install some form of electronic monitoring at explosives storage facilities. Regarding the possibility of new storage regulations that would require an electronic security system at explosives storage magazines, officials at 9 state and local entities told us they would not object to such a requirement as long as it did not create an undue financial burden. At the 14 state and local entities we visited, we looked for the presence of internal inventory procedures, internal or external inspections, and licensing of the storage facility by state or local government agencies. Under federal regulations, only explosives licensees are required to perform periodic inventories and are subject to periodic ATF regulatory inspections of their storage facilities; other persons who store explosives are not. As shown in table 3, officials at all 14 of the entities we visited told us they performed periodic inventories of the contents of their explosives storage facilities. Typically, during these inventories, officials said they count all the explosives in the storage facility and reconcile them with inventory records maintained either manually or in a computerized database. In addition, 9 of the 14 state and local entities said they had received inspections of their storage facilities, and ATF had conducted the inspections in all but 1 of these 9 cases. Regarding this 1 case in Pennsylvania, officials said a state government authority regularly performed these inspections.
With regard to the ATF inspections, state or local operators of the selected facilities voluntarily requested these inspections in all but 1 case (as discussed further below). Of the 9 selected state and local entities that received inspections by a regulatory authority, 6 entities told us they received them on a periodic basis, with another 3 entities having received a onetime inspection (all by ATF). A Pennsylvania entity that said it received annual inspections from ATF was unique among those we visited because it had also received a onetime inspection from a local government authority. This entity was the only one we visited to have any type of inspection, either onetime or periodic, from a local government authority. Last, a Colorado entity, another of the entities receiving periodic inspections, said it was being inspected on a recurring basis by both ATF and a state government authority—it was the only entity we visited that fell into this category. Also as shown in table 3, 5 of the 14 state and local entities we visited told us they were required to obtain a license from state regulatory authorities to operate their explosives storage facilities. One entity in Colorado had a license to store explosives issued by the state, and this entity had also obtained a federal explosives license issued by ATF. Officials at this location told us that the state required them to obtain the federal license before applying for its mandatory state license. Indeed, according to ATF officials, state or local government facilities may apply for a federal explosives license if it is required by their state regulatory agency. However, once such a license is issued, these state and local government facilities must then comply with all the same explosives laws and regulations that are applicable to licensed private sector facilities. Officials at the 14 state and local entities we visited commented on the feasibility of mandatory ATF oversight of their explosives storage facility. Officials at 13 state and local entities said they did not object to the possibility of federal licensing or inspection of their explosives storage facilities. Six state and local entities also said that they already have close contacts with ATF and would not be concerned about additional ATF oversight of their storage facilities. Officials at 3 state and local entities noted that additional federal oversight was not a concern, as long as they were not held to a higher standard of security and safety than ATF requires of private industry. Two of the 14 state and local entities we visited had previously experienced a theft or loss of explosives from one of their storage facilities. At a storage facility in Texas, officials told us that criminals had once used a cutting torch to illegally gain entry to an explosives storage magazine. Some explosives were stolen, but the suspected thieves were later apprehended and the materials were recovered. At another storage facility in Colorado, officials said that an unauthorized individual had obtained keys to an explosives storage magazine and taken some of the material. As with the previous case, several individuals were apprehended and the materials were recovered. One of these incidents (the theft of explosives in Texas) did not appear in ATF's nationwide database of reported thefts and missing explosives. The law enforcement community has recently taken action to address the issue of thefts and security at law enforcement explosives storage facilities.
In April 2005, the National Bomb Squad Commanders Advisory Board—which represents more than 450 accredited law enforcement bomb squads nationwide—initiated a program to increase security awareness at its members’ explosives storage facilities. In a letter to its membership, the advisory board encouraged all bomb squad commanders to increase diligence regarding explosives storage security. The advisory board also recommended that all bomb squads request a voluntary ATF inspection, ensure they maintain an accurate explosives inventory, and assess the adequacy of security measures in place at their respective explosive storage facilities to determine whether additional measures might be required—such as video monitoring, fencing, and alarms. This is a voluntary program, and it is too soon to tell what effect, if any, it will have towards enhancing the security at state and local law enforcement storage facilities and reducing the potential for thefts. At 4 of the 14 state and local entities we visited, we observed various storage practices that may not be in compliance with federal explosives regulations. These circumstances appeared to be related to storage safety issues, rather than storage security. For example, one explosives storage facility was located in the basement of a municipal building and utilized small type 3 temporary magazines (known as day-boxes) for permanent storage of high explosives and detonators. ATF regulations state that these magazines should be used only for temporary storage of explosives and may not be left unattended. At another storage facility, a high explosives storage magazine housed a small detonator (or “cap”) magazine in its interior, although ATF regulations generally require detonators to be kept separate from other explosive materials. Another storage facility contained several boxes of unmarked, 1970s-era plastic explosives (specifically C-4), possession of which is generally restricted under federal explosives law when the material in question does not contain a detection agent. Finally, an official at one storage facility acknowledged that because of the weight of explosives currently being stored, their storage magazine was in violation of ATF regulations concerning allowable distances from other inhabited structures. The overall number of state and local government explosives storage facilities, the types of explosives being stored, and the number of storage magazines associated with these facilities are currently unknown. Further, because ATF does not oversee state and local government storage facilities as part of the federal licensing process and ATF does not have any other statutory authority to conduct regulatory inspections of these facilities, ATF’s ability to monitor the potential vulnerability of these facilities to theft or assess the extent to which these facilities are in compliance with federal explosives storage regulations is limited. Nevertheless, current federal explosives law as enacted by Congress does not provide ATF with specific authority to conduct regulatory oversight with respect to public sector facilities. And although we did observe possible noncompliance with the storage regulations at some of the state and local entities we visited, none of these circumstances appeared to make the facilities more vulnerable to theft. 
According to ATF’s interpretation of federal explosives laws and regulations, state and local government agencies—including law enforcement bomb squads and public universities—are required to report incidents of theft or missing explosives to ATF within 24 hours of an occurrence. However, during the course of our audit work, we identified five incidents involving theft or missing explosives at state and local government facilities, one of which had not been reported to ATF. Because this reporting requirement applies to any “person” who has knowledge of a theft from his stock and the definition of “person” does not specifically include state and local government agencies, ATF officials acknowledged that these entities may be unsure as to whether they are required to report under this requirement. On the basis of our limited site visit observations and discussions with state, local, and ATF officials, we did not identify a specific threat or vulnerability to theft among state and local government explosive storage facilities. However, if state and local government entities are unsure about whether they are required to report thefts and missing explosives, ATF’s ability to monitor these incidents and take appropriate investigative action may be compromised by a potential lack of information. To allow ATF to better monitor and respond to incidents of missing or stolen explosives, we recommend that the Attorney General direct the ATF Director to clarify the explosives incident reporting regulations to ensure that all persons and entities who store explosives, including state and local government agencies, understand their obligation to report all thefts or missing explosives to ATF within 24 hours of an occurrence. On September 9, 2005, we provided a draft of this report to the Attorney General for review and comment. On September 26, 2005, DOJ advised us that the department had no formal agency comments and further advised us that DOJ agreed with our recommendation and would take steps to implement it. ATF provided technical comments, which we have incorporated into the report, as appropriate. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of the report. At that time, we will send copies of this report to the House Committee on Government Reform; House Committee on the Judiciary; House Committee on Homeland Security; Senate Committee on the Judiciary; Senate Committee on Homeland Security and Governmental Affairs; the Attorney General; the Director of the Bureau of Alcohol, Tobacco, Firearms and Explosives; appropriate state and local government officials; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or EkstrandL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were William Crocker, Assistant Director; David Alexander; Amy Bernstein; Philip Caramia; Geoffrey Hamilton; and Michael Harmond. 
In reviewing the security of state and local government explosives storage facilities, we focused on the Bureau of Alcohol, Tobacco, Firearms and Explosives' (ATF) role in overseeing and regulating these facilities, including the extent to which ATF's licensing operations address state and local government facilities and what authority ATF has to enforce federal explosives law and regulations at state and local government facilities. In addition, we reviewed the extent to which ATF collects information about state and local government facilities, including locations, types, and amounts of explosives stored. To determine what states and localities were doing to ensure the safe and secure storage of explosives, we visited state and local explosives storage facilities. We also contacted other state and local government and explosives industry officials. To obtain the perspectives of U.S. government agencies on their efforts to regulate and oversee state and local government explosives storage facilities, we met with ATF headquarters officials—specifically individuals from ATF's Office of Enforcement Programs and Services, Office of Field Operations, Office of Strategic Intelligence and Information, and Office of the Chief Counsel. We also met with or obtained information from officials with the Department of Justice's (DOJ) Office of the Inspector General, as well as the Federal Bureau of Investigation's (FBI) Bomb Data Center. To determine what selected states and localities were doing to ensure the safe and secure storage of explosives in state and local government facilities, we met with state and local officials in Colorado, Pennsylvania, Tennessee, and Texas—specifically 13 state and local bomb squads and 1 public university. In these four states, we also contacted other state government agencies, including transportation and environmental protection agencies and fire marshals. We also contacted other public university officials in Arizona, New Mexico, and Utah. To obtain additional perspectives of law enforcement and explosives industry experts on the safety and security of explosives storage facilities, we contacted representatives from the National Bomb Squad Commanders Advisory Board, the International Association of Bomb Technicians and Investigators, the Institute of Makers of Explosives, the International Association of Chiefs of Police, and the National Sheriffs' Association. To find out what selected states and localities were doing to ensure the safe and secure storage of explosives in state and local government facilities, we visited 14 state and local government entities that stored explosives, as shown in table 4 below. During these site visits, we met with state and local officials and physically observed their explosives storage facilities and storage magazines. We chose law enforcement bomb squads as the primary focus of our site visits because (1) we concluded that state and local bomb squads would be the most likely state and local government agencies to have a need for explosives storage facilities and (2) there was no other source of nationwide information on the number and location of state and local government explosives storage facilities. We selected our state and local site visits based on the following criteria: In selecting which states to visit, we chose those states most likely to have significant state and local government and private sector explosives activities. 
The state selection criteria included (1) the number of federally licensed private sector explosives companies, (2) the number of reported explosives thefts, and (3) the number of law enforcement bomb squads. Using these criteria, we then selected a geographic mix of states—specifically one state in the northeast United States, one in the southeast, one in the southwest, and one in the west. Within each state, we selected state and local bomb squads for our site visits. These were chosen to represent a mix based on the type of agency, size of jurisdiction, and geographic location. We selected two state law enforcement agencies, one county sheriff, nine city police departments, and one city fire department. These included three jurisdictions with populations over 1 million, five with populations between 100,000 and 1 million, three with populations below 100,000, and two with statewide jurisdictions. We also selected one non-law enforcement explosives storage facility operated by a state university. This facility was selected as typical of the various state universities with mining-engineering programs and because it had a significant amount of explosives (over 100 pounds) in its storage facility. During our site visits, we used a semistructured interview guide to conduct interviews with state and local officials and determine the level of security at their explosives storage facilities. Our criteria for identifying the types of security measures in place included existing federal storage laws and regulations (27 C.F.R. Part 555, Subpart K) and security guidelines issued in 2005 by the explosives industry (the Institute of Makers of Explosives). Not all of the security criteria we used are currently requirements under federal storage regulations (perimeter fencing, for example). We used these additional criteria to demonstrate the wide range of security measures actually in place at the time of our visits to these facilities. Also, while we were not conducting a compliance audit, during our site visits we observed each storage magazine and noted any instances where explosives appeared not to be stored in compliance with federal regulations. We are not disclosing the names or other identifying information relating to the individual state and local entities we visited to ensure that security-related information is not unintentionally disclosed. Because our review was limited to a nonprobability sample of 14 state and local entities in the four states, the information discussed in this report is illustrative and cannot be generalized to all state and local government entities nationwide that store explosives. ATF provided data related to explosives licensing and inspections, as well as relevant law, regulations, and procedures dealing with the storage of explosives. We also obtained data from ATF's Arson and Explosives National Repository related to incidents of theft and missing explosives reported to ATF. FBI provided us with a nationwide list of accredited bomb squads—including number, location, and name of agency. In addition, FBI provided policies and guidance related to the bomb squad training and accreditation process. The information we obtained from ATF (data on explosives licensees, explosives inspections, and explosives thefts) and FBI (data on the number and location of bomb squads) was used to provide background context on the number of private sector and state and local government explosives storage facilities and to assist us in selecting locations for our site visits. 
We interviewed agency officials knowledgeable about the data, and as a result, we determined that the data were sufficiently reliable for the purposes of this report. We also obtained data from ATF on incidents of explosive thefts and missing explosives at state and local government storage facilities. On the basis of our site visits and other audit work, we determined that these incidents may be underreported, as discussed earlier in this report.
More than 5.5 billion pounds of explosives are used each year in the United States by private sector companies and government entities. The Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has authority to regulate explosives and to license privately owned explosives storage facilities. After the July 2004 theft of several hundred pounds of explosives from a state and local government storage facility, concerns arose about vulnerability to theft. GAO analyzed (1) the extent of explosives thefts from state and local government facilities, (2) ATF's authority to regulate and oversee state and local government explosives storage facilities, (3) the information ATF collects about state and local government storage facilities, and (4) security oversight measures in place at selected state and local government storage facilities. Judging from available ATF data, there have been few thefts of explosives from state and local government storage facilities. From January 2002 to February 2005, ATF received only 9 reports of thefts or missing explosives from state and local facilities, compared with a total of 205 explosives thefts reported nationwide during this same period. During the course of our audit, we found evidence of 5 thefts from state and local government facilities, 1 of which did not appear in ATF's national database on thefts and missing explosives. Thus, the actual number of thefts occurring at state and local storage facilities could be higher than that identified by ATF data. ATF has no authority to oversee or inspect all state and local government explosives storage facilities. State and local government agencies are not required to obtain a license from ATF to use and store explosives, and only licensees--such as private sector explosives storage facilities--are subject to mandatory oversight. As a result, ATF has no means to ensure that state and local government facilities are in compliance with federal regulations. While ATF does not collect nationwide information about state and local government explosives storage facilities, information about some of these facilities is collected--for example, when facility operators voluntarily request an ATF inspection. Since January 2002, ATF has conducted 77 voluntary inspections at state and local storage facilities and found no systemic violations. By comparison, all licensed private sector facilities must submit a variety of information about their facility--including location and security measures in place--to ATF during the licensing process. ATF also collects information about these facilities during mandatory inspections. At the 18 state and local government storage facilities we visited, a variety of security measures were in place, including locked gates, fencing, patrols, and in some cases, electronic surveillance. All the facilities' officials told GAO that they conducted routine inventories. But most were not required to be licensed or inspected by state or local regulatory agencies. We identified several instances of possible noncompliance with federal regulations, related primarily to storage safety issues rather than security.
Although the specific duties police officers perform may vary among police forces, federal uniformed police officers are generally responsible for providing security and safety to people and property within and sometimes surrounding federal buildings. There are a number of federal uniformed police forces operating in the Washington MSA, of which 13 had 50 or more officers as of September 30, 2001. Table 1 shows the 13 federal uniformed police forces included in our review and the number of officers in each of the police forces as of September 30, 2002. The enactment of the Homeland Security Act on November 25, 2002, had consequences for federal uniformed police forces. The act, among other things, established a new DHS, which includes 2 uniformed police forces within the scope of our review—the Federal Protective Service and the Secret Service Uniformed Division. Another component of DHS is TSA, a former component of the Department of Transportation. TSA includes the Federal Air Marshal Service, designed to protect domestic and international airline flights against hijacking and terrorist attacks. During fiscal year 2002, the Federal Air Marshal Program increased its recruiting significantly in response to the terrorist attacks of September 11, 2001. However, by fiscal year 2003, the buildup had been substantially completed. Because Federal Air Marshals are not limited to the grade and pay step structure of the federal government’s General Schedule, TSA has been able to offer recruits higher compensation and more flexible benefit packages than many other federal police forces. Federal uniformed police forces operate under various compensation systems. Some federal police forces are covered by the General Schedule pay system and others are covered by different pay systems authorized by various laws. Since 1984, all new federal employees have been covered by the Federal Employees Retirement System (FERS). Federal police forces provide either standard federal retirement benefits or federal law enforcement retirement benefits. Studies of employee retention indicate that turnover is a complex and multifaceted problem. People leave their jobs for a variety of reasons. Compensation is often cited as a primary reason for employee turnover. However, nonpay factors, such as age, job tenure, job satisfaction, and job location, may also affect individuals’ decisions to leave their jobs. During recent years, the federal government has implemented many human capital flexibilities to help agencies attract and retain sufficient numbers of high-quality employees to complete their missions. Human capital flexibilities can include actions related to such areas as recruitment, retention, competition, position classification, incentive awards and recognition, training and development, and work-life policies. We have stated in recent reports that the effective, efficient, and transparent use of human capital flexibilities must be a key component of agency efforts to address human capital challenges. The tailored use of such flexibilities for recruiting and retaining high-quality employees is an important cornerstone of our model of strategic human capital management. Eight of the 13 police forces reported difficulties recruiting officers from a moderate to a very great extent. Despite recruitment difficulties faced by many of the police forces, none of the police forces used important human capital recruitment flexibilities, such as recruitment bonuses and student loan repayments, in fiscal year 2002. 
Some police force officials reported that the human capital recruitment flexibilities were not used for various reasons, such as limited funding or because the flexibilities themselves were not available to the forces during the fiscal year 2002 recruiting cycle. Officials at 4 of the 13 police forces (Bureau of Engraving and Printing Police, the Federal Bureau of Investigation (FBI) Police, Federal Protective Service, and NIH Police) reported that they were having a great or very great deal of difficulty recruiting officers. In addition, officials at 5 police forces reported that they were having difficulty recruiting officers to a little or some extent or to a moderate extent. Among the reasons given for recruitment difficulties were low pay; the high cost of living in the Washington, D.C., metropolitan area; difficulty completing the application/background investigation process; and better retirement benefits at other law enforcement agencies. Conversely, officials at 4 of the 13 police forces (Library of Congress Police, the Supreme Court Police, U.S. Mint Police, and U.S. Postal Service Police) reported that they were not having difficulty recruiting officers. Library of Congress officials attributed their police force's lack of difficulty recruiting officers to attractive pay and working conditions and the ability to hire officers at any age above 20, with no mandatory retirement age. Supreme Court officials told us that their police force had solved a recent recruitment problem by focusing additional resources on recruiting and emphasizing the force's attractive work environment to potential recruits. U.S. Postal Service officials reported that their police force was not experiencing a recruitment problem because it hired its police officers from within the agency. Table 2 provides a summary of the level of recruitment difficulties reported by the 13 police forces. Although many of the police forces reported facing recruitment difficulties, none of the police forces used human capital recruitment tools, such as recruitment bonuses and student loan repayments, in fiscal year 2002. Total turnover at the 13 police forces nearly doubled from fiscal year 2001 to fiscal year 2002. Additionally, during fiscal year 2002, 8 of the 13 police forces experienced their highest annual turnover rates over the 6-year period, from fiscal years 1997 through 2002. There were sizable differences in turnover rates among the 13 police forces during fiscal year 2002. NIH Police reported the highest turnover rate at 58 percent. The turnover rates for the remaining 12 police forces ranged from 11 percent to 41 percent. Of the 729 officers who separated from the 13 police forces in fiscal year 2002, about 82 percent (599), excluding retirements, voluntarily separated. About 53 percent (316) of the 599 officers who voluntarily separated from the police forces in fiscal year 2002 went to TSA. Additionally, about 65 percent of the officers who voluntarily separated from the 13 police forces during fiscal year 2002 had fewer than 5 years of service on their police forces. The total number of separations at all 13 police forces nearly doubled (from 375 to 729) between fiscal years 2001 and 2002. Turnover increased at all but 1 of the police forces (Library of Congress Police) over this period. The most significant increases in turnover occurred at the Bureau of Engraving and Printing Police (200 percent) and the Secret Service Uniformed Division (about 152 percent). 
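The turnover comparisons above and in the discussion that follows rest on two simple computations: an annual turnover rate for each force and a percentage change between fiscal years. The report does not spell out its formulas, so the sketch below assumes the conventional definitions (separations divided by officers on board, and year-over-year change relative to the earlier year); the totals plugged in are the figures cited above, and the per-force counts are made up for illustration.

```python
# Minimal arithmetic sketch; the formulas are conventional definitions and are
# assumptions, since the report does not state how its rates were computed.
def turnover_rate(separations: int, officers_on_board: int) -> float:
    """Annual separations as a share of officers on the force."""
    return separations / officers_on_board

def percent_change(earlier: int, later: int) -> float:
    """Change from one fiscal year to the next, relative to the earlier year."""
    return (later - earlier) / earlier * 100

# Total separations at the 13 forces rose from 375 (FY2001) to 729 (FY2002),
# an increase of roughly 94 percent ("nearly doubled").
print(round(percent_change(375, 729)))        # ~94

# A force whose separations tripled shows a 200 percent increase, as reported
# for the Bureau of Engraving and Printing Police (counts here are illustrative).
print(round(percent_change(10, 30)))          # 200

# Illustrative counts only: 29 separations from a 50-officer force is a 58 percent rate.
print(round(turnover_rate(29, 50) * 100))     # 58
```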
In addition, during fiscal year 2002, 8 of the 13 police forces experienced their highest annual turnover rates over the 6-year period, from fiscal year 1997 through 2002. The turnover rates at the 13 police forces ranged from 11 percent at the Library of Congress Police to 58 percent at the NIH Police in fiscal year 2002. In addition to the NIH Police, 3 other police forces had turnover rates of 25 percent or greater during fiscal year 2002. The U.S. Mint Police reported the second highest turnover rate at 41 percent, followed by the Bureau of Engraving and Printing Police at 27 percent and the Secret Service Uniformed Division at 25 percent. There was no clear pattern evident between employee pay and turnover rates during fiscal year 2002. For example, while some police forces with relatively highly paid entry-level officers such as the Library of Congress Police (11 percent) and the Supreme Court Police (13 percent) had relatively low turnover rates, other police forces with relatively highly paid entry-level officers such as the U.S. Mint Police (41 percent), Bureau of Engraving and Printing Police (27 percent), and Secret Service Uniformed Division (25 percent) experienced significantly higher turnover rates. Additionally, turnover varied significantly among the 5 police forces with relatively lower paid entry-level officers. For example, while the Federal Protective Service (19 percent) and NIH Police (58 percent) entry-level officers both received the lowest starting pay, turnover differed dramatically. Likewise, no clear pattern existed regarding turnover among police forces receiving federal law enforcement retirement benefits and those receiving traditional federal retirement benefits. For example, entry-level officers at the Library of Congress Police, U.S. Capitol Police, and Supreme Court Police all received equivalent pay in fiscal year 2002. However, the Library of Congress (11 percent) had a lower turnover rate than the Capitol Police (13 percent) and Supreme Court Police (16 percent), despite the fact that officers at the latter 2 police forces received federal law enforcement retirement benefits. In addition, while officers at both the Park Police (19 percent) and Secret Service Uniformed Division (25 percent) received law enforcement retirement benefits, these forces experienced higher turnover rates than some forces such as U.S. Postal Service Police (14 percent) and FBI Police (17 percent), whose officers did not receive law enforcement retirement benefits and whose entry-level officers received lower starting salaries. More than half (316) of the 599 officers who voluntarily separated from the police forces in fiscal year 2002 went to TSA—nearly all (313 of 316) to become Federal Air Marshals where they were able to earn higher salaries, federal law enforcement retirement benefits, and a type of pay premium for unscheduled duty equaling 25 percent of their base salary. The number (316) of police officers who voluntarily separated from the 13 police forces to take positions at TSA nearly equaled the increase in the total number of separations (354) that occurred between fiscal year 2001 and 2002. About 25 percent (148) of the voluntarily separated officers accepted other federal law enforcement positions, excluding positions at TSA, and about 5 percent (32 officers) took nonlaw enforcement positions, excluding positions at TSA. 
Furthermore, about 9 percent (51) of the voluntarily separated officers took positions in state or local law enforcement or separated to, among other things, continue their education. Officials were unable to determine where the remaining 9 percent (52) of the voluntarily separated officers went. Figure 1 shows a percentage breakdown of where the 599 officers who voluntarily separated from the 13 police forces during fiscal year 2002 went. Although we did not survey individual officers to determine why they separated from these police forces, officials from the 13 forces reported a number of reasons that officers had separated, including better pay and/or benefits at other police forces, less overtime, and greater responsibility. Without surveying each of the 599 officers who voluntarily separated from their police forces in fiscal year 2002, we could not draw any definitive conclusions about the reasons they left. Data we gathered from the 13 police forces since we issued our report indicate that fiscal year 2003 turnover rates will drop significantly at 12 of the 13 forces, even below historical levels at most of the forces, if patterns for the first 9 months of fiscal year 2003 continue for the remaining months. Prospective turnover rates at these 12 forces in fiscal year 2003 range from 21 to 83 percent lower than fiscal year 2002 levels. In addition, prospective fiscal year 2003 turnover rates at 8 of the 13 forces are below historical levels. The use of human capital flexibilities to address turnover varied among the 13 police forces. For example, officials at 4 of the 13 police forces reported that they were able to offer retention allowances, which may assist the forces in retaining experienced officers, and 3 of these police forces used this tool to retain officers in fiscal year 2002. The average retention allowances paid to officers in fiscal year 2002 were about $1,000 at the Pentagon Force Protection Agency, $3,500 at the Federal Protective Service, and more than $4,200 at the NIH Police. The police forces reported various reasons for not making greater use of available human capital flexibilities in fiscal year 2002, including lack of funding for human capital flexibilities, lack of awareness among police force officials that the human capital flexibilities were available, and lack of specific requests for certain flexibilities such as time-off awards or tuition reimbursement. The limited use of human capital flexibilities by many of the 13 police forces and the reasons provided for the limited use are consistent with our governmentwide study of the use of such authorities. In December 2002, we reported that federal agencies have not made greater use of such flexibilities for reasons such as agencies' weak strategic human capital planning, inadequate funding for using these flexibilities given competing priorities, and managers' and supervisors' lack of awareness and knowledge of the flexibilities. We further stated that the insufficient or ineffective use of flexibilities can significantly hinder the ability of agencies to recruit, hire, retain, and manage their human capital. 
Additionally, in May 2003, we reported that OPM can better assist agencies in using human capital flexibilities by, among other things, maximizing its efforts to make the flexibilities more widely known to agencies through compiling, analyzing, and sharing information about when, where, and how the broad range of flexibilities are being used, and should be used, to help agencies meet their human capital management needs. Entry-level pay and retirement benefits varied widely across the 13 police forces. Annual pay for entry-level police officers ranged from $28,801 to $39,427, as of September 30, 2002. Officers at 4 of the 13 police forces received federal law enforcement retirement benefits, while officers at the remaining 9 police forces received standard federal employee retirement benefits. According to officials, all 13 police forces performed many of the same types of general duties, such as protecting people and property and screening people and materials entering and/or exiting buildings under their jurisdictions. The minimum qualification requirements and the selection processes were generally similar among most of the 13 police forces. At $39,427 per year, the U.S. Capitol Police, Library of Congress Police, and Supreme Court Police forces had the highest starting salaries for entry-level officers, while entry-level officers at the NIH Police and Federal Protective Service received the lowest starting salaries at $28,801 per year. The salaries for officers at the remaining 8 police forces ranged from $29,917 to $38,695. Entry-level officers at 5 of the 13 police forces received an increase in pay, ranging from $788 to $1,702, upon successful completion of basic training. The 4 police forces whose officers received federal law enforcement retirement benefits were also among those with the highest starting salaries, ranging from $37,063 to $39,427. Figure 2 provides a comparison of entry-level officer pay and retirement benefits at the 13 police forces. Entry-level officers at 12 of the 13 police forces (all but the U.S. Postal Service Police) received increases in their starting salaries between October 1, 2002, and April 1, 2003. Entry-level officers at three of the four police forces (FBI Police, Federal Protective Service, and NIH Police) with the lowest entry-level salaries as of September 30, 2002, received raises of $5,584, $4,583, and $4,252, respectively, during the period ranging from October 1, 2002, through April 1, 2003. In addition, entry-level officers at both the U.S. Capitol Police and Library of Congress Police—two of the highest paid forces—also received salary increases of $3,739 during the same time period. These pay raises received by entry-level officers from October 1, 2002, through April 1, 2003, narrowed the entry-level pay gap for some of the 13 forces. For example, as of September 30, 2002, entry-level officers at the FBI Police received a salary $8,168 less than that of entry-level officers at the U.S. Capitol Police. However, as of April 1, 2003, the pay gap between entry-level officers at the two forces had narrowed to $6,323. Officers at the 13 police forces reportedly performed many of the same types of duties, such as protecting people and property, patrolling the grounds on foot, and conducting entrance and exit screenings. Police force officials also reported that officers at all of the police forces had the authority to make arrests. 
Although there were similarities in the general duties, there were differences among the police forces with respect to the extent to which they performed specialized functions. We have observed in our recent Performance and Accountability Series that there is no more important management reform than for agencies to transform their cultures to respond to the transition that is taking place in the role of government in the 21st century. Establishing the new DHS is an enormous undertaking that will take time to achieve in an effective and efficient manner. DHS must effectively combine 22 agencies with an estimated 160,000 civilian employees specializing in various disciplines, including law enforcement, border security, biological research, computer security, and disaster mitigation, and also oversee a number of non-homeland security activities. To achieve success, the end result should not simply be a collection of components in a new department, but the transformation of the various programs and missions into a high-performing, focused organization. Implementing large-scale change management initiatives, such as establishing a DHS, is not a simple endeavor and will require the concentrated efforts of both leadership and employees to accomplish new organizational goals. We have testified previously that at the center of any serious change management initiative are the people—people define the organization's culture, drive its performance, and embody its knowledge base. Experience shows that failure to adequately address—and often even consider—a wide variety of people and cultural issues is at the heart of unsuccessful mergers and transformations. Recognizing the "people" element in these initiatives and implementing strategies to help individuals maximize their full potential in the new organization, while simultaneously managing the risk of reduced productivity and effectiveness that often occurs as a result of the changes, is the key to a successful merger and transformation. Chairwoman Davis, today you are releasing a report, prepared at your and Senator Voinovich's request, that identifies the key practices and specific implementation steps, with illustrative private and public sector examples, that agencies can take as they transform their cultures to be more results-oriented, customer-focused, and collaborative in nature. DHS could use these practices and steps to successfully transform its culture and merge its various originating components into a unified department. (See table 3.) As Secretary Ridge and his leadership team will recognize, strategic human capital management is a critical management challenge for DHS. In our report on homeland security issued last December, we recommended that OPM, in conjunction with the Office of Management and Budget and the agencies, develop and oversee the implementation of a long-term human capital strategy that can support the capacity building across government required to meet the objectives of the nation's efforts to strengthen homeland security. 
With respect to DHS, in particular, this strategy should establish an effective performance management system, which incorporates the practices that reinforce a "line of sight" that shows how unit and individual performance can contribute to overall organization goals; provide for the appropriate use of the human capital flexibilities granted to DHS to effectively manage its workforce; and foster an environment that promotes employee involvement and empowerment, as well as constructive and cooperative labor-management employee relations. In response to these recommendations, the Director of OPM stated that OPM has created a design process that is specifically intended to make maximum use of the flexibilities that Congress has granted to DHS, including the development of a performance management system linking individual and organizational performance. Chairwoman Davis, at your and Senator Voinovich's request, we are reviewing the design process DHS and OPM have put in place, and we expect to issue our first report this September. DHS must also consider differences in pay, benefits, and performance management systems of the employee groups that were brought into DHS. Last March, the Secretary of Homeland Security highlighted examples of such differences. For example, basic pay is higher for Secret Service Uniformed Division officers than for General Schedule police officers. TSA uses a pay banding system with higher pay ranges than the General Schedule system. The Secretary also cited differences in benefits. The Secret Service Uniformed Division officers and TSA Air Marshals are covered under the law enforcement officer retirement benefit provisions, while the Federal Protective Service police and law enforcement security officers and various Customs Service employees, among others, are not. Further, the Secretary stated that DHS and OPM employees will determine if the differences in pay and benefits constitute unwarranted disparities and, if so, they will make specific recommendations on how these differences might be eliminated in DHS's human resources management system proposal, which will be submitted later this year. The performance management systems among DHS components also differ in fundamental ways that need to be considered. Of the 4 largest agencies joining DHS, the Customs Service and TSA have 2-level performance rating systems. We have raised concerns that such approaches may not provide enough meaningful information and dispersion in ratings to recognize and reward top performers, help everyone attain their maximum potential, and deal with poor performers. The Coast Guard has a 3-level system and the Immigration and Naturalization Service has a 5-level system. One of the key practices mentioned above for a successful merger and transformation is to use the performance management system to define the responsibility and assure accountability for change. An effective performance management system can be a strategic tool to drive internal change and achieve desired results. Effective performance management systems are not merely used for once- or twice-yearly individual expectation setting and rating processes, but are tools to help the organization manage on a day-to-day basis. These systems are used to achieve results, accelerate change, and facilitate two-way communication throughout the year so that discussions about individual and organizational performance are integrated and ongoing. 
The performance management system must link organizational goals to individual performance and create a line of sight between an individual’s activities and organizational results. Chairwoman Davis, at your and Senator Voinovich’s request, we identified a set of key practices that federal agencies could use to create this line of sight and develop effective performance management systems. These practices helped public sector organizations both in the United States and abroad create a line of sight between individual performance and organizational success and, thus, transform their cultures to be more results-oriented, customer-focused, and collaborative in nature. DHS has the opportunity to develop a modern, effective, and credible performance management system to manage and direct its transformation. DHS should consider these key practices as it develops a performance management system with the adequate safeguards, including reasonable transparency and appropriate accountability mechanisms in place, to help create a clear linkage between individual performance and organizational success. We recently reported that TSA, one of the components that joined DHS, has taken the first steps in creating such a linkage and establishing a performance management system that aligns individual performance expectations with organizational goals. TSA has implemented standardized performance agreements for groups of employees, including transportation security screeners, supervisory transportation security screeners, supervisors, and executives. These performance agreements include both organizational and individual goals and standards for satisfactory performance that can help TSA show how individual performance contributes to organizational goals. For example, each executive performance agreement includes organizational goals, such as to maintain the nation’s air security and ensure an emphasis on customer satisfaction, as well as individual goals, such as to demonstrate through actions, words, and leadership, a commitment to civil rights. To strengthen its current executive performance agreement and foster the culture of a high-performing organization, we recommended that TSA add performance expectations that establish explicit targets directly linked to organizational goals, foster the necessary collaboration within and across organizational boundaries to achieve results, and demonstrate commitment to lead and facilitate change. TSA agreed with this recommendation. Madam Chairwoman and Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the Subcommittee may have at this time. For further information, please call me or Weldon McPhail at (202) 512-8777. Other key contributors to this testimony were Carole Cimitile, Katherine Davis, Geoffrey Hamilton, Janice Lichty, Michael O’Donnell, Lisa Shames, Lou Smith, Maria Strudwick, Mark Tremba, and Gregory H. Wilmoth. Federal Uniformed Police: Selected Data on Pay, Recruitment, and Retention at 13 Police Forces in the Washington, D.C., Metropolitan Area (GAO-03-658, June 13, 2003). Review of Potential Merger of the Library of Congress Police and/or the Government Printing Office Police with the U.S. Capitol Police (GAO-02-792R, July 5, 2002). Federal Retirement: Benefits for Members of Congress, Congressional Staff, and Other Employees (GAO/GGD-95-78, May 15, 1995). Capitol Police: Administrative Improvements and Possible Merger With the Library of Congress Police (GAO/AFMD-91-28, Feb. 
28, 1991). Recruitment and Retention: Inadequate Federal Pay Cited as Primary Problem by Agency Officials (GAO/GGD-90-117, Sept. 11, 1990). Report of National Advisory Commission on Law Enforcement (OCG-90-2, Apr. 25, 1990). Federal Pay: U.S. Park Police Compensation Compared With That of Other Police Units (GAO/GGD-89-92, Sept. 25, 1989). Compensation and Staffing Levels of the FAA Police at Washington National and Washington Dulles International Airports (GAO/GGD-85-24, May 17, 1985). Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations (GAO-03-669, July 2, 2003). Human Capital: Opportunities to Improve Executive Agencies' Hiring Processes (GAO-03-450, May 30, 2003). Human Capital: OPM Can Better Assist Agencies in Using Personnel Flexibilities (GAO-03-428, May 9, 2003). Human Capital: Selected Agency Actions to Integrate Human Capital Approaches to Attain Mission Results (GAO-03-446, Apr. 11, 2003). Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success (GAO-03-488, Mar. 14, 2003). Human Capital: Effective Use of Flexibilities Can Assist Agencies in Managing Their Workforces (GAO-03-2, Dec. 6, 2002). Highlights of a GAO Forum: Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies (GAO-03-293SP, Nov. 14, 2002). Highlights of a GAO Roundtable: The Chief Operating Officer Concept: A Potential Strategy To Address Federal Governance Challenges (GAO-03-192SP, Oct. 4, 2002). Results-Oriented Cultures: Using Balanced Expectations to Manage Senior Executive Performance (GAO-02-966, Sept. 27, 2002). Results-Oriented Cultures: Insights for U.S. Agencies from Other Countries' Performance Management Initiatives (GAO-02-862, Aug. 2, 2002). A Model of Strategic Human Capital Management (GAO-02-373SP, Mar. 15, 2002). Human Capital: Practices That Empowered and Involved Employees (GAO-01-1070, Sept. 14, 2001). FBI Reorganization: Progress Made in Efforts to Transform, but Major Challenges Continue (GAO-03-759T, June 18, 2003). Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues (GAO-03-715T, May 8, 2003). High-Risk Series: Strategic Human Capital Management (GAO-03-120, Jan. 1, 2003). Major Management Challenges and Program Risks: Department of Justice (GAO-03-105, Jan. 2003). Homeland Security: Management Challenges Facing Federal Leadership (GAO-03-260, Dec. 20, 2002). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Many federal agencies in the Washington, D.C., metropolitan area have their own police forces to ensure the security and safety of the persons and property within and surrounding federal buildings. In the executive branch, for example, the Secret Service has over 1,000 uniformed officers protecting the White House, the Treasury Building, and other facilities used by the Executive Office of the President. The Interior Department's Park Police consists of more than 400 officers protecting parks and monuments in the area. The Pentagon Force Protection Agency has recently increased its force to over 400 officers. Even the Health and Human Services Department maintains a small police force on the campus of the National Institutes of Health (NIH) in Bethesda, Maryland. In addition, there are federal uniformed police forces in both the Legislative and Judicial Branches of the federal government. We have continued to examine the transformation of 22 agencies with an estimated 160,000 civilian employees into the Department of Homeland Security. After the terrorist attacks of September 11, 2001, and the government's subsequent efforts to increase airline security, many of these local police forces began experiencing difficulties in recruiting and retaining officers. Police force officials raised concerns that the newly created Transportation Security Administration (TSA) and its Federal Air Marshal Program were luring many prospective and experienced officers by offering better starting pay and law enforcement retirement benefits. Former Congresswoman Morella asked us to look into these concerns. Most forces reported experiencing recruitment difficulties. Officials at 8 of the 13 forces told us they experienced moderate to very great recruiting difficulties. Despite this, none of the 13 forces used available human capital flexibilities, such as recruitment bonuses or student loan repayments, in fiscal year 2002 to try to improve their recruiting efforts. In fiscal year 2002, many of the local forces experienced sizable increases in turnover, mostly due to voluntary separations. About half of the officers who left voluntarily went to the TSA. Some of the forces provided retention allowances and incentive awards to try to retain more of their officers. Entry-level pay at the 13 agencies during fiscal year 2002 ranged from $28,801 to $39,427, a gap that narrowed for some of the forces in fiscal year 2003 because officers at 12 of the 13 agencies received increased entry-level pay. However, information we have gathered since we issued our report indicates that turnover in most of the police forces has dropped significantly during fiscal year 2003. The increase in turnover that occurred at 12 of the 13 police forces during fiscal year 2002 appears to be associated with the concurrent staffing of the TSA Federal Air Marshal Program. TSA's hiring of air marshals during fiscal year 2003 has been pared back.
VA manages its intramural research program through ORD. According to ORD's 2009 to 2014 strategic plan, ORD has 10 research priority areas, which are topics of research that are considered important to VA. The research priority areas are the health care needs of veterans who have served in Operation Enduring Freedom and Operation Iraqi Freedom, aging-related conditions, mental health care and well-being, chronic diseases, long-term care and caregiving, deployment-related exposure to hazardous environmental agents, equity in care, access in rural areas, women's health, and personalized medicine. According to VA officials, all of these research priority areas could include PTSD research. VA funds intramural research through the following: VA's Merit Review Program: This program supports research studies typically conducted by one VA investigator at one VA facility and is administered by ORD's four research and development services, each of which has a different research focus. (See table 1.) Each research and development service is responsible for soliciting, reviewing, selecting, and funding research proposals submitted to the service. VA's CSP: This program, which is administered by Clinical Science, funds larger-scale, multisite clinical trials and epidemiological research studies on key diseases that impact veterans. The Merit Review Program has research award funding limits, which are set by VA. In some cases, intramural research awards may only be funded for a certain number of years. See table 2 for more information. In addition to individual studies conducted at VA facilities, VA has several research centers and programs that conduct or support PTSD research. For example, the National Center for PTSD focuses on PTSD research. VA also has Research Enhancement Award Programs, which help support PTSD research by providing staff and other resources to investigators. (For more information on VA research centers and research programs that conduct or support PTSD research, see app. I.) According to a VA official from the National Center for PTSD, VA does not fund most of the PTSD research that is being conducted today. Intramural research proposals may be service-directed—solicited by ORD on specific topics—or investigator-initiated—submitted by investigators to ORD on their own initiative. Investigators submit proposals either in response to a request for proposals on a specific topic (for service-directed proposals) or to an open request for proposals (for investigator-initiated proposals). For both the Merit Review Program and CSP, proposals are typically evaluated in two review cycles per year. To be considered for intramural research funding: The proposal must be veteran-centric. The proposal must have received approval from the director of the medical center and the research and development office of the medical center where the lead investigator, known as a principal investigator, is based. The principal investigator and any coprincipal investigators must have a primary professional commitment to VA, as demonstrated by at least a 5/8-time VA appointment at the time the funding is awarded and previous VA experience, including experience in research and patient care. Research must be conducted primarily on VA premises. The principal investigator and any coprincipal investigators must have designated research space within a VA medical center. 
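Because the funding criteria above amount to a short checklist, a minimal sketch of how a proposal could be screened against them is shown below. The field names and the screening function are illustrative assumptions, not VA data elements or VA software; the actual review process is described in the sections that follow.

```python
# Illustrative sketch of the intramural funding eligibility criteria listed above.
# Field names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Proposal:
    veteran_centric: bool
    medical_center_director_approved: bool
    medical_center_rd_office_approved: bool
    pi_va_appointment_eighths: int        # e.g., 5 represents a 5/8-time VA appointment
    pi_has_prior_va_experience: bool      # research and patient care
    research_primarily_on_va_premises: bool
    pi_has_designated_va_research_space: bool

def meets_basic_eligibility(p: Proposal) -> bool:
    """Returns True only if every criterion listed above is satisfied."""
    return (p.veteran_centric
            and p.medical_center_director_approved
            and p.medical_center_rd_office_approved
            and p.pi_va_appointment_eighths >= 5
            and p.pi_has_prior_va_experience
            and p.research_primarily_on_va_premises
            and p.pi_has_designated_va_research_space)

# Example with made-up values
print(meets_basic_eligibility(Proposal(True, True, True, 5, True, True, True)))  # True
```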
Overall intramural PTSD research funding from VA's medical and prosthetic research appropriation increased from $9.9 million in fiscal year 2005 to $24.5 million in fiscal year 2009. The number of intramural PTSD research studies funded through the Merit Review Program and CSP increased from 47 in fiscal year 2005 to 96 in fiscal year 2009. Based on the VA data we obtained and summarized, we found that overall intramural PTSD research funding from VA's medical and prosthetic research appropriation increased from about $9.9 million in fiscal year 2005 to about $24.5 million in fiscal year 2009, or by about 150 percent (see fig. 1). Overall intramural PTSD research funding included funding for specific PTSD studies as well as for other PTSD research-related funding, such as career development awards provided to junior VA investigators to conduct PTSD studies, salaries for VA investigators who are not VA clinicians, funding for PTSD research conducted within ORD research centers, and PTSD research meetings. Of the $80.2 million provided for PTSD studies from fiscal year 2005 through fiscal year 2009, $51.3 million, or about 64 percent, was for studies funded through the Merit Review Program. The remaining approximately $28.9 million, or about 36 percent, was for CSP studies. (See fig. 2.) From fiscal year 2005 through fiscal year 2009, intramural PTSD research funding ranged from 2.5 percent to 4.8 percent of VA's medical and prosthetic research appropriation. (See table 3 for VA intramural PTSD research funding and VA's medical and prosthetic research appropriations from fiscal year 2005 through fiscal year 2009.) For comparison, according to a 2009 report prepared by ORD staff for VA's National Research Advisory Council, for fiscal year 2009, funding for intramural traumatic brain injury research was about $14.6 million, 2.9 percent of the medical and prosthetic research appropriation. Funding for spinal cord injury research was $27.2 million, 5.3 percent of the medical and prosthetic research appropriation. Funding for intramural cardiovascular disease and stroke research was $53.1 million, 10.4 percent of the medical and prosthetic research appropriation. Similarly, we found that the number of PTSD studies funded from VA's medical and prosthetic research appropriations through VA's intramural research program increased from fiscal year 2005 through fiscal year 2009. (See fig. 3.) Specifically, in fiscal year 2005, 47 intramural PTSD research studies were funded, while in fiscal year 2009, 96 intramural PTSD research studies were funded. This represented an increase of more than 100 percent. Of all the studies funded each fiscal year, only a small number were CSP studies. According to VA officials, intramural research proposals, including those on PTSD, are reviewed and funded in VA's Merit Review Program and VA's CSP primarily according to scientific merit. VA's Merit Review Program: Intramural research proposals submitted to VA's Merit Review Program are reviewed through a series of steps prior to funding. See figure 4 for an overview of the submission, review, and funding process for proposals submitted to the Merit Review Program. (For more detailed information on this process, see app. II.) First, investigators submit proposals electronically. Investigators typically submit Merit Review Program proposals to grants.gov, the government's central grant identification and proposal portal, in response to a request for proposals. 
Submitted proposals are then transferred to eRA Commons, an electronic system for grant administration functions, for VA processing and review. Second, each proposal is assigned to a merit review panel for evaluation. Each merit review panel reviews proposals in a specific research topic area, and is composed of panelists, typically associate-level professors, who are selected based on their expertise in this area. According to VA documents, as of 2010, there were a total of 35 merit review panels across VA's research and development services. The merit review panels evaluate each proposal based on its scientific merit. Panelists consider several criteria in evaluating the overall scientific merit of a proposal. (See table 4 for criteria used to determine scientific merit.) Third, the merit review panelists score the proposals to determine their rank. Each panelist provides a score to each of the proposals reviewed by the panel. The scores are averaged to create a "priority score" for the proposal. (See app. II for specific scoring guidelines given to panelists in all research and development services.) Each proposal is then ranked by its priority score against the other proposals recently scored by the same panel, and the proposal's rank is used to determine its "percentile." Finally, research and development service directors determine how many proposals to fund. All of the proposals scored by all merit review panels in a research and development service in the review cycle are ranked together by their percentiles to be considered for funding. According to VA officials, research and development service directors typically fund up to the 25th percentile of proposals in a review cycle, beginning with those with the most scientific merit, although the number of proposals funded may vary depending on the budget. According to VA, research and development service directors may also choose to fund a small number of additional proposals at the margin that respond to research priority areas. For example, if the fundable range determined by a research and development service director was up to the 25th percentile, proposals at the 26th percentile related to research priority areas could also be considered for funding. (A simplified numerical sketch of this scoring, ranking, and percentile cutoff appears at the end of this section.) VA's Cooperative Studies Program: VA intramural research proposals submitted to CSP are reviewed and scored in a process similar to that of the Merit Review Program prior to consideration for funding. To help develop the CSP proposal, investigators are assisted by members of a CSP center, a VA entity that provides guidance and support for research across multiple sites. (See fig. 5 for an overview of the process for submitting, reviewing, and funding a CSP proposal. For more information on ORD's CSP review process, see app. III.) Before submitting a research proposal, investigators submit a letter of intent, or a preliminary outline of a proposal, to the Director of Clinical Science to be approved for planning a CSP proposal. Based on the merit of the letter of intent, as determined by three or more reviewers, the Clinical Science Director decides whether to fund planning efforts to develop a CSP proposal. When the principal investigator receives approval to begin planning efforts, the Clinical Science Director assigns a CSP center to provide statistical and methodological guidance to the investigator. 
The director of the CSP center designates a project manager and methodologist, such as a person with expertise in biostatistics, to provide guidance to the principal investigator. The Clinical Science Director, with recommendations from the principal investigator, then forms a planning committee of additional experts to assist in developing a CSP proposal. The planning committee develops a CSP proposal over the course of two planning meetings. Once a proposal is developed, the CSP center, on behalf of the principal investigator, submits a hard copy proposal to the CSP central office for evaluation by the Cooperative Studies Scientific Merit Review Board. This board consists of reviewers who have extensive experience in clinical research and the conduct of clinical trials or epidemiology studies. Reviewers evaluate CSP proposals based on scientific merit. According to VA, the scientific merit of a CSP proposal is defined by the importance of the proposal, its feasibility, the clarity and achievability of its objectives, the adequacy of the plan of investigation, the correctness of the technical details, and the adequacy of the safeguards for the welfare of the patients. Based on these criteria, reviewers discuss the general scientific merit of the proposal. Reviewers vote on whether to unconditionally approve, conditionally approve, reject or defer with recommendation for resubmittal, or reject each proposal. (See table 5 for an overview of funding recommendations provided by the board.) After the reviewers vote, they each provide scores for a proposal recommended for funding based on scientific merit. The scores are then averaged to provide a priority score for a proposal. Finally, the Clinical Science Director considers the priority scores of all the proposals in that review cycle and selects the proposals with the strongest priority scores for funding. According to VA officials, the number of proposals funded may vary depending on the budget. The VA/DOD Evidence-Based Practice Work Group, which is responsible for developing and updating all of VA’s CPGs, has a standardized and reproducible process to review all relevant research outcomes when developing or updating all CPGs, including the PTSD CPG. To develop or update a CPG, the VA/DOD Evidence-Based Practice Work Group identifies and assigns a group of VA and DOD clinical leaders and experts who are knowledgeable in the subject area to work on the CPG. Generally, the process to develop or update a CPG consists of the following steps. First, the assigned group of VA and DOD clinical leaders and experts identifies “clinical questions” that will be answered in the CPG. According to VA officials, clinical questions can be either broad or specific. For example, the 2004 PTSD CPG contained clinical questions regarding whether early intervention is more effective than later intervention, and whether certain interventions, such as different psychotherapies, are more effective than others. Second, in order to minimize bias, an external contractor conducts a systematic review of relevant research and selects and summarizes the most methodologically rigorous research studies that are applicable to each of the clinical questions. 
Third, after receiving summaries of the studies with the highest level of evidence, the VA and DOD group of clinical leaders and experts rates the research using an established grading scheme that considers the level of evidence of each research study—the scope and methodological rigor of an individual study; the overall quality of evidence—the overall quality of all of the research that addresses a particular clinical question, considering the level of evidence of all the studies considered; and the net effect of an intervention—according to the collective results of the studies considered, the intervention’s benefits minus the intervention’s harms. Finally, the assigned group of VA and DOD clinical leaders and experts assigns a grade to each evidence-based recommendation based on an assessment of the overall quality of evidence and the net effect of the intervention. (See app. IV for a detailed description of the process used to develop evidence-based VA/DOD CPGs.) The process for conducting a systematic review of research outcomes to develop or update a CPG is repeated as often as is deemed necessary by the VA/DOD Evidence-Based Practice Work Group according to its written procedures and designated time frames. According to VA/DOD Evidence- Based Practice Work Group documents, routine updates to the CPGs should ideally occur approximately every 2 years. However, updates to CPGs often do not occur every 2 years, and VA officials told us that some CPGs are updated more frequently than others based on availability of resources and priority areas. Additionally, VA officials reported that a CPG will be immediately updated if any evidence-based recommendation contained in it is identified as harmful to patients. According to VA, the VA/DOD Evidence-Based Practice Work Group approved an update to the 2004 PTSD CPG on October 25, 2010, and published the update on VA’s Web site on November 17, 2010. According to VA officials, the systematic process outlined above was used to review all relevant research outcomes and make evidence-based recommendations for PTSD services to both develop and update the PTSD CPG. According to VA officials, the decision to require that cognitive processing therapy and prolonged exposure therapy be made available to veterans diagnosed with PTSD at VA facilities—as indicated in the Handbook, which established certain requirements for mental health services within VA—was based on a review of research outcomes and the availability of existing resources. Review of research outcomes. According to VA, agency officials and qualified subject matter experts reviewed relevant research outcomes and the quality of the research to determine the most efficacious PTSD treatments available when determining which PTSD services to include in the Handbook and make available to veterans. Specifically, VA officials told us that their decision to include cognitive processing therapy and prolonged exposure therapy in the Handbook was influenced by the fact that both of these had been graded as level “A” treatments in the 2004 PTSD CPG (indicating that the intervention is always indicated and acceptable). Furthermore, VA officials said that these two therapies had greater evidence supporting their effectiveness than other PTSD services also graded as level “A” in the 2004 PTSD CPG. In addition, VA officials added that their decision was validated by the results of a VA- commissioned Institute of Medicine study published in 2008 that reviewed the evidence for existing PTSD treatments. 
According to VA, the study found that cognitive processing therapy and prolonged exposure therapy were considered efficacious treatments for PTSD. While the Institute of Medicine report was released after VA had already decided to include cognitive processing therapy and prolonged exposure therapy in the Handbook, VA officials explained that the Institute of Medicine report was the basis for the decision not to include other PTSD services in the Handbook. Availability of existing resources. VA officials told us that prior to issuing the Handbook in 2008, VA had already begun investing considerable resources to implement national training programs for cognitive processing therapy and prolonged exposure therapy in 2006 and 2007, respectively. VA officials said that they decided to implement the national training programs because VA realized the need to create sufficient capacity so that evidence-based PTSD treatments could be available to veterans throughout the VA system. VA explained that the national training programs were rolled out in advance of the Handbook’s issuance as part of the implementation of VA’s Comprehensive Veterans Health Administration Strategic Plan for Mental Health Services, which called for rapid implementation of evidence-based treatments. VA did this to ensure that it had the capacity to provide cognitive processing therapy and prolonged exposure therapy to all veterans with PTSD for whom these treatments were clinically appropriate. VA officials said that they were able to begin implementing national training programs for cognitive processing therapy in 2006 and prolonged exposure therapy in 2007 because VA had qualified instructors to administer the programs and money available to fund them. Unlike the written and standardized process that the VA/DOD Evidence- Based Practice Work Group established to develop CPGs, VA does not have a formal written process or framework to explain its decision for including cognitive processing therapy and prolonged exposure therapy in the Handbook. VA officials explained that they followed a process when choosing cognitive processing therapy and prolonged exposure therapy, but added that clinical decision-making processes are not typically expected to be documented in a formal manner. VA officials told us that they plan to assess the implementation of the Handbook and will update PTSD requirements in it as needed or as new information or unexpected obstacles arise in the future. VA officials stated that they are currently clarifying the language regarding some of the requirements, but do not plan to revise any of the requirements relating to PTSD services at this time. We provided a draft of this report to VA and received technical comments, which we incorporated into our report as appropriate. We are sending a copy of this report to the Secretary of Veterans Affairs. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. In addition to post-traumatic stress disorder (PTSD) research studies that are conducted by individual Department of Veterans Affairs (VA) investigators, or researchers, VA also funds a number of research centers or programs that conduct or support PTSD research. 
See table 6 for a description of these VA research centers and programs. Research proposals submitted to the Department of Veterans Affairs’ (VA) Merit Review Program are evaluated in merit review panels that each review proposals in a specific research topic area. Each merit review panel is comprised of panelists, typically associate-level professors, who are selected based on their expertise in the area. Panelists are responsible for scoring proposals based on scientific merit to provide funding recommendations. See figure 6 for a detailed description of the Merit Review Program’s process for reviewing research proposals and table 7 for the Merit Review Program’s scoring guidelines. Research proposals submitted to the Department of Veterans Affairs’ (VA) Cooperative Studies Program (CSP) are reviewed and scored by the Cooperative Studies Scientific Merit Review Board. Reviewers on the board are chosen based on their expertise in clinical or epidemiological research. They typically serve 4-year terms, and ad hoc members can be added depending on specific expertise that may be needed to review a proposal. According to VA, as of October 2010, there were six reviewers on the board. During the research proposal review process, the study team—which includes the lead researcher (referred to as the principal investigator) and a methodologist, such as a person with expertise in biostatistics—has an interactive discussion with the board regarding the proposal. Reviewers evaluate CSP proposals based on scientific merit and provide scores to reflect their funding recommendations. See figure 7 for a detailed description of the review process for CSP research proposals. In 1999, the Department of Veterans Affairs (VA) and the Department of Defense (DOD) formed the VA/DOD Evidence-Based Practice Work Group to issue joint VA/DOD clinical practice guidelines (CPG)—tools that provide guidance and evidence-based recommendations to clinicians regarding the most effective interventions and services for a variety of health care topics. To develop or update a CPG, the VA/DOD Evidence- Based Practice Work Group has a standardized process to ensure that systematic reviews of relevant research outcomes are conducted in order to formulate evidence-based recommendations for prevention, assessment, and treatment services. To develop or update a CPG, the VA/DOD Evidence-Based Practice Work Group identifies two clinical leaders—one from VA and one from DOD— who then help identify not more than 15 to 20 other experts in the subject area to form a “guideline working group.” A member of the VA/DOD Evidence-Based Practice Work Group is also selected to be an evidence chaperone for each CPG to ensure that conformity to prevailing standards for conducting high-quality systematic reviews is upheld. To determine the scope of the CPG, the guideline working group, the evidence chaperone, and a facilitator are responsible for identifying clinical questions that are to be answered by a systematic review of relevant research outcomes. According to VA officials, clinical questions can be both broad and specific. For example, the 2004 post-traumatic stress disorder CPG contained clinical questions regarding whether early intervention is more effective than later intervention and whether certain interventions, such as different psychotherapies, are more effective than others. 
According to VA, in order to answer these clinical questions, an external evidence center—an entity that conducts systematic reviews of research on a variety of topics—is contracted to collect and review all relevant research (including, but not limited to, VA- and DOD-sponsored research) to assess its applicability to each clinical question under consideration using explicit and reproducible methods. The evidence center then focuses its review on the best available research, that is, high-quality, methodologically rigorous studies that address health issues that impact VA and DOD populations and consider the effectiveness as well as the harms and benefits of the intervention at issue. According to VA officials, the evidence center provides summaries of only the best available research to the guideline working group for review. After receiving the summaries, the guideline working group reviews the research in sequential steps using an established rating scheme developed by the U.S. Preventive Services Task Force to formulate evidence-based recommendations. See figure 8 for an overview of the steps that the guideline working group uses to formulate evidence-based recommendations. Level of evidence. First, the guideline working group reviews the summaries to identify the level of evidence, or the level of methodological rigor. For example, research studies that have the highest quality are categorized as “I” (indicating at least one properly done randomized controlled trial), while research studies of the lowest quality are categorized as “III” (indicating that the research reflects the opinion of respected authorities, descriptive studies, case reports, and expert committees). (See table 8.) Overall quality of research. After determining the level of evidence of individual research studies, the guideline working group makes a determination regarding the overall quality of all of the research that addresses a particular clinical question. The overall quality takes into account the number, quality, and size of all of the individual research studies together as well as the consistency of the results between research outcomes to determine the collective overall strength of the research. Based on this review, the guideline working group determines the overall quality of the evidence to be good, fair, or poor. (See table 9.) Net effect of the intervention. For interventions that were supported by studies of “fair” or “good” overall quality, the guideline working group evaluates the benefits and the potential harms to determine the net effect of the intervention. The net effect of an intervention takes into account the benefits of the intervention minus the harms to determine the overall potential clinical benefit that the intervention may provide to patients. The net effect of the intervention ranges from “substantial” (meaning the benefit substantially outweighs the harm) to “zero or negative” (meaning it has no impact or a negative impact on patients). (See table 10.) Grade of evidence-based recommendation. In the final step, the guideline working group uses its assessment of the overall quality of the evidence and the net effect of the intervention to grade evidence-based recommendations. (See table 11.) In addition to the contact named above, Mary Ann Curran, Assistant Director; Susannah Bloch; Stella Chiang; Martha R. W. Kelly; Melanie Krause; Lisa Motley; Michelle Paluga; Rebecca Rust; and Suzanne Worth made key contributions to this report. 
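As a rough illustration of the sequential grading steps described in the appendix above (level of evidence, overall quality of evidence, net effect of the intervention, and recommendation grade), the following sketch encodes simplified versions of those categories. The roll-up rules and the grade lookup are hypothetical stand-ins for tables 8 through 11, not the VA/DOD Evidence-Based Practice Work Group's actual criteria, and the middle level of evidence ("II") is assumed here for completeness.

```python
# A minimal sketch of the sequential grading steps described above (level of
# evidence, overall quality of evidence, net effect, recommendation grade).
# The classes mirror the categories named in tables 8-10; the grade lookup
# below is a simplified stand-in for table 11, not the work group's actual mapping.
from dataclasses import dataclass

@dataclass
class Study:
    name: str
    level_of_evidence: str  # "I" (randomized trial), "II" (assumed middle level), "III" (opinion/descriptive)

def overall_quality(studies: list[Study]) -> str:
    """Toy roll-up: quality judged from the best level of evidence present.
    The real determination also weighs the number, size, and consistency of studies."""
    levels = {s.level_of_evidence for s in studies}
    if "I" in levels:
        return "good"
    if "II" in levels:
        return "fair"
    return "poor"

def recommendation_grade(quality: str, net_effect: str) -> str:
    """Hypothetical mapping from overall quality and net effect to a grade."""
    if quality == "poor":
        return "I (insufficient evidence)"
    if net_effect == "zero or negative":
        return "D (recommend against)"
    if quality == "good" and net_effect == "substantial":
        return "A"
    return "B"

studies = [Study("trial_1", "I"), Study("cohort_1", "II")]
quality = overall_quality(studies)
print(quality, recommendation_grade(quality, net_effect="substantial"))
```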
VA Health Care: Progress and Challenges in Conducting the National Vietnam Veterans Longitudinal Study. GAO-10-658T. Washington, D.C.: May 5, 2010.

VA Health Care: Status of VA's Approach in Conducting the National Vietnam Veterans Longitudinal Study. GAO-10-578R. Washington, D.C.: May 5, 2010.

VA Health Care: Preliminary Findings on VA's Provision of Health Care Services to Women Veterans. GAO-09-899T. Washington, D.C.: July 16, 2009.

DOD and VA Health Care: Challenges Encountered by Injured Servicemembers during Their Recovery Process. GAO-07-589T. Washington, D.C.: March 5, 2007.

VA Health Care: Spending for Mental Health Strategic Plan Initiatives Was Substantially Less Than Planned. GAO-07-66. Washington, D.C.: November 21, 2006.

VA Health Care: Preliminary Information on Resources Allocated for Mental Health Strategic Plan Initiatives. GAO-06-1119T. Washington, D.C.: September 28, 2006.

VA Health Care: VA Should Expedite the Implementation of Recommendations Needed to Improve Post-Traumatic Stress Disorder Services. GAO-05-287. Washington, D.C.: February 14, 2005.

VA and Defense Health Care: More Information Needed to Determine If VA Can Meet an Increase in Demand for Post-Traumatic Stress Disorder Services. GAO-04-1069. Washington, D.C.: September 20, 2004.
In addition to providing health care to veterans, the Department of Veterans Affairs (VA) funds research that focuses on health conditions veterans may experience. According to VA, experts estimate that up to 20 percent of Operation Enduring Freedom and Operation Iraqi Freedom veterans have experienced post-traumatic stress disorder (PTSD) and demand for PTSD treatment is increasing. Because of the importance of research in improving the services that veterans receive, GAO was asked to report on VA's funding of PTSD research, and its processes for funding PTSD research proposals, reviewing and incorporating research outcomes into clinical practice guidelines (CPG)--tools that offer clinicians recommendations for clinical services but do not require clinicians to provide one service over another--and determining which PTSD services are required to be made available at VA facilities. To do this work, GAO obtained and summarized VA data on the funding of PTSD research from its medical and prosthetic research appropriation through its intramural research program. GAO also reviewed relevant VA documents, such as those for developing CPGs and those related to VA's 2008 Uniform Mental Health Services in VA Medical Centers and Clinics handbook (Handbook), which defines certain mental health services that must be made available at VA facilities. GAO also interviewed VA officials. Based on VA data GAO obtained and summarized, GAO found that the amount of funding VA provided for intramural PTSD research increased from $9.9 million in fiscal year 2005 to $24.5 million in fiscal year 2009. From fiscal year 2005 through fiscal year 2009, intramural PTSD research funding ranged from 2.5 percent to 4.8 percent of VA's medical and prosthetic research appropriation. In addition, the number of PTSD research studies VA funded through the Merit Review Program and the Cooperative Studies Program (CSP)--VA's two primary funding mechanisms in its intramural research program--increased from 47 in fiscal year 2005 to 96 in fiscal year 2009. According to VA officials, intramural research proposals, including those on PTSD, are funded primarily according to scientific merit in both the Merit Review Program and CSP. Proposals are evaluated by a panel of reviewers and scored based on their scientific merit. Directors of VA's research and development services--offices that focus on different research areas and administer VA's intramural research program--fund proposals based on their scores, typically up to a specified percentile. The number of proposals funded may vary based on budgetary considerations and, for a small number of proposals, responsiveness to VA research priority areas. VA has a process to review and incorporate relevant research outcomes to develop CPGs for a number of topics, including PTSD. VA relies on the policies of a joint VA and Department of Defense (DOD) work group--comprised of VA and DOD officials--to ensure that systematic reviews of relevant research outcomes are conducted when issuing CPGs. In brief, a systematic review is conducted to identify the most methodologically rigorous research studies that are applicable to each clinical question contained in the CPG. 
A group of subject matter experts then assesses the individual research studies in order to determine the overall quality of evidence available for each particular clinical question, considers the potential benefits and harms of a clinical intervention to determine its net effect, and, based on an assessment of the overall quality of the evidence and the net effect of an intervention, develops recommendations for the CPG. According to VA officials, the decision to require that two PTSD services--cognitive processing therapy and prolonged exposure therapy--be made available at VA facilities by including them in the Handbook was based on a review of research outcomes and the availability of existing resources. Specifically, VA officials told GAO that these two services were strongly recommended in the 2004 PTSD CPG and had greater evidence supporting their effectiveness than other PTSD services. VA also told GAO that prior to the Handbook's 2008 issuance, VA had already begun investing resources in training programs for cognitive processing therapy in 2006 and prolonged exposure therapy in 2007. While VA provided some documentation regarding the decision-making process for PTSD services, VA officials explained that clinical decision-making processes are not typically expected to be documented in a formal manner. VA officials told GAO that they are currently clarifying language in the Handbook but do not plan to revise any requirements relating to PTSD services at this time. VA provided technical comments that GAO incorporated as appropriate.
According to testimony from the Assistant Secretary of Defense for Energy, Installations and Environment, the use of electricity, natural gas, and other utilities is a fundamental characteristic of the nearly 300,000 buildings that DOD owns and operates. These buildings reside on over 500 major installations in the United States and overseas, which provide effective platforms for the training, deployment, redeployment, and support for the military forces that provide for the country's defense. Installation utilities expenditures are included in the operations and maintenance budget request for Base Operations, and DOD spends a substantial amount of money on utility service. For example, according to DOD, the department spent $4.2 billion on facilities energy in fiscal year 2014.

DOD installations obtain utility services in a variety of ways, such as from commercial utility providers or on-site generation. For example, DOD installations typically acquire electricity and natural gas service through a public or private-sector utility provider. Installations also produce some of their own electricity through on-site power generation or through the use of renewable energy projects. For water and wastewater services, DOD maintains and operates wastewater and drinking water treatment facilities on many of its installations. DOD installations may also obtain potable water by purchasing it from a water utility provider as well as from fresh water sources such as wells and streams. In addition, DOD may contract with a local wastewater treatment facility to manage wastewater.

Within DOD, the military departments are responsible for installation management, with oversight by the Office of the Assistant Secretary of Defense for Energy, Installations and Environment, which reports to the Under Secretary of Defense for Acquisition, Technology and Logistics. The former office is responsible for—among other things—issuing facility energy policy and guidance to DOD components and coordinating all congressional reports related to facility energy, including the Energy Reports. In addition, each military department is responsible for developing policies and managing programs related to energy and utility management, and has assigned a command or headquarters to execute these responsibilities. At the installation level, the public works, general facilities, or civil engineering departments oversee and manage the day-to-day operations of the utilities.

DOD collaborates with various federal agencies to manage the security of crucial utility infrastructure on which DOD relies for utility service. Managing the security of the nation's critical utility infrastructure requires collaboration among government agencies, industry groups, and private companies. Various federal departments and agencies are designated as sector-specific agencies and play a key role in critical infrastructure security and resilience activities. Specific to the utilities that are the subject of this report, the Department of Energy is the sector-specific agency responsible for the energy sector. The energy sector includes the production, refining, storage, and distribution of oil, natural gas, and electric power, except for commercial nuclear power facilities. In addition, the Environmental Protection Agency is the sector-specific agency responsible for the water and wastewater sector.
The Department of Homeland Security, pursuant to Presidential Policy Directive 21, is to coordinate the overall federal effort to promote the security and resilience of the nation’s critical infrastructure from all hazards. For more information on GAO’s previous work examining federal efforts to protect critical infrastructure and recommendations we have made to improve these efforts, see appendix II. According to DOD’s April 2015 Cyber Strategy, the department will work with the Department of Homeland Security to improve cybersecurity of critical infrastructure to protect the U.S. homeland and vital interests from disruptive or destructive cyber attacks. In addition to its role in coordinating federal efforts to protect critical infrastructure, the Department of Homeland Security is responsible for leading efforts to protect the nation’s cyber-reliant critical infrastructures, which includes ICS. One of its means to do this is the Industrial Control System Cyber Emergency Response Team, which has been receiving reports about cyber incidents on federal and civilian ICS since 2009. Figure 1 shows reported cyber incidents in the energy, and water and wastewater, sectors since 2009. On DOD installations, ICS are associated primarily with infrastructure, and consist of computer-controlled electromechanical systems that ensure installation infrastructure services—such as utility service—are delivered when and where required to accomplish the mission. Examples include electric infrastructure, for which ICS control actions such as opening and closing switches; for water pipes, opening and closing valves; and for buildings, operating the heating, ventilation, and air conditioning systems. Thus, many DOD missions depend on the unfailing functioning of ICS and therefore on the security of those systems. Further, DOD’s ICS have become increasingly networked and interconnected with other DOD networks and thereby potentially at risk of cyber intrusion or attack. According to DOD’s April 2015 Cyber Strategy, DOD’s own networks and systems are vulnerable to intrusions and attacks. In addition to DOD’s own networks, a cyber attack on the critical infrastructure and key resources on which DOD relies for its operations could impact the U.S. military’s ability to operate in a contingency. DOD and selected installations reported utility disruptions for fiscal years 2012 through 2014; hazards and threats have the potential to cause utility disruptions, with operational and fiscal impacts. Section 2925 of Title 10 of the United States Code requires DOD to report to Congress on a number of facility energy requirements. One of the required reporting elements is to report on utility disruptions on military installations, including—among other things—the total number and location of utility outages on installations, their financial impact, and mitigation measures. This information is reported in DOD’s annual Energy Reports. DOD components, including the four military services, provide OSD with information on utility disruptions that occurred on their installations in a given fiscal year, which OSD compiles for reporting in the Energy Reports. According to DOD, the June 2013 and June 2014 Energy Reports contain information on disruptions that occurred in fiscal years 2012 and 2013, respectively; that lasted 8 hours or longer; and were the result of interruptions in external, commercial utility service. 
In its June 2013 Energy Report, DOD reported 87 disruptions and a financial impact of about $7 million for fiscal year 2012. In its June 2014 Energy Report, DOD reported 180 disruptions and a financial impact that averaged about $220,000 per day for fiscal year 2013. At the time of our data collection and analysis, DOD had not issued the Energy Report with utilities disruption data from fiscal year 2014. However, OSD had collected these fiscal year 2014 data from the military services. Figure 2 summarizes the information on the number of utility disruptions reported by the military services to OSD for fiscal years 2012 through 2014.

DOD's Energy Reports do not discuss specific examples of utility disruptions and their impacts on installation operations, in part because the statute does not require such examples. Thus, we decided to gather additional information on DOD utility disruptions caused by hazards from 20 installations we selected inside and outside of the continental United States. As reflected in the figures below, from fiscal year 2012 to fiscal year 2014, utility disruptions on installations in our sample varied in their frequency, duration, the type of utility service disrupted, and the ownership of the utility infrastructure affected. Figures 3 and 4 summarize information on disruptions lasting 8 hours or longer, occurring in fiscal years 2012 through 2014, and reported to us by 18 of the 20 installations in our sample; these 18 installations reported a total of 150 such disruptions. Figure 3 provides information on the type and duration of utility disruptions, and the owner of the utility infrastructure involved in the disruption. Figure 4 provides information on the number of disruptions experienced by installations in our sample.

Utility disruptions caused by hazards, such as mechanical failure and extreme weather events, have resulted in a number of serious operational and fiscal impacts. Further, both DOD and GAO have noted that climate change increases the likelihood of such events and that the department must be prepared for—and have the ability to recover from—utility disruptions that impact mission assurance on its installations. According to officials from the 20 installations we visited or contacted, examples of utility disruptions' impacts on installations' operations include the following:

In July 2013, two unusually strong thunderstorms downed power lines at Naval Air Weapons Station China Lake, California, causing electrical disruptions of 12 and 20 hours. The installation's missions include supporting the Navy's Research, Development, Acquisition, Test and Evaluation mission and providing Navy training capability. Because of these disruptions, the installation lost the ability to conduct 17 mission-related events, including 4 test events and 13 maintenance or training flights.

In October through December of 2010 and June of 2013, Vandenberg Air Force Base experienced electrical disruptions due to mechanical failures, resulting in several impacts on installation operations. For example, these disruptions led to key systems being unavailable for space launch operations. Specifically, the disruptions contributed to delaying the launch of one satellite by about 5 days and another by 1 day. In addition, the installation has experienced wildfires. Figure 5 shows fire-damaged utility infrastructure on Vandenberg Air Force Base.
In our May 2014 report on DOD's adaptation to climate change for infrastructure, we found operational impacts of climate change on installations' utility resilience. For example, according to DOD officials, the combination of thawing permafrost, decreasing sea ice, and rising sea level on the Alaskan coast has led to an increase in coastal erosion at several Air Force radar early warning and communication installations. Installation officials explained that this erosion has damaged a variety of installation infrastructure, including utilities.

According to our review of information provided by officials from the 20 installations we visited or contacted, the fiscal impact of utility disruptions can vary. Examples of fiscal impact include the following:

In late October and early November of 2012, storm surge from Hurricane Sandy destroyed potable water and wastewater utility infrastructure of a pier at Naval Weapons Station Earle, New Jersey. This damage resulted in a disruption of potable water and wastewater services to docked ships. Disruption of these utility services lasted about 1 month until—according to installation officials—the installation could contract to provide temporary potable water and wastewater services, with a variety of costs for the government. For example, according to an installation official, one contract to provide temporary utility service totaled about $2.8 million. Also, according to Navy documentation, the Navy has estimated that more than $23 million will be required to replace the destroyed infrastructure.

Vandenberg Air Force Base has also experienced disruptions of potable water utility service. For example, a November 2014 disruption of water used by a power plant that provides electricity to a launch pad had an estimated repair cost of $15,000. Figure 6 shows the repair of damaged potable water infrastructure on Vandenberg Air Force Base.

During unusually cold temperatures in January 2014, the utility company that provides natural gas service to the Army's Aberdeen Proving Ground, Maryland, implemented a curtailment agreement with the installation. Such agreements allow the utility provider to reduce service during periods of unusually high demand. However, due to mechanical failures, several of the installation's heating boilers were unable to switch from using natural gas to using fuel oil. As a result, the installation was not able to curtail its purchase of natural gas, and was fined almost $2 million by the utility provider.

In our May 2014 report on DOD's adaptation to climate change for infrastructure, we also found fiscal impacts of climate change on installations' utility resilience. For example, in 2013, Fort Irwin, California, experienced three power disruptions in a span of 45 days. Caused by extreme rain events that created flash flooding, each disruption lasted at least 24 hours. The disruptions limited the effectiveness of instrumentation used to track the training at the National Training Center and provide information used for after-action feedback. To increase future utility resilience, Fort Irwin requested more than $11.5 million for 31 backup generators. In our May 2014 report, we noted that weather-related fiscal impacts on infrastructure may increase in their frequency or severity due to climate change. If so, DOD's maintenance costs for these weather-related fiscal impacts are likely to increase.

Physical and cyber threats also have the potential to cause utility disruptions with impacts on installation operations.
According to DOD officials, while there are no known malicious physical acts that have caused utility disruptions on DOD installations lasting 8 hours or longer, such acts have the potential to cause utility disruptions, with resultant impacts on installation operations. For example, according to the Federal Bureau of Investigation and the Pacific Gas & Electric utility company, in April 2013 an individual or individuals cut fiber optic cables and fired over 100 bullets into 13 large transformers located at a California substation operated by the company, damaging the transformers. According to DOD officials, this incident did not result in disruption of electrical service at DOD installations. However, they explained that the incident is an example of the type of utility disruption threat posed by physical terrorism. In addition, based on our review of DOD documents and discussions with DOD officials, the department’s utility infrastructure is also under cyber threat. According to DOD’s April 2015 Cyber Strategy, the global proliferation of malicious code or software, called “malware,” increases the risk to U.S. networks and data. A variety of adversaries can purchase destructive malware and other capabilities on the black market. As cyber capabilities become more readily available over time, DOD assesses that state and nonstate actors will continue to seek and develop cyber capabilities to use against U.S. interests. Further, according to the March 2014 OSD memorandum discussed previously, DOD’s computer networks and systems—including ICS—are under “incessant” cyber attack and damage to or compromise of any ICS may be a mission disabler. For example, according to a briefing provided by an official from the United States Cyber Command, an adversary could gain unauthorized access to ICS networks and attack DOD in a variety of ways. United States Cyber Command officials explained that there are several categories of cyber threats involving a DOD installation’s ICS that have the potential to cause utility disruptions and resulting impacts on installation operations. The first category of cyber threats includes the removal of data from an ICS or a DOD network connected to an ICS. According to OSD’s March 2014 memorandum, a serious mission- disabling event could occur if an ICS was used as a gateway into an installation’s information technology system or possibly DOD’s broader information networks. The second category of cyber threats involves the insertion of false data to corrupt the monitoring and control of utility infrastructure through an ICS. In its March 2014 memorandum, OSD noted that disruption of a computerized chiller controller could deleteriously impact critical military operations and readiness. Figure 7 details an example of a potential cyber attack provided by Navy officials. The third category of cyber threats is the physical destruction of utility infrastructure controlled by an ICS. According to United States Cyber Command officials, this threat—also known as a “cyber-physical effect”— is the threat about which they are most concerned. This is because a cyber-physical incident could result in a loss of utility service or the catastrophic destruction of utility infrastructure, such as an explosion. According to one of the officials, an example of a successful cyber- physical attack through ICS was the Stuxnet computer virus that was used to attack Iranian centrifuges in 2010. Through an ICS, the centrifuges were made to operate incorrectly, causing extensive damage. 
DOD has a 5-month process to collect and report on utility disruption data, and uses these data in a number of ways. However, the department's collection and reporting of utilities disruption data are not comprehensive and some data are not accurate.

DOD undergoes an annual process to report on utility disruptions in its Energy Reports, collecting data required by Section 2925 of Title 10 of the United States Code—including utility disruption data—for the reports over a 5-month period. The overall process, with participation by installations, military service headquarters, and OSD, is detailed in figure 8.

According to our review of the June 2013 and June 2014 Energy Reports, other DOD documents, and discussions with an OSD official responsible for planning and implementing utility resilience activities, DOD uses the utility disruption data in a number of ways. First, DOD has analyzed these data to support a review of existing DOD guidance on power resilience at DOD installations that is presently informing the department's policy. Second, according to an OSD official, DOD can use the utilities disruption data as a baseline to establish trends that inform future strategic planning and policymaking. Further, the official explained that these are the only utility disruption data collected for the Energy Reports, and so are especially important to informing DOD's utility resilience efforts, noting that it is important for OSD decision making to be driven by analyzing data. Also, the official explained that analyses of the utility disruptions' average duration could inform decisions about which type of backup power infrastructure is the most cost-effective to install on installations. For example, if the average duration of a disruption is 2 to 3 days, individual generators may be the most cost-effective option. In contrast, if the average duration of a disruption is 7 days or longer, natural gas-powered plants located on installations may be the most cost-effective option. Third, DOD uses the utility disruption data collected from its installations to meet the requirement in Section 2925 of Title 10 of the United States Code to report to Congress on—among other things—the total number and location of utility outages on installations, their financial impact, and mitigation measures.

DOD instructions in a template used to collect utility disruption data from installations stipulate that installations should report on external, commercial utility disruptions lasting at least 8 hours. According to officials from the military service headquarters and OSD, they do not review installations' utilities disruption data to determine whether there are instances that meet the reporting criteria but are not included. Officials from three of the military service headquarters and OSD stated that, in fiscal years 2012 through 2014, there were installations that did not report on all disruptions that meet these criteria. By comparing the utility disruptions we identified through our independent research to those submitted by the military services to OSD, we confirmed cases of underreporting by installations from all four services, although our comparative analysis does not quantify the extent of underreporting. For example, in fiscal years 2012 and 2013, the Army did not report at least four disruptions, including a 1-week potable water main break at Camp Darby, Italy.
Also, in fiscal year 2012 the Navy and Marine Corps did not report at least eight disruptions, seven of which were multiday electrical disruptions that occurred as a result of the June 2012 derecho storm, including a disruption at Marine Corps Base Quantico. Thus, for fiscal year 2012, the number of unreported Navy and Marine Corps disruptions was more than double the number of disruptions they reported. In addition, for fiscal years 2013 and 2014, the Navy and Marine Corps did not report a total of at least four disruptions.

Further, according to instructions in the data collection template, installations are supposed to submit data only on external, commercial utility disruptions, not those associated with DOD-owned utility infrastructure, such as the mechanical failure of a DOD-owned transformer or a potable water pipe bursting. This results in underreporting of disruptions in DOD's Energy Reports. As noted above, at the 20 installations we visited or contacted, more than 90 percent of disruptions involved DOD-owned infrastructure. Specifically, for fiscal years 2012 to 2014, installations in our sample experienced almost 140 utility disruptions involving DOD infrastructure, which would not be captured in the Energy Reports. According to officials from multiple installations we visited or contacted, aging DOD-owned utility infrastructure contributes to utility disruptions. For instance, Kadena Air Force Base officials explained that "failing" DOD-owned utility infrastructure creates challenges to maintaining support to the installation's mission. The officials provided one example, noting that some wastewater pipes were cast in 1947 and have been in use for over 65 years. Kadena Air Force Base officials told us that, from 2011 to 2014, the installation experienced at least 40 disruptions of electrical, potable water, and wastewater utility services stemming from DOD-owned infrastructure that officials estimate lasted at least 8 hours.

DOD instructions in the data collection template also stipulate that installations should submit costs related to mitigating utility disruptions, such as the cost of generators or fuel on which generators run. In fiscal years 2012, 2013, and 2014, three of the four military services submitted disruption data to OSD that did not include information on mitigation costs. For 194 of those disruptions—or 48 percent of the 404 utility disruptions reported to OSD for that period—installations did not report mitigation costs. Because it is common for DOD installations to have backup generators that provide power during electrical disruptions—and an OSD official stated that the majority of reported disruptions are electrical—it is likely that installations reporting electrical disruptions also experienced costs associated with generators. For instance, Navy officials noted that almost every Navy installation has at least some generators that would run during a disruption and these generators consume fuel that would need to be replaced at a cost. Thus, it is likely that DOD underreported certain costs associated with disruptions, such as fuel costs for generators.

In addition to underreporting, our review of the fiscal years 2012 through 2014 utilities disruption data submitted by the military services to OSD and discussions with OSD officials show there were inaccuracies in duration and cost data on disruptions reported in DOD's June 2013 and June 2014 Energy Reports.
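The reporting criteria discussed above (disruptions of external, commercial utility service lasting at least 8 hours, with associated mitigation costs) can be illustrated with a brief sketch of how such criteria might be applied to disruption records. The records, field names, and values below are invented for illustration; they are not DOD data and do not reproduce DOD's data collection template. The average-duration summary at the end reflects the kind of analysis an OSD official said could inform backup power decisions.

```python
# Illustrative screening of hypothetical disruption records against the reporting
# criteria described above (external commercial utility service, duration of at
# least 8 hours), plus a simple average-duration summary of the kind an OSD
# official said could inform backup-power decisions. The records and field names
# are invented for illustration; they are not DOD data or DOD's actual template.
from statistics import mean

records = [
    {"base": "Base_A", "utility": "electric", "owner": "commercial", "hours": 20, "mitigation_cost": 4_500},
    {"base": "Base_B", "utility": "water",    "owner": "DOD",        "hours": 36, "mitigation_cost": None},
    {"base": "Base_C", "utility": "electric", "owner": "commercial", "hours": 5,  "mitigation_cost": 0},
    {"base": "Base_D", "utility": "gas",      "owner": "commercial", "hours": 12, "mitigation_cost": None},
]

# Apply the template's criteria: external commercial infrastructure, >= 8 hours.
reportable = [r for r in records if r["owner"] == "commercial" and r["hours"] >= 8]

# Flag reportable disruptions missing mitigation costs (a gap noted in the report).
missing_costs = [r["base"] for r in reportable if r["mitigation_cost"] is None]
share_missing = 100 * len(missing_costs) / len(reportable)

# Average duration, which could inform whether generators or on-installation
# plants are the more cost-effective backup option.
avg_hours = mean(r["hours"] for r in reportable)

print(f"reportable: {[r['base'] for r in reportable]}")
print(f"missing mitigation costs: {missing_costs} ({share_missing:.0f}% of reportable)")
print(f"average duration: {avg_hours:.1f} hours")
```

Note that the record involving DOD-owned infrastructure is screened out, which mirrors why disruptions of DOD-owned utility infrastructure do not appear in the Energy Reports.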
In regard to the duration of disruptions, three of the four military services included disruptions lasting less than 8 hours in the data they submitted to OSD. In total, the military services submitted 32 disruptions lasting less than 8 hours for fiscal years 2012 through 2014. However, according to an OSD official, the fiscal year 2012 and 2013 disruptions lasting less than 8 hours were included in the data reported in the June 2013 and June 2014 Energy Reports, constituting about 12 percent of the 266 disruptions DOD reported. Further, for fiscal years 2012 and 2013, a total of 104 disruptions were submitted with incomplete information on duration. Specifically, these disruptions lacked start and end times. According to our analysis of Air Force disruptions reported to OSD for fiscal year 2012 and OSD information on the number of Air Force disruptions reported in the June 2013 Energy Report, it is likely that these disruptions were included in the data reported in that report. Further, according to OSD officials, the Army disruptions were included in the data reported in the June 2014 Energy Report. The 104 disruptions without complete information on duration account for almost 40 percent of the 266 disruptions that DOD reported for fiscal years 2012 and 2013.

There were also inaccuracies regarding the cost of disruptions. As discussed above, DOD instructions in the data collection template stipulate that installations should submit direct costs related to mitigating utility disruptions, such as the cost of generators or fuel for them. The instructions also stipulate that indirect costs related to utility disruptions, such as an installation's lost productivity, should not be submitted. For fiscal year 2012, the Army submitted costs related to the disruption of electrical utility service at Fort Belvoir, Virginia, as a result of the June 2012 derecho storm. According to the Army's descriptions of these submissions, a total of $4.63 million was for indirect costs, specifically lost sales, spoiled inventory (e.g., food, medicine), or lost productivity. However, according to OSD officials, these costs were included in the data reported in the June 2013 Energy Report. This $4.63 million of inaccurately reported indirect costs accounts for 66 percent of the approximately $7 million in total costs reported by DOD for fiscal year 2012.

Based on our review of the fiscal year 2014 data submitted by the military services to OSD—and OSD's data validation efforts—the accuracy of DOD's data may be improving. For example, based on our review, the services' fiscal year 2014 data contained some inaccuracies, but there were fewer duration and cost inaccuracies than in the fiscal year 2013 data. Also, OSD's data validation documentation shows that OSD removed several inaccurate military service submissions before providing the final fiscal year 2014 data set to the Congress. However, challenges remain in the data collection instructions DOD provides to its installations and in the department's review and validation of data, which could hinder consistent improvement over time.

According to the Standards for Internal Control in the Federal Government, program managers need operational and financial information in order to determine whether they are meeting their agencies' plans and goals, and to promote the effective and efficient use of resources. Also, in previous work examining how DOD was meeting reporting requirements, we found that complete and accurate data are key to meeting such requirements.
In addition, in previous work examining—among other things—DOD’s efforts to effectively implement existing guidance, we found that clear and complete guidance is important to the effective implementation of responsibilities. The standards also emphasize the importance of accurately recording events. Further, according to the standards, managers should continually assess their processes to ensure the processes are updated as necessary. In addition, according to the Project Management Institute’s 2013 guide to project management, standard practices in program management include—among other things—reviewing a process on a regular basis to recommend changes or updates to the process. DOD’s underreporting of some disruptions that met the criteria laid out in DOD reporting instructions, and not including disruptions of DOD-owned utility infrastructure in the Energy Reports, are likely due to two factors related to instructions in DOD’s data collection template for installations. First, the underreporting of disruptions that meet DOD’s criteria is likely due to inconsistent guidance provided to installations. Specifically, headquarters officials from both the Marine Corps and Air Force stated that they provided verbal guidance to their installations to submit disruptions only if the disruptions met service-specific criteria different than those stipulated in DOD’s data collection template. For example, Air Force headquarters officials explained that, for collection of data for fiscal year 2014, they instructed their installations to submit disruptions only if they were not mitigated by back-up utility infrastructure, such as an electrical disruption mitigated by a generator. However, the data collection template does not instruct installations to limit their submissions based on these criteria. Also, based on our review, DOD’s instructions to installations place inconsistent emphasis on electrical and nonelectrical utilities and provide an unclear scope of the data to be submitted. For instance, the instructions begin by listing the electrical, water, and gas utilities on which the installation is supposed to report, but the instructions’ details refer only to disruptions in electrical power. Officials from several installations we visited found these instructions confusing. For example, officials from two of the installations stated that they did not submit information on potable water disruptions due to the confusing nature of the instructions. Second, the instructions in the data collection template stipulate that installations are to submit only external, commercial disruptions because—according to an OSD official—DOD decided to limit the scope of data collection and reporting to external, commercial disruptions. The official explained that when the statutory requirement to collect data on utility disruptions began in fiscal year 2012, DOD’s rationale was that almost all of the electricity used by its installations is provided by non- DOD entities such as external, commercial utility companies. As discussed above, the military service headquarters and OSD take various steps to validate utility disruption data submitted by the installations and military services, respectively, but the time and rigor they commit to reviewing the disruption data are limited, which could affect their comprehensiveness and accuracy. 
Specifically, according to officials from both the military service headquarters and OSD, the structure of the current process for collecting and reporting data in the Energy Reports gives relatively little time to validate the utilities disruption data. DOD officials explained that, out of the 5-month process for collecting and reporting these data, there are 3–4 weeks in which they review utility disruption data. Also, officials from certain military service headquarters explained that their review of installations' data looks for clear "outliers" or data that seem incorrect and that they rely on installations to provide accurate data on instances of commercial external utility disruptions and associated mitigation costs. In addition, OSD spends about 2 weeks reviewing all of the data required for the Energy Report, including the disruption data. OSD's validation efforts include questions for the military services that address individual items submitted by each service. According to an OSD official, the 2 weeks it has allotted to review all of the Energy Report's data means that it is difficult to verify installation-level information.

An OSD official and certain headquarters officials also explained that—in their limited time to validate all of the data included in the Energy Reports—they prioritize validation of other data types above their review of the utilities disruption data. These other types of data represent the 11 other categories of data that DOD is required to include in the Energy Report. According to certain military service headquarters officials, they prioritize validation of other data types because they feel OSD places a higher priority on other data, such as those related to DOD requirements or renewable energy projects. In our review of OSD's data validation of the military services' fiscal years 2013 and 2014 data for the Energy Reports, we found that a large majority of the questions are about types of data other than utilities disruption data. (As we discussed previously, our sample of 20 installations is nongeneralizable, and so we cannot assume that this trend applies to the universe of DOD's installations. However, the research conducted on these installations provides valuable insight for our study. For more information on our research methodology, see appendix I.)

These reporting gaps and inaccuracies have several implications. The cost data overstate the financial impact DOD is required to report, given that 66 percent of the costs DOD included for fiscal year 2012 were indirect costs. Because DOD used these data to support an existing utility resilience initiative and may use the data to inform future planning and policymaking, accurate data are especially important to informing DOD's utility resilience efforts. In addition, the limited collection and reporting of utilities disruption data in DOD's Energy Reports may hamper congressional oversight of DOD utility resilience actions.

The military services have taken actions and implemented a number of different pieces of DOD guidance to mitigate the risk of utility disruptions. In addition, the military services have begun planning for the implementation of DOD Instruction 8510.01, Risk Management Framework (RMF) for DOD Information Technology (IT), to generally mitigate the risk of cyber incidents on all DOD information technology systems and ICS, but face challenges in implementing this guidance for ICS.
Based on our review of DOD documents, and according to officials from installations both inside and outside the continental United States that we visited or contacted, installations have taken various actions to mitigate the effects of disruptions in electrical, potable water, wastewater, and natural gas utility service. Nineteen of the 20 installations we visited or contacted use backup generators to provide emergency power to certain facilities. For example, Marine Corps Base Camp Pendleton has about 158 facilities with active emergency generators that it utilizes during electrical disruptions. Further, the installation has identified a prioritized order for refueling, the goal of which is to keep the generators operating during emergency situations. At the locations we visited or contacted, installations have taken a number of actions to mitigate risk to potable water and wastewater utility service. For instance, at Wheeler Army Airfield, Hawaii, officials explained that—in the event of an electrical disruption disabling potable water pumps—the installation's potable water system is fed by water tanks, and certain pump stations have emergency generators. In addition, Vandenberg Air Force Base has a sewage pond that can store up to 3 days' worth of sewage in the event that the pipes leading to the treatment facility cannot be used. Installations have also developed contingency plans for access to potable water resources in addition to their primary source. Further, certain installations have upgraded their utility infrastructure in order to improve its resilience. According to Naval Weapons Station Earle officials, the potable water and wastewater infrastructure rebuilt after being destroyed by Hurricane Sandy is designed to be stronger and thus more resilient in the face of future extreme storms. Figure 9 shows both the damaged and repaired infrastructure. In addition, installations in our sample have taken steps to plan for emergency situations in which utility service could be disrupted. For example, the Naval Base San Diego, California, emergency management plan has an appendix that addresses potential disruptions in electrical, potable water, and wastewater utility service; includes planned response actions; and lists installation organizations responsible for certain actions. Also, according to officials at Tengan Pier and White Beach in Japan, both installations participate in emergency management exercises that provide them with the opportunity to focus on various utility disruption scenarios, such as an exercise that features a typhoon scenario. Finally, Joint Base Pearl Harbor-Hickam, Hawaii, has an emergency management plan that identifies all emergency resources available at the installation such as portable generators, portable pumps, generators providing power to other utilities (water production facilities, wastewater treatment plant, and lift stations), and information on emergency capabilities and assessment teams. The installations in our sample also are generally taking steps in response to DOD guidance related to utility resilience and have taken steps to mitigate the risk to installations posed by utility disruptions caused by both threats and hazards. According to military service headquarters officials, there are several pieces of DOD-wide guidance related to utility resilience. Table 1 summarizes selected DOD guidance and our analysis of implementation efforts by installations in our sample. Examples of actions taken by installations to implement this guidance follow the table.
Based on our review of DOD documents and discussions with officials at military service headquarters and installations, implementation efforts include actions such as preparing emergency response plans, conducting vulnerability assessments, and assessing the condition of utility infrastructure. For example, Aberdeen Proving Ground's emergency response plan identifies utility system vulnerabilities, emergency preparedness requirements, and remedial actions intended to mitigate the risk of potential utility service disruptions. Officials from several locations stated that their installations had undergone various assessments of the vulnerability of utility infrastructure to terrorist attack. Furthermore, officials from Naval Base San Diego and Naval Air Weapons Station China Lake stated that they were conducting a utility inventory and risk assessment, which would assess and rate the condition of the utility infrastructure and also document the consequences of its failure. In addition to mitigation actions and implementation of guidance taken at the installation level, DOD has undertaken a number of department-wide initiatives to enhance utility resilience. For example, in 2013, the Assistant Secretary of Defense for Energy, Installations and Environment directed a review of existing DOD guidance on power resilience at DOD installations. While reliable and continuous access to all types of utilities is important to DOD missions, OSD officials stated that they focused this review on power because other utility services may depend on—and many DOD missions specifically rely on—reliable access to power. Officials from the Office of the Assistant Secretary of Defense for Energy, Installations and Environment are currently reviewing the responses from the DOD installations, which were compiled and submitted by each military service, and developing recommendations for power resilience requirements. In addition, DOD has taken—or participated in—efforts to enhance department-wide cybersecurity of ICS. For instance, the United States Cyber Command and the Joint Test and Evaluation Program—under the Director, Operational Test and Evaluation, Office of the Secretary of Defense—initiated a collaborative effort in 2014 to develop a set of procedures to detect, mitigate, and respond to cyber incidents on DOD ICS perpetrated by advanced persistent threat actors, such as nation states. These procedures are intended to be employed by DOD installation personnel such as installation information technology managers and ICS facility engineers. An official from the command stated that the draft procedures will be tested at a joint exercise in June 2015 and that the command expects the procedures to be completed by December 2015. Also, according to our review of documents from the Department of Homeland Security and DOD—and discussions with officials from both agencies—DOD has undertaken efforts to better understand cyber threats to ICS that monitor and control DOD utility infrastructure on which DOD relies. In one example of such efforts, the Idaho National Laboratory—under the direction of the Department of Homeland Security and with participation from DOD—conducted the Aurora Test in 2007. This test demonstrated how catastrophic physical damage can be caused to utility infrastructure—in this case a diesel generator—from a remote location through an adversary's exploitation of vulnerabilities in the ICS used to monitor and control electrical substations.
After the test, the diesel generator was inspected and it was determined that it would not be capable of operation without extensive repairs or a complete overhaul. While not all generators are configured in the fashion of the Aurora Test, U.S. Cyber Command officials stated that the Aurora Test is applicable to DOD generators since some have the same equipment as that used in the Aurora Test, and that cyber methods can be used to misconfigure how this equipment operates, causing damage or destruction to the equipment. Figure 10 shows a still photo from a video of the Aurora Test. In addition to the guidance mentioned previously, DOD has developed guidance that addresses utility resilience with respect to the cybersecurity of ICS that control and monitor utility systems, and the military services have begun planning for its implementation. In March 2014, the department issued DOD Instruction 8510.01, which establishes the policy for a risk management framework for all DOD information technology, including ICS. DOD Instruction 8510.01 replaces the previous DOD policy for information assurance, the DOD Information Assurance Certification and Accreditation Process, which primarily addressed security related to information technology systems. According to officials, the former accreditation process required that the communication connection between an ICS and a DOD communication network be accredited. However, it did not require ICS to be certified and accredited. DOD officials stated it would be very rare for any organization to have conducted an assessment of the cyber vulnerabilities of an ICS on a DOD installation because—before DOD's adoption of DOD Instruction 8510.01—ICS had not been a focus of security assessments. For example, according to a Navy and Marine Corps document, currently most Navy and Marine Corps ICS have very little in the way of security controls and cybersecurity measures in place. According to a March 2014 DOD memorandum (Memorandum from the Acting Deputy Under Secretary of Defense for Installations and Environment, Subject: Real Property-related Industrial Control System Cybersecurity, Mar. 19, 2014), for the first time DOD is now requiring that ICS be made secure against cyber attacks by implementing the Risk Management Framework. To address the cybersecurity threats to ICS—discussed earlier in this report—DOD Instruction 8510.01 directs the DOD Chief Information Officer and the heads of each DOD component to oversee the implementation of the instruction. In addition, DOD Instruction 8510.01 states that DOD component heads must complete tasks such as conducting an impact-based categorization of existing ICS, assigning qualified personnel to risk management framework roles, and identifying and programming funding for the implementation in budget requests. According to DOD, by implementing DOD Instruction 8510.01, the military services will be able to identify vulnerabilities, adopt cybersecurity controls, and mitigate risks of cyber incidents on ICS that could cause potentially serious utility disruptions. Air Force, Navy, and Marine Corps officials stated that they have policies that assess the cybersecurity of ICS, but that the policies do not cover the requirements in DOD Instruction 8510.01. In addition, Navy headquarters officials stated that they issued draft guidance in February 2015, which, according to these officials, outlines the Navy's process for accreditation of ICS cybersecurity per requirements in DOD Instruction 8510.01.
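As a rough illustration of the inventory and impact-based categorization tasks described above, the following sketch shows one way an ICS inventory record could be structured and assigned an overall impact level. The field names and the simple highest-impact rule are assumptions made for illustration; the actual categorization procedure is defined in DOD Instruction 8510.01 and related federal guidance, not here.

```python
from dataclasses import dataclass

IMPACT_LEVELS = ("low", "moderate", "high")

# Illustrative inventory record; the fields mirror the kinds of information the
# report says an inventory must capture (what the system controls, what data it
# handles, and technical details such as its operating system). These field
# names are assumptions, not DOD's.
@dataclass
class ICSRecord:
    system_name: str
    installation: str
    controlled_utility: str        # e.g., "electrical", "potable water"
    operating_system: str
    network_connected: bool
    confidentiality_impact: str    # "low" | "moderate" | "high"
    integrity_impact: str
    availability_impact: str

def overall_impact(record: ICSRecord) -> str:
    """Simplified categorization: take the highest impact value across the three
    security objectives. The governing DOD and federal guidance defines the
    actual procedure, which may differ from this rule."""
    return max(
        (record.confidentiality_impact,
         record.integrity_impact,
         record.availability_impact),
        key=IMPACT_LEVELS.index,
    )

# Example: a generator control system whose loss of availability would disrupt power.
example = ICSRecord("Backup generator controller", "Base X", "electrical",
                    "embedded RTOS", True, "low", "moderate", "high")
assert overall_impact(example) == "high"
```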
Navy, Marine Corps, and Air Force officials stated that they are developing technical capabilities that will assist with the implementation of DOD Instruction 8510.01. For example, Air Force officials are developing a concept called ICSNet, which includes hardware and software designed to monitor ICS operations and provide intrusion-detection capabilities. Further, OSD officials stated that they are refining the Enterprise Mission Assurance Support Service tool, which manages certification and accreditation processes for DOD Instruction 8510.01, to better support ICS-specific requirements. The military services face three challenges related to their implementation of cybersecurity guidance for ICS: conducting an inventory of existing ICS, finding qualified personnel with the necessary skills to implement the cybersecurity requirements, and identifying funding needed to implement DOD Instruction 8510.01. According to military service officials, the services have not yet implemented DOD Instruction 8510.01, and transitioning to the instruction is a complex and difficult task. Evidence of this difficulty is that—according to officials from the office of the DOD Chief Information Officer—DOD revised the original time frames to transition to DOD Instruction 8510.01 because they were unachievable. Specifically, the original time frames required the military services to transition ICS without a current accreditation to DOD Instruction 8510.01 by September 2014, among other things. DOD's adjusted time frames allow the services until the second quarter of fiscal year 2018 to implement DOD Instruction 8510.01. According to Army officials, the adjusted time frames will allow the military services additional time to plan for the transition. However, even with the additional time, the services may be challenged to implement DOD Instruction 8510.01. Military service headquarters officials stated that they are still developing an inventory of their services' respective ICS. DOD Instruction 8510.01 requires that ICS be categorized based on their potential impact on an organization. As part of this categorization, it is necessary to inventory the ICS and collect information about each system, such as the type of information collected and maintained on the system and technical aspects of the system, such as the type of operating system used. Military officials we spoke with explained that an inventory of ICS is an important tool for managing the various types and locations of ICS on military installations. Navy officials explained that a complete inventory of ICS would help headquarters officials communicate information about updated security vulnerabilities to system owners. However, as of February 2015, none of the military services had a complete inventory of existing ICS. While each service is taking steps to obtain a complete inventory, the data collection process is challenging. For example, the Air Force is planning on issuing a data call to its installations in May 2015 and expects that the process will take 6 months to complete. Air Force officials stated that they are currently aware of 280 ICS across the Air Force and estimate that the total number of systems on active-duty Air Force bases is around 1,900. Marine Corps officials stated that they also issued a data call to their installations to collect information on the numbers and types of ICS, but the information that they received was only 80 percent complete.
Marine Corps officials explained that several challenges impeded their ability to collect the information. For example, officials stated that the management of ICS at the installation level is decentralized such that no one individual has visibility over all of the ICS on the installation. Navy officials stated that they have an inventory of about 18,000 ICS, which includes about 37,000 buildings. Officials stated that obtaining a complete list may be challenging without the authority to address all organizations on Navy installations. In addition, they stated that some tenants on Navy-operated installations do not wish to share information about their ICS. However, if the ICS owned by another service on a joint base—or by a tenant on a Navy base—is connected to a Navy network, it may be a cybersecurity risk to the Navy installation. Also, Navy officials stated that it is still unclear which organizations on Navy bases have the responsibility for these types of ICS, and that the Navy will need to overcome these challenges if it is to have a complete ICS inventory. Furthermore, officials from each military service stated that identifying personnel with the appropriate expertise will be a challenge due to a shortage of personnel with experience in both the operation and maintenance of ICS and in cybersecurity. DOD Instruction 8510.01 states that qualified personnel should be assigned to risk management framework roles. According to United States Cyber Command and military service headquarters officials, there are very few personnel who have both the cybersecurity technical skills and the skills to operate and maintain ICS. Specifically, the Navy does not have personnel with the expertise to determine the necessary cybersecurity controls for each ICS or to maintain the cybersecurity controls for the ICS once they are in place. Air Force officials stated that the most important issue related to implementation of DOD Instruction 8510.01 for ICS at the installation level is the lack of a qualified staff member assigned the responsibility for ICS cybersecurity. Moreover, officials also identified a lack of available training to provide personnel with the necessary skills. For example, Army and Navy officials stated that the DOD training and certification classes currently available are specific to information technology systems such as desktop computers, and not to ICS. The Marine Corps has begun providing training to a limited number of personnel, but had to use training provided by the Department of Homeland Security's Industrial Control System Cyber Emergency Response Team. Department of Homeland Security officials stated that they have limited capacity and are not funded or staffed to support the training needs of DOD. Military service headquarters officials also stated that there are several funding-related challenges to implementing DOD Instruction 8510.01, including that implementation may require significant resources and that the costs involved have not been fully identified. DOD Instruction 8510.01 states that it is DOD policy that resources for implementing the DOD Risk Management Framework must be identified and allocated as part of the Defense planning, programming, budgeting, and execution process. For example, a required aspect of implementation is identifying resources to remediate or mitigate vulnerabilities discovered through the assessment process.
According to some estimates provided by the military service headquarters officials, implementing DOD Instruction 8510.01 for ICS will require substantial resources. For example, Navy officials estimated that the Navy will need "billions of dollars" to secure ICS over what they characterized as the long term, 10 to 20 years, which involves developing a standardized approach that helps protect ICS and implementing updates to systems so that the systems are operating within current cybersecurity standards. According to the officials, this cost figure also includes all of the necessary training involved and the creation of new positions. In addition, Marine Corps headquarters officials estimate that the cost to implement DOD Instruction 8510.01 could range from $3.8 million to $4.2 million per year for the "first few years" of implementation. The officials stated that these costs include funding for the technical capability that is being developed in partnership with the Navy and for hiring contractor support to assess ICS against the cybersecurity standards. Further, military service headquarters officials explained that the military services have not yet programmed funding for implementation. For example, Army officials stated that they anticipate including $2.5 million in the fiscal year 2017-2021 budget request to be used in fiscal year 2017 to conduct an inventory of ICS; however, budget decisions have not yet been made for these budget years. Further, no funding is programmed for fiscal years 2015 and 2016. Navy officials stated that some tasks related to ICS cybersecurity have been funded using existing funds. For example, funds from the Navy Facilities Engineering Command's working capital fund were used to pay for some ICS cybersecurity assessments. However, the Navy has not yet specifically programmed funds to implement DOD Instruction 8510.01. In addition, military service officials stated that they have not fully identified the costs involved in implementing DOD Instruction 8510.01 and face challenges in identifying those costs. For example, Army and Marine Corps officials stated that it is difficult to develop an accurate estimate of resources needed to support the implementation of DOD Instruction 8510.01 without a complete inventory and prioritization of ICS, which is not yet complete. Specifically, Marine Corps officials stated that while they have developed an estimate, it is still just their "best guess" based on available information. Furthermore, Air Force officials explained that one element of the overall cost to implement DOD Instruction 8510.01 is the cost of the technical capability the Air Force is developing in order to implement the instruction. However, officials explained that they are still in the early stages of developing the capability and have not fully identified its costs. Officials explained that, without knowing those costs, they cannot estimate the overall cost to implement DOD Instruction 8510.01. Challenges with conducting an inventory of existing systems, identifying individuals with the necessary expertise, and programming and identifying funding to implement DOD Instruction 8510.01 may hamper the military services' abilities to plan for and execute the implementation of DOD Instruction 8510.01 by the March 2018 time frame.
For example, if the Air Force’s inventory is not completed until November 2015, it only has 28 months to transition an estimated 1,900 ICS to DOD Instruction 8510.01, which means that almost 70 ICS would need to be accredited each month to meet DOD’s time frames. In addition, given that there are three remaining fiscal years until DOD’s fiscal year 2018 deadline for fully transitioning to DOD Instruction 8510.01, the fact that the military services have not programmed for or fully identified transition costs means that the services may be at risk of not adequately funding key transition tasks. According to DOD’s April 2015 Cyber Strategy, because DOD’s capabilities cannot necessarily guarantee that every cyberattack will be denied successfully, the department must invest in resilient and redundant systems so that it may continue operations in the face of disruptive or destructive cyberattacks on DOD networks. Until DOD Instruction 8510.01 is implemented, DOD installations’ ICS remain vulnerable to exploitation because of a lack of cybersecurity controls. Vulnerabilities in ICS can be exploited by various methods causing loss of data, denial of service, or the physical destruction of infrastructure. For instance, as previously discussed, Stuxnet is an example of a computer worm, a method of cyberattack that can target ICS vulnerabilities. In 2010, Stuxnet targeted ICS used to manage centrifuges in an Iranian nuclear processing facility. According to DOD, the same type of ICS can be found in the critical infrastructure on numerous DOD installations. Without overcoming challenges related to completing inventories, acquiring and training personnel, and identifying and programming for funding, all of which are required under DOD Instruction 8510.01, the military services’ ICS may be vulnerable to cyber incidents that could degrade operations and negatively impact missions. To support its operational missions, DOD depends on reliable access to electrical, potable water, wastewater, and natural gas utility services on its installations. As events of the past few years have demonstrated, this access can be disrupted by hazards such as extreme weather and mechanical failures. These extreme weather events may be further exacerbated by the impacts of climate change. In addition, as we and DOD have noted, utilities are vulnerable to threats from physical and cyber terrorism. Given the possibility of disruptions that result in serious operational impacts, decision makers in DOD and Congress need reliable information on the actual scope of disruptions in order to exercise oversight and ensure that resources are available to take necessary steps at installations and across the department to increase resilience. Without guidance that clarifies the reporting requirements of installations— including the need to fully report on all types of disruptions, including disruptions of nonelectrical utilities—and requires the inclusion of disruptions to DOD-owned utilities, decision makers may lack a comprehensive understanding of the types of utility disruptions on DOD installations. In addition, DOD and the military services have the opportunity to take steps that could improve the comprehensiveness and accuracy of the data they collect, such as assessing the effectiveness of the current 5-month data collection process. 
Data that are more complete and accurate are important, especially given that DOD has stated that the utility disruption data it collects have been used to support ongoing and future plans for resiliency initiatives. As our report indicates, installations have taken steps to mitigate the impacts of disruptions and increase resilience, with infrastructure that provides redundancy and through the implementation of utility resiliency guidance. However, DOD and the military services face several challenges in supporting the department’s effort to implement its Risk Management Framework for ICS. We recognize that DOD is in the early stages of this effort and that it plans on full implementation. Full implementation is important, since cyber attacks on ICS can lead to the loss of operational data and disruption of utility service. As previously discussed, we have identified long-standing challenges with the government’s cybersecurity efforts. Without taking steps now to conduct an inventory of existing ICS, identify individuals with the expertise needed to implement DOD Instruction 8510.01, and program and identify resources for implementation, the military services risk future delays in their efforts to plan and execute the steps necessary to protect installation infrastructure from utility disruptions that could have direct operational mission impacts. In order to provide DOD and Congress with more comprehensive and accurate information on all types of utility disruptions, we recommend that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Assistant Secretary of Defense for Energy, Installations and Environment to take the following two actions to provide more consistent guidance to the installations: First, in guidance provided to their installations, the military services should clearly state that all disruptions lasting 8 hours or longer should be reported, regardless of the disruptions’ impact or mitigation. In addition, the military services and OSD should work together to revise the data collection template’s instructions, clarifying that disruptions in all four categories of utility service—electrical, potable water, wastewater, and natural gas—should be reported. Second, the military services and OSD should revise the data collection template’s instructions to include reporting of disruptions caused by DOD-owned utility infrastructure. Also, in order to improve the comprehensiveness and accuracy of certain data submitted by the military services to OSD and reported in the Energy Reports—such as potentially underreported data on mitigation costs and inaccurate data on both disruptions’ duration and cost—we recommend that the Secretary of Defense direct the Secretaries of Army, Navy, and Air Force, the Commandant of the Marine Corps, and the Assistant Secretary of Defense for Energy, Installations and Environment to work together to improve the effectiveness of data validation steps in DOD’s process for collecting and reporting utilities disruption data. For example, the military services and OSD could determine whether more time in the 5-month process should be devoted to data validation and whether equal priority should be given to validating all types of data included in the Energy Reports. 
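One way to pursue this recommendation without adding much to reviewers' limited time would be to automate routine checks before manual review, for example flagging records with missing durations, durations below the reporting threshold, or cost descriptions that suggest indirect costs. The sketch below illustrates that idea; it is not DOD's or the services' actual validation process, and the record fields and flag rules are assumptions.

```python
from typing import Dict, List

# Hypothetical submitted record; the services' and OSD's actual templates differ.
Record = Dict[str, object]

INDIRECT_COST_TERMS = ("lost productivity", "labor hours", "overtime")
REPORTABLE_UTILITIES = {"electrical", "potable water", "wastewater", "natural gas"}

def validation_flags(rec: Record) -> List[str]:
    """Return reasons a record should get reviewer attention; meant to illustrate
    checks that could be automated ahead of the limited manual review, not to
    replace that review."""
    flags: List[str] = []
    duration = rec.get("duration_hours")
    if duration is None:
        flags.append("missing duration")
    elif float(duration) < 8:
        flags.append("below 8-hour reporting threshold")
    cost_basis = str(rec.get("cost_description", "")).lower()
    if any(term in cost_basis for term in INDIRECT_COST_TERMS):
        flags.append("cost may include indirect costs")
    if rec.get("utility") not in REPORTABLE_UTILITIES:
        flags.append("utility type missing or unrecognized")
    return flags

# An incomplete record is flagged for follow-up with the installation.
print(validation_flags({"utility": "electrical", "cost_description": "Lost productivity"}))
# expected: ['missing duration', 'cost may include indirect costs']
```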
Further, in order to minimize the risk of delays in their efforts to implement DOD Instruction 8510.01, we recommend that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force; and the Commandant of the Marine Corps to address challenges related to inventorying existing ICS, identifying personnel with the appropriate expertise, and programming and identifying funding, as necessary. We provided a draft of this report to DOD and the Department of Homeland Security for review and comment; both departments provided technical comments that we considered and incorporated as appropriate. DOD provided written comments on our recommendations, which are reprinted in appendix III. In its written comments, DOD partially concurred with our first two recommendations (now combined as one recommendation), concurred with two recommendations, and non-concurred with one recommendation. DOD also stated that it did not agree with GAO’s analysis of the comprehensiveness and accuracy of the department’s reporting on utility disruptions in the June 2013 and 2014 Energy Reports. However, as discussed in this report, DOD’s collection and reporting of utilities disruption data are not comprehensive and some data are not accurate. For instance, in regard to comprehensiveness, we confirmed cases of installations in each military service that did not report on the commercial, external disruptions on which they are directed to report by DOD reporting guidance. Also, in regard to accuracy, there were inaccuracies in duration and cost data on disruptions reported by DOD. For example, more than 100 disruptions without complete information on duration account for almost 40 percent of the disruptions that DOD reported in the June 2013 and 2014 Energy Reports. Our first recommendation—aimed at providing DOD and Congress with more comprehensive and accurate information on all types of utility disruptions—originally appeared as two recommendations in the draft report provided to DOD for comment. Based on that draft, DOD partially concurred, asking us to consider combining the two recommendations, because they both impact DOD guidance. DOD’s suggested combination of our first and second recommendations—as written in the department’s response—meets the intent of the original two recommendations. Thus, we have combined them into one recommendation, and in subsequent conversations with DOD, an OSD official confirmed that the department concurs with the combined recommendation. DOD’s written responses did not provide information on the timeline or specific actions it plans to take to implement our recommendations. In regard to our recommendation originally appearing third—that OSD and the military services revise the data collection template’s instructions to include reporting of disruptions caused by DOD-owned infrastructure— DOD did not concur. The department stated that reporting on these disruptions provides a “low value proposition;” the data collected by the department for the Energy Reports is not being used to guide its strategic decisions; and collecting the data would be “onerous.” We disagree that collecting data on utility disruptions caused by DOD-owned infrastructure would be of low value. As discussed in the report, our research indicates that DOD-owned infrastructure, which DOD controls, may play a larger role in disruptions than indicated by the Energy Reports, which only address external, commercial disruptions involving equipment over which DOD has little control. 
For example, the installations we visited or contacted reported disruptions involving DOD infrastructure with significant impacts, such as delayed satellite launches at Vandenberg Air Force Base and almost $26 million in estimated repair costs at Naval Weapons Station Earle. In addition, DOD stated that the data we collected on utility disruptions caused by DOD-owned infrastructure only confirm trends in the data on external, commercial disruptions already collected by DOD. However, we continue to believe its Energy Reports may be missing a substantial number of disruptions by not including disruptions caused by DOD-owned infrastructure. Our analysis found that more than 85 percent of utility disruptions in our sample involved DOD-owned infrastructure on which DOD does not report in the Energy Reports. Further, the department stated that the utility disruption data it collects for the Energy Reports is not being used to guide strategic decisions. However, as previously discussed in our report, DOD has used utility disruption data collected for the Energy Reports to support a DOD- wide utility resilience initiative. This was a strategic-level decision, although based on limited information, since data on disruptions involving DOD-owned infrastructure were not collected for DOD’s annual reports. We believe that, if DOD takes actions to improve the comprehensiveness and accuracy of its utilities disruption data, the data could serve as a valuable tool in making additional well-informed utility resilience decisions. Collecting data on disruptions caused by DOD-owned infrastructure may give the department information on disruptions it has a greater ability to mitigate and DOD would have more complete information on which to make any future strategic decisions, such as the resiliency initiative discussed above. And, by collecting and reporting data on utility disruptions caused by DOD-owned infrastructure, the department would be giving Congress a more complete picture of disruptions on DOD installations. Finally, DOD stated that collecting data on disruptions caused by DOD-owned infrastructure would create an “onerous” reporting requirement that requires collection, review, and coordination across the department. However, DOD provided no evidence that collecting these additional data would be “onerous.” The installations we contacted were able to provide these data to us and DOD’s current data collection process already includes collection, review, and coordination across the department. In regard to our recommendations originally appearing fourth and fifth— regarding improvements in DOD’s process for collecting and reporting utilities disruption data and addressing challenges in implementing DOD Instruction 8510.01, regarding ICS—DOD concurred. However, DOD did not provide information on the timeline or specific actions it plans to take to implement our recommendations. DOD also requested that, in our recommendations, we remove references to the Marine Corps, because it is part of the Department of the Navy. In regard to the issues on which we made recommendations, the Marine Corps and Navy collaborate and take some shared actions, under the Department of the Navy. However, the Marine Corps and Navy also take actions that are specific to each military service. For example, the Marine Corps and Navy headquarters collect utilities disruption data from their installations through distinct processes and the two services have distinct plans for implementing DOD Instruction 8510.01. 
For this reason, we believe the recommendations are appropriately directed at the Marine Corps and Navy as separate military services. We are providing copies to the appropriate congressional committees; the Secretaries of Defense, Homeland Security, the Army, the Navy, and the Air Force, the Commandant of the Marine Corps, and the Assistant Secretary of Defense for Energy, Installations, and Environment. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine whether threats and hazards have caused utility disruptions on Department of Defense (DOD) installations—and if so—what impacts they have had, we reviewed various types of documents on utility disruptions and resulting impacts on installation operations. Examples of documents we reviewed include DOD and Department of Homeland Security assessments of utilities’ vulnerability to both hazards and threats, and DOD’s June 2013 and June 2014 Annual Energy Management Reports (Energy Reports). In addition, we interviewed or contacted officials from a nongeneralizable sample of 20 DOD installations from inside and outside the continental United States. To identify the installations for our sample, we took a number of steps. First, we reviewed military service data submitted to the Office of the Secretary of Defense (OSD) on utility disruptions that occurred on DOD installations from fiscal years 2012 to 2014 and lasted 8 hours or longer. According to our analysis of information provided by an OSD official, the military services account for about 87 percent of the utility disruptions reported to OSD for fiscal years 2012 to 2014. Because their installations account for a large majority of reported disruptions, we focus on the military services’ utility disruptions in this report. Because DOD’s data in its Energy Reports do not provide specific examples of disruptions and their impacts, we conducted independent research using publicly available information, such as news articles, the details of which we then asked officials from the military services to verify. We collected additional data on utility disruptions from 2005 to 2014 on installations inside and outside the continental United States, in order gather a large number of utility Next, we disruptions lasting 8 hours or longer, and their impacts.reviewed the military services’ data and the additional data we gathered, in order to select the 20 installations to include in our nongeneralizable sample. We selected installations based on whether the installations had more than one instance of utility disruption, or had a disruption of multiple types of utility service; and we chose installations from each military service. For installations inside the continental United States, we visited the sites, collected information in interviews, and gathered supporting documentation. For sites outside the continental United States, we collected written answers to the questions, along with supporting documentation. 
From the 20 installations, we gathered information on utility disruptions and their impacts; actions they had taken to mitigate such impacts; and implementation of selected pieces of DOD utility resilience guidance, discussed in more detail below. As discussed above, the installations in our sample provided information on utility disruptions lasting 8 hours or longer that occurred from 2005 to 2014. In our sample of 20 installations, 18 installations reported a total of 150 disruptions lasting 8 hours or longer that occurred in fiscal years 2012, 2013, or 2014; 2 installations reported disruptions lasting 8 hours or longer that occurred prior to fiscal year 2012. Although the information we collected was not representative of all installations, the selection of these installations provided valuable insights for our review. In addition, we assessed the reliability of all computer-generated data provided by the installations in our sample by reviewing existing information about the data and the systems that produced the data and by interviewing agency officials knowledgeable about the data to determine the steps taken to ensure their completeness and accuracy. We determined that these data were sufficiently reliable for the purposes of presenting the number and certain characteristics of utility disruptions, as reported by officials from installations in our sample. However, as noted in our report, we determined that the utilities disruption data reported by DOD in its June 2013 and June 2014 Energy Reports were not sufficiently reliable for the purpose of comprehensively or accurately presenting the total number, average duration, or cost of utility disruptions. Table 2 lists the installations we visited or contacted and their locations. To determine the extent to which DOD's collection and reporting of information on utility disruptions is comprehensive and accurate, we reviewed the statutory reporting requirement for the Energy Reports, compared the military services' data submissions for fiscal years 2012 through 2014 with information we collected from the installations we visited or contacted, and reviewed DOD's process for collecting and reporting on these data. DOD is statutorily required to report on—among other things—the total number and location of utility outages on installations. To respond to this requirement, the military services provide information to OSD. We reviewed the military services' submissions of utility disruption data to OSD for fiscal years 2012 through 2014, as well as the June 2013 and June 2014 Energy Reports in which DOD reported these data. We reviewed these two reports because, at the time of our review, DOD had not yet issued its June 2015 report. To identify the comprehensiveness of DOD's reporting, we compared the military services' data submissions to OSD with the independent research we conducted at the 20 installations in our sample, as described above. When comparing the data from our sample with the military service data submitted to DOD, we included only the 150 disruptions that occurred on the sample's installations from fiscal years 2012 through 2014. In addition, we reviewed DOD instructions on the data submissions that provide information to the military services on the scope and type of information the military services and their installations are supposed to submit to OSD. We then compared the services' submissions with these DOD instructions for the installations that provided these data.
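As a simplified illustration of this kind of comparison (not GAO's actual matching procedure), the following sketch flags disruptions that appear in an installation-level sample but are absent from a service's submission. Keying each disruption by installation, utility type, and start date is our simplifying assumption.

```python
from typing import Dict, List, Set, Tuple

# Each disruption is keyed by installation, utility type, and start date. These
# keys are a simplification for illustration; the actual comparison also weighed
# duration, cause, and other details reported by installations.
Key = Tuple[str, str, str]

def keys(records: List[Dict[str, str]]) -> Set[Key]:
    return {(r["installation"], r["utility"], r["start_date"]) for r in records}

def unreported(sample_records: List[Dict[str, str]],
               service_submission: List[Dict[str, str]]) -> Set[Key]:
    """Disruptions documented in the installation-level sample but absent from a
    military service's submission to OSD."""
    return keys(sample_records) - keys(service_submission)

sample = [{"installation": "Base X", "utility": "potable water", "start_date": "2013-07-01"}]
submitted: List[Dict[str, str]] = []
print(unreported(sample, submitted))
# expected: {('Base X', 'potable water', '2013-07-01')}
```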
Our comparison covered the 3 years the military services submitted data for DOD’s Energy Reports, fiscal years 2012 through 2014. Also, we reviewed documentation of OSD’s validation of the military services’ submissions. In addition, we met with officials at installations from our sample, the military services’ headquarters, and OSD to discuss how utilities data were collected, validated, and reported. We also discussed the data validation processes used by officials at both the military services’ headquarters and OSD. Further, to determine how DOD uses these utilities disruption data, we reviewed the June 2013 and June 2014 Energy Reports and met with officials at both the military services’ headquarters and OSD. Finally, we compared DOD’s processes for the collection, validation, reporting, and use of these data to several leading practices for the use and management of data and process improvement. Sources for these leading practices include the Standards for Internal Control in the Federal Government; our previous work that discusses improvement of infrastructure planning processes to better account for climate change impacts and improvement in the accuracy and completeness of data used to meet reporting requirements; and the Project Management Institute. To determine the extent to which DOD has taken actions and developed and implemented guidance to mitigate risks to operations at its installations in the event of utility disruption, we collected and reviewed DOD documents related to actions taken to mitigate risks, utility resilience guidance, and implementation efforts. We collected these documents from the 20 installations in our nongeneralizable sample and from the military service headquarters. To determine the extent to which DOD has taken actions to mitigate risks to operations at its installations in the event of utility disruptions, we reviewed documents such as those describing backup generators on installations and the refueling plans for those generators. We also reviewed documents describing installations’ plans for situations in which utility service is disrupted, to include emergency management plans. To determine DOD guidance related to utility resilience, we reviewed Defense Energy Program Policy Memorandum 92-1, DOD Instruction 2000.16, DOD Antiterrorism (AT) Standards (Oct. 2, 2006, incorporating change Dec. 8, 2006), DOD Instruction 4170.11, Installation Energy Management (Dec. 11, 2009), DOD Directive 3020.40, DOD Policy and Responsibilities for Critical Infrastructure (Jan. 14, 2010, incorporating change Sept. 21, 2012). In addition, we also reviewed documents related to the installations’ implementation steps, such as vulnerability analyses that cover all threats and hazards. In addition, we met with officials from our sample of installations, and from military service headquarters to discuss actions taken to mitigate risks of utility disruptions, identify guidance related to utility resilience, and to identify steps taken to implement the guidance. Furthermore, we collected and reviewed DOD documents and guidance related to cybersecurity of industrial control systems (ICS), which are often used to monitor and control utility infrastructure on DOD installations. Specifically, we reviewed DOD Instruction 8510.01, Risk Management Framework (RMF) for DOD Information Technology (IT) (Mar. 12, 2014). We reviewed documentation from OSD and the military services regarding cybersecurity of ICS, to include briefings and acquisition documents. 
We collected additional information from the Department of Homeland Security's Industrial Control System Cyber Emergency Response Team, to include documents describing common vulnerabilities of ICS. Also, we met with officials from the military services' and DOD's Offices of the Chief Information Officer, officials from the military services' headquarters offices, and OSD to discuss actions DOD had taken to begin implementation of DOD Instruction 8510.01 and challenges regarding implementation. Finally, we compared DOD's implementation actions to the implementation goals in DOD Instruction 8510.01. We conducted this performance audit from June 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Previous GAO work has examined the federal government's efforts to manage the physical security of the nation's critical infrastructure and the vulnerabilities of the systems that support critical infrastructure, including the commercial electric grid, to cyber attacks. In October 2009, we reported that DOD's most critical assets are vulnerable to electrical power disruptions, but that DOD lacks sufficient information to determine the full extent of its vulnerability. We recommended that DOD complete vulnerability assessments and develop guidelines for assessing the critical assets' vulnerabilities to long-term electrical power disruptions, among other things. In June 2011, DOD implemented this recommendation by updating guidance for the execution of vulnerability assessments and issued a timeline to ensure the accomplishment of tasks and to provide feedback to components on the status of actions, including electrical power-related risks and vulnerabilities. In May 2012, we reported that the Department of Homeland Security could better manage its security surveys and vulnerability assessments of critical infrastructure, including its approach to delivering this information to improve timeliness (GAO, Critical Infrastructure Protection: DHS Could Better Manage Security Surveys and Vulnerability Assessments, GAO-12-378, Washington, D.C.: May 31, 2012). Regarding potable water, in 2005, we found that community water systems faced obstacles in implementing security measures, including insufficient financial resources to implement security enhancements and determining how best to use available funds given competing priorities such as non-security-related infrastructure upgrades. We did not make any recommendations in this report. In regard to wastewater, we reported in 2006 that these facilities had made security improvements but that they had been limited, and that additional coordination between the Environmental Protection Agency and the Department of Homeland Security regarding initiatives to enhance wastewater facility security was needed. We recommended that these two agencies, among others, identify how to reduce overlap and duplication and how access to timely security threat information could be improved. The Environmental Protection Agency implemented this recommendation by updating the Water Information Sharing and Analysis Center, which improved access to timely and authoritative security threat information. In January 2011, we also reported on the vulnerabilities of the systems that support critical infrastructure, including the commercial electric grid, to cyber attacks.
Specifically, we identified several challenges to securing electricity systems and networks, including a lack of a coordinated approach to monitor industry compliance with voluntary standards, a focus by utilities on regulatory compliance instead of comprehensive security, and a lack of security features consistently built into systems. We made recommendations to the Federal Energy Regulatory Commission to address these challenges by periodically evaluating the extent to which utilities are following voluntary cybersecurity standards and developing strategies for addressing any gaps in compliance with these standards, among other things. While the Federal Energy Regulatory Commission agreed with these recommendations, they have not yet been implemented. Additionally, in December 2014 we reported that federal facilities' industrial control systems (ICS) are vulnerable to cyber attacks. Specifically, we reported that these ICS—used to control things such as heating, ventilation, air conditioning, and electronic card readers—are increasingly being connected to the Internet and their vulnerability to potential cyber attacks is also increasing. We found that the Department of Homeland Security had not developed a strategy that defines the problem; roles and responsibilities; necessary funds; and a methodology for assessing the cyber risk. We recommended that the Department of Homeland Security develop a strategy with these components to address the cyber risk to these ICS. The department concurred with this recommendation and stated that it will develop a strategy. In addition to the contact named above, Laura Durland, Assistant Director; Ben Atwater; Hilary Benedict; Carolynn Cavanagh; Peter Haderlein; Karl Maschino; Steven Putansu; Jeanett Reid; Amie Steele; Christopher Turner; Erik Wilkins-McKee; Michael Willems; and Gregory Wilshusen made key contributions to this report. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Federal Facility Cybersecurity: DHS and GSA Should Address Cyber Risk to Building and Access Control Systems. GAO-15-6. Washington, D.C.: December 12, 2014. Critical Infrastructure Protection: DHS Action Needed to Enhance Integration and Coordination of Vulnerability Assessment Efforts. GAO-14-507. Washington, D.C.: September 15, 2014. Maritime Critical Infrastructure Protection: DHS Needs to Better Address Port Cybersecurity. GAO-14-459. Washington, D.C.: June 5, 2014. Climate Change Adaptation: DOD Can Improve Infrastructure Planning and Processes to Better Account for Potential Impacts. GAO-14-446. Washington, D.C.: May 30, 2014. Information Security: Agencies Need to Improve Cyber Incident Response Practices. GAO-14-354. Washington, D.C.: April 30, 2014. Climate Change: Energy Infrastructure Risks and Adaptation Efforts. GAO-14-74. Washington, D.C.: January 31, 2014. Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2013. Critical Infrastructure Protection: DHS Could Better Manage Security Surveys and Vulnerability Assessments. GAO-12-378. Washington, D.C.: May 31, 2012. Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012. Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011. Cybersecurity: Continued Attention Needed to Protect Our Nation's Critical Infrastructure. GAO-11-865T.
Washington, D.C.: July 26, 2011. Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011. Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed. GAO-10-628. Washington, D.C.: July 15, 2010. Defense Critical Infrastructure: Actions Needed to Improve the Identification and Management of Electrical Power Risks and Vulnerabilities to DOD Critical Assets. GAO-10-147. Washington, D.C.: October 23, 2009. Information Security: TVA Needs to Address Weaknesses in Control Systems Networks. GAO-08-526. Washington, D.C.: May 21, 2008.
Continuity of operations at DOD installations is vital to supporting the department's missions, and the disruption of utility services—such as electricity and potable water, among others—can threaten this support. House Report 113-446 included a provision that GAO review DOD's and the military services' actions to ensure mission capability in the event of disruptions to utility services. This report addresses (1) whether threats and hazards have caused utility disruptions on DOD installations and, if so, what impacts they have had; (2) the extent to which DOD's collection and reporting on utility disruptions is comprehensive and accurate; and (3) the extent to which DOD has taken actions and developed and implemented guidance to mitigate risks to operations at its installations in the event of utility disruption. For this review, GAO evaluated DOD guidance and policies, interviewed appropriate officials, and visited or contacted 20 installations within and outside the continental United States, selected based on criteria to include those experiencing multiple disruptions, disruptions of more than one type of utility, and each military service. Department of Defense (DOD) installations have experienced utility disruptions resulting in operational and fiscal impacts due to hazards such as mechanical failure and extreme weather. Threats, such as cyber attacks, also have the potential to cause disruptions. In its June 2014 Annual Energy Management Report (Energy Report) to Congress, DOD reported 180 utility disruptions lasting 8 hours or longer, with an average financial impact of about $220,000 per day, for fiscal year 2013. Installation officials provided specific examples to GAO, such as at Naval Weapons Station Earle, New Jersey, where in 2012, Hurricane Sandy's storm surge destroyed utility infrastructure, disrupting potable water and wastewater service and resulting in almost $26 million in estimated repair costs. DOD officials also cited examples of physical and cyber threats, such as the "Stuxnet" computer worm that attacked the Iranian nuclear program in 2010 by destroying centrifuges, noting that similar threats could affect DOD installations. DOD's collection and reporting of utility disruption data are not comprehensive and contain inaccuracies, because not all types and instances of utility disruptions have been reported and there are inaccuracies in reporting of disruptions' duration and cost. Specifically, in the data call for the Energy Reports, officials stated that DOD installations are not reporting all disruptions that meet the DOD criteria of commercial utility service disruptions lasting 8 hours or longer. This is likely due, in part, to military service guidance that differs from instructions for DOD's data collection template. In its Energy Reports, DOD is also not including information on disruptions to DOD-owned utility infrastructure. There also were inaccuracies in the reported data. For instance, $4.63 million of the $7 million in costs reported by DOD in its June 2013 Energy Report were indirect costs, such as lost productivity, although DOD has directed that such costs not be reported. Officials responsible for compiling the Energy Report noted that utility disruption data constitute a small part of the report and that they have limited time to validate the data.
However, without collecting and reporting complete and accurate data, decision makers in DOD may be hindered in their ability to plan effectively for mitigating utility disruptions and enhancing utility resilience, and Congress may have limited oversight of the challenges these disruptions pose. The military services have taken actions to mitigate risks posed by utility disruptions and are generally taking steps in response to DOD guidance related to utility resilience. For example, installations have backup generators and have conducted vulnerability assessments of their utility systems. Also, DOD is in the planning stages of implementing new cybersecurity guidance, by March 2018, to protect its industrial control systems (ICS), which are computer-controlled systems that monitor or operate physical utility infrastructure. Each of the military services has working groups in place to plan for implementing this guidance. However, the services face three implementation challenges: inventorying their installations' ICS, ensuring personnel with expertise in both ICS and cybersecurity are trained and in place, and programming and identifying funding for implementation. For example, as of February 2015, none of the services had a complete inventory of ICS on their installations. Without overcoming these challenges, DOD's ICS may be vulnerable to cyber incidents that could degrade operations and negatively impact missions. GAO recommends that DOD work with the services to clarify utility disruption reporting guidance, improve data validation steps, and address challenges to implementing ICS cybersecurity guidance. DOD concurred or partially concurred with all but one recommendation and disagreed with some of GAO's analysis. GAO believes the recommendations and analysis are valid as discussed in the report.
The nation’s veteran job seekers receive employment and training services from programs overseen by two agencies within Labor—the Veterans’ Employment and Training Service and the Employment and Training Administration. General employment services fall under the purview of ETA, which administers the Wagner-Peyser-funded Employment Service program, providing a national system of public employment services to all individuals seeking employment—including veterans. Thus, those veterans considered job ready and not in need of intensive services may be served by Employment Service staff and receive such services as assessment, counseling, job readiness evaluation, and placement. ETA carries out its Employment Service program through workforce agencies in each state. In fiscal year 2006, the Employment Service program provided a total of about $716 million to states. While ETA administers programs that serve the general population, including veterans, VETS administers the DVOP and LVER programs, which focus exclusively on serving veterans, often providing more intensive services than the Employment Service does. Like ETA, VETS carries out its responsibilities through a nationwide network that includes representation in each of Labor’s six regions and staff in each state. The Office of the Assistant Secretary for VETS administers the agency’s activities through regional administrators and state directors. The DVOP specialists and LVER staff, whose positions are funded by VETS, are part of states’ public employment services. In fiscal year 2006, the DVOP and LVER programs were funded at about $155 million. In the most recent program year—program year 2005, which spanned July 1, 2005, to June 30, 2006—the Employment Service, together with the DVOP and LVER programs, reported serving about 1.32 million veterans nationwide, of whom over 715,000 were served by DVOP specialists and LVER staff. The Employment Service and the DVOP and LVER programs are mandatory partners in the one-stop system under WIA—where services are provided by a range of employment and training programs in a single location. Veterans, along with other eligible job seekers, may receive services from other mandatory one-stop partners, such as WIA-funded training or Trade Adjustment Assistance. Additionally, job seekers, including veterans, may use the one-stop centers’ computers and other resources without staff assistance, and in many places may access one-stop services online from home. Department of Veterans Affairs (VA) programs are not mandatory partners in the one-stop system, but do participate at some locations. In 2002, the Jobs for Veterans Act amended Title 38 of the U.S. Code, which governs the DVOP and LVER programs, and by doing so introduced an array of reforms to the way employment and training services are provided to veterans. JVA sought to address concerns that the programs were overly prescriptive by providing states with enhanced flexibility to determine the best way to serve veteran job seekers.
Among its reforms, JVA redefined the DVOP specialist and LVER staff roles but gave states flexibility in deciding their duties; established a single state grant and a new funding formula that allowed states to determine the mix of DVOP specialists and LVER staff; required a comprehensive performance accountability system consistent with WIA performance measures; required that veterans receive priority over other job seekers in all Labor job training programs, not just the Employment Service; and required that VETS include information in its annual report to Congress on employment services to veterans throughout the one-stop system. JVA identified broad roles and responsibilities of DVOP specialists and LVER staff. For example, DVOP specialists are to focus on providing intensive services to eligible veterans, giving priority to disabled veterans and those with other barriers to employment. LVER staff are to focus on conducting outreach to employers to assist veterans in gaining employment, as well as facilitating employment, training, and placement services given to veterans. State workforce agencies receive a single veterans’ program grant to fund both programs; the amount each state receives is determined in part by the size of the veteran population within each state. State agencies then decide how to distribute the amount they receive between the two programs. Table 1 lists selected responsibilities of DVOP specialists and LVER staff as set forth in Labor guidance. JVA also stipulated that veteran job seekers must receive priority over other job seekers in any job training program administered by Labor. Labor’s guidance requires states to explain how veterans will be given priority and how veterans’ services will be provided through the state’s one-stop system. For programs that target particular populations, such as seniors or low-income individuals, veterans’ priority is applied after any other mandatory eligibility provisions are met. Like other Labor employment and training programs, the DVOP and LVER programs have experienced changes both in the way outcomes are tracked and in the measures used to assess performance. Specifically, in 1998, WIA required that states use automated unemployment insurance wage records to track employment-related outcomes. Formerly, to obtain data on outcomes, states relied on a manual follow-up process using administrative records or contacts with job seekers. To conform to WIA, VETS moved from such a manual follow-up system to the new automated process in 2002. The measures that Labor uses to assess performance in the DVOP and LVER programs have also changed over time, gradually reflecting more emphasis on outcome-based measures. Before passage of the JVA in 2002, for example, some of the measures used for the DVOP and LVER programs focused more on services received—such as the number of veterans in training or receiving counseling—than on outcomes achieved. In 2002, JVA required that Labor develop a comprehensive performance accountability system and required that the new system measure performance in a way that is consistent with WIA. In 2003, VETS adopted performance measures for the DVOP and LVER programs based on those then used in WIA. In 2005, in response to an Office of Management and Budget (OMB) initiative, Labor began requiring states to implement common performance measures for its employment and training programs, including the DVOP and LVER programs, the Employment Service, and WIA. 
OMB established a set of common measures to be applied to most federally funded job training programs that share similar goals. Labor further defined the common measures for all of its Employment and Training Administration programs, applying three measures to each of its adult programs (see table 2). In applying the common measures to its programs, VETS also developed additional measures to emphasize outcomes for disabled veterans in the DVOP program and outcomes for recently separated veterans in the LVER program. Labor collects performance data for the DVOP and LVER programs on a quarterly basis from state workforce agencies. The state agencies use report formats developed by Labor to provide detailed tabulations of aggregate information on the characteristics of veteran participants, services, and outcomes for the two programs, including data showing states’ performance using the common measures. The state agencies provide this information to Labor in three separate reports: one for the DVOP program, one for the LVER program, and one representing an unduplicated count for both programs. Furthermore, Labor collects additional information on veterans who participate in other Labor programs. For example, ETA collects performance data for the Employment Service on all participants on a quarterly basis from state workforce agencies, and these reports break out services and outcomes for veteran participants. States submit their quarterly reports for the Employment Service and the DVOP and LVER programs through the same Labor reporting system. Information on the services a program has provided and the outcomes obtained by program participants is necessary to assess program impacts. However, this information is not sufficient to measure program impacts—the outcomes may be due to external factors such as local labor market conditions. While impact evaluations allow one to isolate a program’s effect on the outcomes of participants, there are several approaches to conducting such evaluations. The experimental method is often considered the most rigorous method for conducting impact evaluations. In the experimental method, participants are randomly assigned to two groups—one that receives a program service (or treatment) and one that does not (control group). The resulting outcome data on both groups are compared and the difference in outcomes between the groups is taken to demonstrate the program’s impact. However, it is not always feasible to use the experimental method for assessing program impacts. Alternatively, researchers may use a quasi-experimental approach in which program participation is not randomly assigned. One approach, often called a comparison group study, compares outcome data for individuals who participated in the program with data on others who did not participate for various reasons. In a comparison group study, it is important to find ways to minimize, or statistically control for, any differences between the two groups. According to OMB, well-matched comparison group studies, under certain circumstances, can approach the rigor of the experimental method, and it recommends considering this method if random assignment is not feasible or appropriate. Under WIA, Labor was required to conduct at least one impact evaluation of program services by 2005.
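To make the logic of the experimental method described above concrete, a minimal sketch follows. The sample sizes and employment rates are invented for illustration only and are not drawn from any Labor program data.

```python
import random
from statistics import mean

random.seed(0)

# Simulated outcomes: 1 = employed in the follow-up quarter, 0 = not employed.
# The 62 percent and 55 percent employment rates are assumptions for illustration.
treatment = [1 if random.random() < 0.62 else 0 for _ in range(500)]  # received the service
control   = [1 if random.random() < 0.55 else 0 for _ in range(500)]  # randomly assigned not to

# Because assignment is random, the two groups should differ only by chance,
# so the difference in mean outcomes estimates the program's impact.
impact = mean(treatment) - mean(control)
print(f"Estimated program impact: {impact:+.3f}")
```

In a comparison group study, the same difference would be computed only after matching or statistically adjusting for differences between participants and nonparticipants.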
In a 2004 report, we found that Labor had not yet begun such an evaluation, and recommended that the agency comply with this statutory requirement and help federal, state, and local policy makers understand what services are most effective for improving employment-related outcomes. The DVOP and LVER programs’ performance information is weakened by several factors, including implementation challenges and frequent changes to performance reporting requirements. In July 2005, Labor implemented new performance measures, which provide information on some outcomes for veterans. However, not all performance measures have been fully implemented. Additionally, neither the performance measures nor the data reported to Labor reflect the full range of services that DVOP specialists and LVER staff provide to veteran job seekers. Furthermore, it is difficult to assess outcomes over time or across states because of frequent changes in states’ reporting requirements that prevent establishing reliable trend data. In July 2005, the DVOP and LVER programs adopted the Office of Management and Budget’s common measures, along with other employment programs, including WIA and the Employment Service. Specifically, states implemented measures that track whether veterans obtain and keep jobs after receiving services through these programs, but they have not yet implemented a measure to track veterans’ earnings. States are held accountable for four separate measures in each program that focus on outcomes attained by veterans (see table 3). For the DVOP program, states are held accountable for employment and retention for all veterans served by the program, as well as for disabled veterans. For the LVER program, states are assessed on employment and retention for all veterans, as well as for recently separated veterans. Currently, all states collect and report data to Labor for calculating performance attainment and negotiating state goals for these eight measures. However, states are not yet held accountable for an additional common measure—veterans’ average earnings—in either the DVOP or the LVER programs. Other employment and training programs, such as WIA and the Employment Service, include an average earnings measure for which states are accountable. For the DVOP and LVER programs, however, calculating the average earnings was not as straightforward as Labor had anticipated. A VETS official told us that the agency will calculate baseline data for average earnings during the current program year, but Labor will not establish goals and states will not be held accountable for their performance on this measure until the following year—program year 2007—at the earliest. Furthermore, Labor has not adopted a system to give more weight to successful outcomes for veterans who have substantial barriers to employment, such as a disability. JVA required Labor to weight performance measures to provide special consideration to veterans requiring intensive services, as well as disabled veterans. Such a weighting system would compensate for the fact that veterans with barriers to employment may need more assistance than others in finding jobs. It would also provide an incentive for program staff to help veterans with severe barriers to employment. For example, if a veteran has a disability and requires intensive case management services, his or her successful outcomes would have a greater effect on a state’s overall performance than those of other veterans with fewer barriers. 
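As a rough illustration of how such a weighting system might work, the sketch below computes a weighted entered employment rate in which exits by veterans with barriers to employment count more heavily. The weights, groups, and records are hypothetical; Labor has not specified an actual weighting scheme.

```python
# Hypothetical weights: outcomes for veterans with greater barriers count more.
WEIGHTS = {"disabled": 2.0, "needs_intensive_services": 1.5, "other": 1.0}

# Hypothetical program exiters and whether each entered employment after exit.
exiters = [
    {"group": "disabled", "entered_employment": True},
    {"group": "needs_intensive_services", "entered_employment": False},
    {"group": "other", "entered_employment": True},
    {"group": "other", "entered_employment": False},
]

def weighted_entered_employment_rate(records, weights):
    """Each exiter contributes to the rate in proportion to his or her weight."""
    total_weight = sum(weights[r["group"]] for r in records)
    employed_weight = sum(weights[r["group"]] for r in records if r["entered_employment"])
    return employed_weight / total_weight

unweighted = sum(r["entered_employment"] for r in exiters) / len(exiters)
weighted = weighted_entered_employment_rate(exiters, WEIGHTS)
print(f"Unweighted rate: {unweighted:.2f}")   # 0.50
print(f"Weighted rate:   {weighted:.2f}")     # (2.0 + 1.0) / 5.5, about 0.55
```

Under weights like these, a successful outcome for a disabled veteran moves the state's rate more than a successful outcome for a veteran with fewer barriers.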
Following JVA’s enactment, Labor formed a work group to develop a weighting system for the DVOP and LVER performance measures. On the basis of the group’s work, the agency issued guidance to introduce the weighted measures to states in June 2003, with the expectation of implementing them soon after. However, after further review, a Labor official told us the agency did not implement the weights in order to give states time to fully implement other reporting changes. At this time, it is not clear whether Labor will implement this system in the future. Although DVOP specialists and LVER staff perform similar duties for all types of veterans in most states, the current performance measures hold the two programs accountable for different groups of veterans. JVA and Labor’s guidance outline the key responsibilities and target populations for DVOP specialists and LVER staff, but also allow for some flexibility in their roles and responsibilities. Both DVOP and LVER staff are expected to serve the general veteran population, but DVOP specialists are also expected to target their services toward veterans who have greater barriers to employment and need intensive case management, including disabled veterans. JVA specifies that LVER staff focus on conducting outreach to employers and assisting all veteran job seekers. In addition, Labor has recently added the expectation that LVER staff focus their responsibilities on assisting recently separated veterans. As a result of these expectations, Labor separately holds DVOP specialists accountable for the outcomes achieved by the disabled veterans they serve, and LVER staff for the outcomes of the recently separated veterans they serve. In practice, however, both programs’ staff serve similar veteran populations. In program year 2005, for example, 14 percent of veterans served by the DVOP program were disabled and 21 percent were recently separated. For the LVER program, 10 percent of veterans served were disabled and 19 percent were recently separated (see table 4). States acknowledged this similarity in our survey as well. Over a third of states responded that DVOP and LVER staff are equally likely to serve disabled veterans, while about half of states responded that the two programs’ staff are equally likely to serve recently separated veterans. In addition to finding similarity in populations served, we also found some similarity in activities carried out by DVOP and LVER staff. States reported that the three activities on which DVOP specialists spend the most time include providing intensive case management services, conducting an initial assessment or interview, and assisting with job search activities. The top three activities LVER staff perform include conducting outreach to employers, assisting with job search activities, and conducting an initial assessment or interview. This division of duties appears to reflect the different focuses of the two programs, as well as the flexibility under JVA for states to decide on staff duties. However, almost 85 percent of states responded that DVOP specialists conduct outreach to employers, a focus of the LVER program. Additionally, almost 60 percent of states responded that LVER staff provide intensive services, a primary focus of the DVOP program. In our site visits, we found that this similarity in staff roles and target populations exists in part because some one-stop centers have only a single DVOP specialist or LVER staff on duty at any given time. 
In these particular one-stop centers, the same employee is responsible for serving all groups of veterans and carrying out job roles for both programs. Even in centers with more than one staff person, veterans tend to be served by whichever staff person is available at that time. Program staff in several centers told us that recently separated veterans were not specifically directed to LVER staff for services, nor were disabled veterans directed to DVOP staff. This sharing of duties may be due, in part, to changes in staffing levels. More than half of states reported a decrease in the number of full-time DVOP specialists or LVER staff over the last 2 years, and most attributed this decline to the size of their state grant for the programs. Nevertheless, this similarity in roles and populations served causes the current performance measures to present an incomplete view of outcomes for disabled and recently separated veterans in the DVOP and LVER programs. The large numbers of disabled veterans served by the LVER program and recently separated veterans served by the DVOP program are not included in the set of measures that focus on the outcomes of those populations (see fig. 1). Beyond the measures for disabled and recently separated veterans, having separate measures for the DVOP and LVER programs obscures the overall picture of outcomes to veterans, given the similarity between many of the program activities and the reality of how the programs operate. According to our survey, almost half the states would like Labor to consolidate the performance measures for the DVOP and LVER programs. While the performance measures present an incomplete view of the outcomes for veterans, the data that states are required to report to Labor do not reflect the full range of staff services. Labor requires states to report a wide range of data for the DVOP and LVER programs, including information on veteran characteristics—such as age and disability status— and staff services provided—such as intensive services and referrals to other programs. However, Labor does not require data on employer outreach activities, despite JVA’s designation of employer outreach as a primary job responsibility of LVER staff. Consequently, Labor and states cannot formally monitor the extent to which staff perform this activity. Some states, however, collect these data for their own use. According to our survey, almost half of states currently collect employer-related information for the DVOP and LVER programs, and over 75 percent of states reported that it would be helpful to collect these data. In addition, even though the data reported to Labor generally reflect services and outcomes for veterans, these data are aggregate tallies and do not show services provided to individual veterans. For example, each state’s quarterly reports reflect the sum of all services provided and do not show the number of services provided per veteran or per staff person. The current data are useful to provide an overall picture of the programs’ volume and operations. However, these data provide little information about services received by individual veterans or delivered by particular veteran staff. In recent years, reporting requirements for the DVOP and LVER programs have undergone several significant changes. These changes have moved the performance accountability system closer to those of other employment and training programs. At the same time, the changes have resulted in a lack of reliable trend data. 
In July 2002, the DVOP and LVER programs changed from using administrative follow-up to determine veterans’ employment outcomes to obtaining information from Unemployment Insurance (UI) wage records. In doing so, Labor changed its method of calculating outcomes for veterans in the DVOP and LVER programs. Then, in July 2005, Labor applied the common measures to these two programs, refining and standardizing the application of UI wage records to determine outcomes. Under the old system, Labor calculated entered employment and employment retention rates based on the number of veterans who participated in the programs. However, under the new system, Labor calculates these rates based on how many veterans terminate services and exit the programs. Although these changes have standardized the performance measures across programs, they have also prevented Labor and states from developing consistent, comparable data over the past 5 years. As a result, Labor does not have reliable historical data for either program. Figure 2 illustrates the various changes to the DVOP and LVER programs’ performance reporting requirements. Furthermore, the instability in data collection and reporting has left Labor unable to establish a national veterans’ entered employment standard, as required by JVA. Labor anticipates that it will need at least 3 years of stable data to establish the national standard. Once it is established, all states will be held accountable to the same minimum goal for veterans’ entered employment. However, it is unclear when Labor will have sufficient data to establish this standard because states continue to experience difficulty adjusting to the numerous changes. According to our survey, over 70 percent of states reported that frequent changes to performance reporting requirements have been either a great or very great challenge. The data also vary somewhat state by state. For example, the application of wage records to calculate veteran outcomes across state lines is no longer consistent across states. The Wage Record Interchange System (WRIS) allows states to share UI wage records and account for job seekers who participate in one state’s employment programs but get jobs in another state. In recent years, all states but one participated in WRIS, which was operated by the nonprofit National Association of State Workforce Agencies. In July 2006, Labor assumed responsibility for administering WRIS. However, many states have withdrawn, in part because of a perceived conflict of interest between ETA’s role in enforcing federal law and the states’ role in protecting the confidentiality of their data. As of March 2007, only 30 states were participating in the program, and it is unknown if and when the other states will enter the data-sharing agreement. As a result, DVOP and LVER performance information in almost half the states will not include employment outcomes for veterans who found jobs outside the states in which they received services. In addition, other reasons contribute to data variation by state. Labor allows states flexibility in choosing data collection software, which has resulted in some states adapting more quickly than others to the recent changes, depending on their software capabilities. Several Labor officials told us that because of differences in software capabilities, some states’ data may be more reliable than others’.
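A simplified sketch of the denominator change described above follows; the counts are hypothetical and ignore other differences between the old and new calculations.

```python
# Hypothetical counts for one program year.
participants = 1_000        # veterans who received services during the year
exiters = 400               # veterans who terminated services and exited
employed_after_exit = 300   # exiters employed in the first quarter after the exit quarter

# Old-style rate: based on all program participants.
participant_based_rate = employed_after_exit / participants   # 0.30

# Common-measures-style entered employment rate: based on exiters only.
exiter_based_rate = employed_after_exit / exiters             # 0.75

print(f"Participant-based rate: {participant_based_rate:.0%}")
print(f"Exiter-based rate:      {exiter_based_rate:.0%}")
```

Because the two rates rest on different denominators, a year calculated one way cannot be compared directly with a year calculated the other way, which is why the changes broke the trend data.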
Labor’s data on veteran job seekers paint an unclear picture of their use of employment and training services in the one-stop system, despite the shared use of common performance measures across programs. Although many veterans use employment services other than those provided by the DVOP and LVER programs, key employment programs vary in how well their data on veteran participants are integrated or shared with other programs. As a result, many states may not know how many veterans use one-stop services. In addition, statutory differences in the way veterans are defined for purposes of program eligibility make it difficult to standardize data across employment programs. Moreover, Labor has no means of assessing whether priority of service for veterans has been implemented in various employment programs. Many veteran job seekers receive employment services from the DVOP and LVER programs. However, some veterans—often the more job-ready—only use one-stop services aimed at the general population, such as the Employment Service and WIA programs. In addition, some veterans use services focused on other subsets of job seekers—such as TAA (see fig. 3). As a result, performance information on many veterans is collected and reported elsewhere in the one-stop system. In fact, 20 states reported that about half or fewer of veteran job seekers who access employment programs receive services from a DVOP specialist or LVER staff, according to our survey (see fig. 4). In addition, some veterans obtain services from more than one employment program in the one-stop system, all of which use the common measures to assess their performance. Performance data on veteran job seekers are well integrated or shared across some key employment and training programs, but not others, despite the mutual use of common measures. As a result, many states may not know how many veterans they serve through the one-stop system. Data on veterans who access the Employment Service are completely integrated with data from the DVOP and LVER programs—they share the same reporting system, and DVOP and LVER data are a subset of Employment Service data. According to our survey, veteran job seekers in most states receive initial assistance from the Employment Service when they access the one-stop system. If they are subsequently referred to the DVOP and LVER programs, all of their information is housed in the same system and an unduplicated count of veterans served between these programs can be obtained. In addition, states are held accountable for meeting separate goals in the Employment Service for veterans and disabled veterans (see app. III). Labor considers these measures to reflect veterans’ outcomes for the entire one-stop system, as they constitute outcomes for all veterans who access the Employment Service, DVOP, and LVER programs. Furthermore, they are the best approximation of a total count of veterans who access the one-stop system that the current data will allow. On the other hand, data on veterans served by other one-stop programs are not well integrated. States report data to Labor on WIA participants who exit the programs, including veterans, using the Workforce Investment Act Standardized Record Data (WIASRD) system. Although WIASRD contains sufficient information to produce separate veteran outcome data for WIA programs, states are not required to produce separate veteran reports and are not accountable for meeting veteran goals in those programs.
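The unduplicated count mentioned above is, at bottom, a record-matching problem. The sketch below shows the idea using hypothetical participant identifiers and program files; it does not represent the actual reporting systems.

```python
# Hypothetical sets of veteran participant identifiers drawn from two program files.
employment_service_veterans = {"V001", "V002", "V003", "V004"}
wia_veterans = {"V003", "V004", "V005"}

# Adding the two program counts double-counts veterans served by both programs.
duplicated_total = len(employment_service_veterans) + len(wia_veterans)   # 7

# Matching records on a shared identifier yields an unduplicated count.
unduplicated_total = len(employment_service_veterans | wia_veterans)      # 5

print(f"Sum of program counts: {duplicated_total}")
print(f"Unduplicated count:    {unduplicated_total}")
```

Without such matching, a veteran served by both programs is counted twice.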
In addition, fewer than half the states reported that they routinely match WIA and Employment Service records to attain an unduplicated count of veterans served by those programs. Consequently, veterans who access two different employment services may be counted twice in some cases. Data for TAA participants are reported to Labor in yet another data system, which does allow states to report on the veteran status of participants, but Labor officials told us the agency does not currently use veteran outcomes from that program for any purpose. VETS does not include the veteran outcome data from WIA or TAA in its annual report to Congress, and Labor officials told us they are exploring ways to better use the data. In addition, data are not always collected on job seekers who use Employment Service or WIA resources without assistance from program staff. These self-assisted job seekers—including veterans—access services such as labor market or career information either in one-stop centers or on home computers, but do not receive active assistance from program staff. Historically, some states have collected information on these job seekers, while others have not. In our survey, 73 percent of states reported that they capture information on all veterans who receive self-assisted services through the Employment Service, while 82 percent of states reported doing so for all veterans who receive self-assisted WIA services. Labor has encouraged—but not mandated—states to collect information on this group of job seekers, but agency officials acknowledged that states continue to vary in how they report such data. Labor officials have expressed concern that requiring veterans who receive self-assisted services to register for the programs might discourage some of them from pursuing the services they need. Labor and some state officials we surveyed reported that statutory differences in the definitions of veterans for various employment programs make it difficult to standardize data across programs. For the purposes of the DVOP and LVER programs, an eligible veteran is statutorily defined as an individual who served on active duty for more than 180 days. Labor also uses this definition for the Employment Service. WIA, on the other hand, does not specify a length of time in service for a person to be considered a veteran. Moreover, to qualify as a recently separated veteran in the DVOP and LVER programs, a veteran must have left active duty in the last 3 years. By contrast, WIA defines recently separated as having left active duty in the last 4 years (see table 5). These inconsistent definitions have been difficult for Labor and states to reconcile with the concept of seamless service delivery and have caused some confusion for states as they implement priority of service throughout the one-stop system. While JVA requires that veterans receive priority over other job seekers in Labor-funded employment and training programs, it does not define a veteran for purposes of the priority requirement. Labor has interpreted JVA’s provisions to mean that while veterans are to receive preference in the programs after any other statutory eligibility requirements are met, each program must use its own statutory definition of a veteran in applying that preference. Labor officials told us that one state applied for a waiver in 2006 to use a single definition of veterans for all of its employment and training programs, but Labor’s Solicitor’s Office orally denied the request. 
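To show how the statutory differences summarized above can classify the same individual differently, here is a minimal sketch. The functions, field names, and example record are hypothetical; only the thresholds (more than 180 days of active duty, and 3 versus 4 years since separation) come from the definitions described in this report.

```python
from datetime import date

def is_veteran_dvop_lver(days_active_duty: int) -> bool:
    # DVOP/LVER (and Employment Service): more than 180 days of active duty.
    return days_active_duty > 180

def is_veteran_wia(days_active_duty: int) -> bool:
    # WIA: no minimum length of active-duty service specified (simplified here).
    return days_active_duty > 0

def is_recently_separated(separation: date, as_of: date, years: int) -> bool:
    # Approximate the look-back window in days.
    return (as_of - separation).days <= years * 365

record = {"days_active_duty": 120, "separation": date(2003, 6, 1)}
as_of = date(2007, 1, 1)

print("DVOP/LVER veteran:", is_veteran_dvop_lver(record["days_active_duty"]))  # False
print("WIA veteran:", is_veteran_wia(record["days_active_duty"]))              # True
print("Recently separated (DVOP/LVER, 3 years):",
      is_recently_separated(record["separation"], as_of, 3))                   # False
print("Recently separated (WIA, 4 years):",
      is_recently_separated(record["separation"], as_of, 4))                   # True
```

The same hypothetical job seeker counts as a veteran for WIA but not for the DVOP and LVER programs, which is the kind of mismatch that complicates referrals and data standardization.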
In our survey, approximately half of all states reported that the conflicting veteran definitions in various employment programs complicate data entry, referrals to other programs, and the implementation of priority of service. In addition, about a third of the states claimed that the definitions created gaps in services for veteran clients as they moved among employment programs (see fig. 5). For example, if a veteran receives services from WIA and is subsequently referred to the DVOP program but is found ineligible, he or she may become discouraged and stop seeking services altogether. Almost half of states shared their concerns about different definitions by providing additional comments in our survey, many of which cited the difficulty of providing priority of service under these circumstances. For example, one state responded that different definitions often lead to inappropriate referrals, resulting in poor customer service and frustration for program participants and service providers. Other states focused on the burden that competing definitions placed on data collection and reporting. For example, one state responded that the issue has made it difficult to integrate the state’s Employment Service and WIA data systems, because the different definitions could lead to invalidating the veteran numbers on reports for those programs. Another state cited the difficulty in assessing how many veterans were served by the state, highlighting the complexity of producing an unduplicated count of veterans served by different programs that do not share a single definition. States also cited challenges in dealing with other agencies that are not mandated partners in the one-stop system. For example, two states mentioned that some staff of other agencies’ programs may hesitate to refer participants to the DVOP and LVER programs because they are unsure about participant eligibility. An expert on veterans’ issues in the states concurred that the different eligibility criteria for veterans have been a problem for states and told us that a common veteran definition for employment and training programs would be an improvement. Despite JVA’s mandate, Labor has not produced information on the extent to which veterans receive priority of service in all qualified employment and training programs. Specifically, JVA required Labor to evaluate and report on whether veterans are receiving priority of service and are fully served by its employment programs, as well as whether the representation of veterans in such programs is in proportion to their participation in the labor force. In its fiscal year 2005 report, Labor stated that the participation rate for veterans in its adult programs was approximately 8.4 percent—slightly higher than veterans’ participation rates in the U.S. workforce. In addition, the agency reported that outcomes for veterans served in these programs closely mirrored those of all job seekers in the programs. However, Labor has no method of gauging how—and how consistently—priority of service is actually applied. Labor officials told us that the highly devolved workforce development system makes it very difficult to evaluate priority of service, because different programs have multiple access points and diverse eligibility criteria that prevent Labor from applying a simple measurement technique to each. States reported that implementing priority of service has been challenging, as has holding one-stop partner programs accountable for serving veterans.
To supplement federal guidance on this issue, at least one state has developed its own guidance for implementing and measuring priority of service. Some Regional Directors of VETS told us they encouraged the use of that state’s guidance as a model for assessing priority of service for states in their own regions. We do not know when Labor will develop further guidance on the issue. However, in December 2006, Congress passed the Veterans Benefits, Health Care, and Information Technology Act of 2006, which included a requirement that Labor release regulations on implementing priority of service within 2 years. In addition, the agency has begun planning a study of priority of service in response to our prior report. According to Labor officials, the study will combine a survey of participants with a process evaluation and an analysis of outcomes. Labor does not yet know when the study will get under way. Labor has taken some steps to improve the quality of performance data and better understand veterans’ services and outcomes, but the overall impact of employment services for veterans is unknown. Labor has developed some processes to enhance data quality. For example, Labor’s ETA requires states to validate some data in key programs. Furthermore, Labor plans to implement an integrated data-reporting system that would allow Labor and states to track individual veterans’ progress through different programs in the one-stop system. Additionally, the new system would expand data collection by, for example, collecting more data on services to employers. However, states have raised concerns about the challenge of meeting the system’s planned implementation date, and the timeline for implementation remains unclear. Furthermore, while performance information helps assess whether individuals are achieving their intended outcomes—such as obtaining employment—it cannot measure whether the outcomes are a direct result of program participation, rather than external factors. To measure the effects of a program, it is necessary to conduct an impact evaluation that would seek to assess whether the program itself led to participant outcomes. Labor has sponsored research on services to veterans. However, it has not conducted an impact evaluation, as required under WIA, to assess the effectiveness of one-stop services. Such a study should include impacts for key participant groups, including veterans. We recommended in 2004 that Labor take steps to conduct such an evaluation, but there has been no action to date. Labor has taken some steps to improve the quality of performance data and enhance the understanding of veterans’ services and outcomes. To address data quality concerns, ETA has developed processes requiring states to validate certain data reported for participants in WIA and Wagner-Peyser-funded Employment Service programs. However, while these programs serve veterans, participant records are randomly selected in both programs from the total participant population and, therefore, may not include the records of veteran participants. Both the WIA data validation process, developed in 2004, and the Employment Service process, developed in 2003, involve two types of data validation, although the WIA process is more intensive, according to Labor officials. Both processes involve (1) data element validation—comparing randomly sampled participant records to source files, and (2) report validation—assessing whether states’ software accurately calculated performance outcomes. 
While element validation in WIA is conducted on-site with hard-copy source documentation, the Employment Service data validation process is performed centrally and electronically, because Employment Service records are generally electronic. The Employment Service element validation process checks for duplicate or invalid entries in source files by, for example, checking for inconsistencies among various veteran-related fields, such as veteran status and disabled veteran. However, the Employment Service element validation process cannot check the underlying accuracy of the data, because there is no hard-copy documentation to prove whether a participant is in fact a veteran. Labor officials told us that the Employment Service data validation process has been helpful in raising awareness among states about the importance of data quality and that some states have come to see it as a useful tool. Additionally, states responding to our survey generally agreed that it has been effective—38 states, or about 75 percent, rated the Employment Service data validation process as effective in ensuring the accuracy of veteran job seekers’ information. For example, according to one respondent, review of the data validation results is used as a management tool, to highlight successes and to alert staff to weaknesses. Nevertheless, some states have expressed concerns about the data validation processes, as did state officials in all 3 of the states we visited. For example, officials in 2 of the 3 states noted that they had experienced difficulties adjusting to frequent changes in software before the results were due to Labor. In our survey, 2 states said that the sample size was too small to be meaningful, and 4 states expressed concerns about the fact that the process does not verify the accuracy of the data in source files. These concerns are similar to those we identified in a previous report that addressed the WIA process. Additionally, Labor has taken steps to address data quality as a part of its routine monitoring and technical assistance. Specifically, beginning in 2004, ETA regional staff have incorporated a data quality component into compliance visits to state offices, which are generally conducted once or twice a year, according to Labor officials. Data validation is just one component of these compliance visits, which typically do not focus on veterans’ data as a separate issue. To support this effort, Labor officials told us that ETA has amended its monitoring guide for these visits to include a section on data validation. According to Labor officials, these visits have been useful in identifying problems and corrective actions. Moreover, ETA and VETS have recently collaborated on a few of these compliance visits. Labor officials said they believed this joint monitoring was beneficial, and expect those efforts to be a model for future joint visits. There are several other forms of management reviews that generally focus on services to veterans but also offer a chance to review data. For example, VETS regional and state-based staff conduct site visits as part of their routine monitoring, which focus primarily on services to veterans but which can include reviewing performance information as well. Additionally, VETS has required a series of annual assessments—of the program for each state overall, and self-assessments by DVOP specialists, LVER staff, and one-stop managers—that address data issues to a limited extent.
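As an illustration of the duplicate and field-consistency checks that the Employment Service element validation performs, described earlier in this section, here is a minimal sketch; the record layout and values are hypothetical and do not represent the actual validation software.

```python
# Hypothetical participant records; field names are assumed for illustration only.
records = [
    {"id": "A100", "veteran_status": "veteran", "disabled_veteran": False},
    {"id": "A101", "veteran_status": "not a veteran", "disabled_veteran": True},
    {"id": "A100", "veteran_status": "veteran", "disabled_veteran": False},
]

seen, problems = set(), []
for r in records:
    # Flag duplicate records with the same identifier.
    if r["id"] in seen:
        problems.append((r["id"], "duplicate record"))
    seen.add(r["id"])
    # Flag inconsistencies among veteran-related fields.
    if r["disabled_veteran"] and r["veteran_status"] != "veteran":
        problems.append((r["id"], "disabled-veteran flag set for a non-veteran"))

for record_id, issue in problems:
    print(record_id, "-", issue)
```

Checks like these can catch internal inconsistencies, but, as noted above, they cannot establish whether a participant is in fact a veteran without source documentation.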
State directors use performance data to substantiate services described in the self-assessment. For example, according to one official we spoke with, to confirm a LVER staff’s claim of travel to several job fairs, the director can consult the one-stop’s travel log to substantiate whether the LVER staff actually made the trips. Beyond the steps Labor has taken, state workforce agencies also perform functions that affect performance data on services to veterans. Most states responding to our survey reported that they have taken certain steps to ensure the accuracy and reliability of data for the Employment Service, DVOP, and LVER programs, such as having their systems perform automated checks for inconsistencies in data or for duplicate veteran files (see fig. 6). Since 2004, Labor has been planning to implement an integrated data reporting system that could greatly enhance the understanding of veterans’ services and outcomes. In 2004, Labor first proposed a single, streamlined reporting system, known as the ETA Management Information and Longitudinal Evaluation system (EMILE) that would have replaced reporting systems for several Labor programs. Labor substantially modified this system’s design in response to concerns raised by state and local agencies about the burden and cost of the new system, as well as the challenge of meeting the implementation deadline. The modified system, now called the Workforce Investment Streamlined Performance Reporting System (WISPR), was planned with a July 2007 implementation date. WISPR has been designed to both integrate and expand data reporting. If implemented, the system would integrate data reporting by using standardized reporting requirements across the Employment Service, DVOP and LVER, WIA, and TAA programs, and ultimately replace their preexisting reporting systems with a single reporting structure. Additionally, it would rely on a standardized set of data elements and quarterly reports to provide data on participant characteristics and services provided, as well as performance outcomes based on the common measures. Its integrated design would, for the first time, allow Labor and states to track individual veterans’ progress through the one-stop system. In addition, the system would expand data collection and reporting in two key areas: the services that LVER staff provide to employers, a key aspect of the LVER role on which Labor currently collects no data, and estimates of the population of veterans who access the one-stop system but ultimately receive limited or no services from one-stop staff. As with EMILE, however, concerns have been raised about challenges in implementing the new system, and at present, the timeline for WISPR’s implementation remains unclear. Some of the comments received by OMB during the official comment period noted the challenge of a July 2007 implementation date, according to a Labor official. While states will have a 2-year period to consolidate reporting on the full range of programs, they are expected to begin collecting and reporting data in the new format immediately. As of December 2006, 39 entities, including state workforce agencies, local agencies and unions, had submitted comments reflecting their concerns about WISPR to the Office of Management and Budget (OMB). Of the 20 states that submitted comments, 14 noted that a July 2007 implementation date would represent a challenge. 
For example, some of them expressed the view that Labor had underestimated the time states would need to revise policy, reprogram systems, and retrain staff. In addition, some states expressed concerns about their ability to provide data on services to employers. Moreover, two states expressed the concern that meeting Labor's planned implementation date would have adverse consequences, such as compromised data quality or cost overruns. OMB’s official review will address the time needed to build the reporting system’s technical infrastructure, and will play a key role in deciding the system’s final implementation schedule, according to a Labor official. States and local areas will need enough time to fully meet the requirements of this expanded data collection. Although Labor has improved its outcome data on job seekers who participate in its programs, these data alone cannot measure whether outcomes are a direct result of program participation, rather than external factors. For example, local labor market conditions may affect an individual’s ability to find a job as much as or more than participation in an employment and training program. To measure the effects of a program, it is necessary to conduct an impact evaluation that would seek to assess whether the program itself led to participant outcomes. Labor has not conducted an impact evaluation of one-stop services, including those to veterans. However, the department did sponsor a study, issued in 2003, that examined the relationship between services provided to certain groups of veterans and employment and earnings outcomes. This study employed a number of data sources and statistical techniques to learn more about how veterans were using one-stop services. However, while this study provided some useful information, it could not determine that these services caused positive outcomes for veteran job seekers. In addition, the study relied on data from 8 states, and its findings could not be generalized to the national population of veteran job seekers. Since the full implementation of WIA in 2000—in which the one-stop system became the required means to provide employment and training services, including those to veterans—Labor has not made evaluating the impact of those services a research priority. While WIA required one such evaluation by 2005, Labor has declined to fund one in prior budgets. In a 2004 report, we recommended that Labor comply with the requirements of WIA and conduct an impact evaluation of WIA services to better understand what services are most effective for improving employment-related outcomes. In response to our report, Labor cited the need for program stability and proposed delaying an impact evaluation of WIA until any changes that might be included in reauthorization legislation had been implemented. While efforts to reauthorize WIA began in 2003, they have stalled and it is not clear at this time when they will be complete. Furthermore, OMB has also found Labor’s evaluations of WIA services to be lacking. In response, in its 2008 budget proposal, Labor identified an assessment of WIA’s impact on employment, retention, and earnings outcomes for participants as an effort the agency would begin. According to Labor officials, the agency has not yet begun to design the study. Such a study should include impacts for key participant groups, including veterans. To do so would require a sufficient sample of veterans to allow such analysis.
At a time when the nation’s attention is focused on those who have served their country, it is vital that Congress and the Administration are able to make informed decisions about programs that help veterans find and keep jobs in the civilian labor market. Frequent changes in Labor’s performance accountability system have hampered Labor’s ability to produce consistent and meaningful performance information on veteran job seekers. States and local areas have had difficulty implementing the constant changes to performance information, which introduce error and make it difficult to identify trends that would give Congress a better idea of the programs’ achievements. While the anticipated transition to a new reporting system represents a promising advance in Labor’s ability to track the outcomes of veterans in the one-stop system, states will need time to effectively implement the changes to avoid compromising the potential benefits—such as improved data quality—of the system. Furthermore, the current separate performance measures for the DVOP and LVER programs do not account for the considerable similarity in veteran populations served by DVOP specialists and LVER staff, and thus do not provide an accurate picture of outcomes for veterans served by these two programs. Using the existing measures, Labor also cannot ensure that performance outcomes give more weight to services for veterans with greater barriers to employment. In addition, different veteran definitions in other programs could make it difficult to analyze services to veterans throughout the one-stop system. Further, Labor cannot provide assurance that veterans are appropriately given service priority by programs in the one-stop system, or that services to veterans are truly effective. The federal government spends about $155 million each year on the DVOP and LVER programs alone, not counting the amounts spent on veterans who use other one-stop programs, but there is no information on whether these programs have an impact in helping this important population. Establishing a means to gauge the programs’ impact would require a considerable investment of time and money, but would contribute greatly to the understanding of whether current employment and training services are meeting veterans’ needs. Furthermore, we continue to urge Labor to meet WIA requirements and our 2004 recommendation to conduct an impact evaluation of one-stop services.
To provide a better picture of services and outcomes for veteran job seekers, improve program reporting, and facilitate priority of service, we recommend that the Secretary of Labor ensure that states are given adequate direction and sufficient time to implement ETA’s planned integrated data reporting system and make necessary changes; consolidate all performance measures for the DVOP and LVER programs, including those for disabled and recently separated veterans; comply with JVA’s requirement to implement a weighting system for the DVOP and LVER performance measures that takes into account the difficulty of serving veterans with particular barriers to employment; develop legislative proposals for appropriate changes to the definitions of veterans across employment and training programs to ensure consistency; and ensure that Labor moves forward with an impact evaluation for the one-stop system under WIA as we recommended in 2004, and that the evaluation’s sampling methodology includes veterans in sufficient numbers to allow analysis of the impact of services to veterans in the one-stop system, including those served by the DVOP and LVER programs. We provided a draft of this report to Labor for review and comment. In its comments, Labor generally concurred with our findings, conclusions, and recommendations and expressed appreciation that the report acknowledges the steps the agency has taken to improve the quality of performance data and better understand outcomes for veterans. Labor noted that it is considering adopting a different approach to measuring outcomes for the DVOP and LVER programs by program year 2008—one that may take into account the similar veteran populations served, as well as outreach to employers. As it develops this new approach, Labor reported that it will also introduce a system of weighted measures that will emphasize services to veterans with barriers to employment. These changes will coincide with the implementation of Labor’s proposed integrated data system, WISPR. Labor also noted that it would work with states and grantees to ensure a smooth transition to the new system. In addition, Labor stated that it intends to pursue a WIA impact evaluation, which will allow for analysis of services to sub-populations, including veterans. Labor reported that our recommendation to develop proposals for changing veteran definitions across employment and training programs must be evaluated with the input of other agencies. Labor also provided technical comments that we incorporated where appropriate. Labor’s comments are reproduced in full in appendix IV. We will send copies of this report to the Secretary of Labor, relevant congressional committees, and other interested parties and will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. A list of related GAO products is included at the end of this report. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or at nilsens@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix V. 
The objectives of this report were to determine (1) the extent to which DVOP and LVER performance information reflects services and outcomes for veterans served by these programs, (2) the extent to which performance information on veterans served by other key programs is comprehensive and well integrated across programs in the one-stop system, and (3) what Labor is doing to improve the quality of performance data and better understand outcomes for veteran job seekers. To address these objectives, we conducted a nationwide Web-based survey of state workforce administrators in the 50 states and the District of Columbia; conducted site visits to 3 states, during which we interviewed state and federal officials, one-stop managers, and program staff; interviewed Labor officials from both the Veterans’ Employment and Training Service (VETS) and the Employment and Training Administration (ETA); analyzed relevant performance data from ETA and VETS; and reviewed our previous work on attributes of successful performance measures. We conducted our work in accordance with generally accepted government auditing standards between May 2006 and April 2007. To obtain further information on our objectives, we surveyed state workforce administrators from November 15 to December 27, 2006. The survey addressed all three objectives and included questions about performance information for the DVOP and LVER programs, integration of data across employment programs serving veterans, and efforts to ensure data quality. We developed the survey based on knowledge obtained during our preliminary research. This included a literature review and initial interviews with officials from the Department of Labor, the National Association of State Workforce Agencies (NASWA), and the state of New Hampshire, where we conducted our initial site visit. We then obtained a list of state workforce administrators from NASWA. We asked state administrators to provide information on the DVOP and LVER programs’ capacity, other programs within the one-stop system that serve veteran job seekers, performance measures and data, and challenges to managing the programs. To determine whether respondents would understand the questions as intended, we pretested the survey with state officials in 5 states. We then made changes to the questions based on comments we received during the pretests. The survey was conducted using self-administered electronic Web-based questionnaires. We sent notification of the survey to the 50 states and the District of Columbia in November 2006 and followed up with e-mail messages and telephone calls as necessary during November and December. All 51 recipients submitted their responses by the end of December 2006, providing us with a response rate of 100 percent. We did not independently verify information obtained through the survey. During our data analysis, we held three follow-up conversations to fill in gaps from incomplete survey information. Because this survey was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database can introduce unwanted variability into the survey results. We took steps during survey development, data collection, and data analysis to minimize these nonsampling errors.
For example, we pretested the questionnaire to ensure that questions were clear and understandable. Since this was a Web-based survey in which respondents entered their responses directly into our database, there was little possibility of data entry error. During data analysis, a second, independent analyst checked all computer programming. Also, to the extent possible, we compared both closed- and open-ended survey responses with our site visit observations. While survey results are also subject to different types of systematic errors or bias, we do not have reason to believe that respondents falsely reported any information for this survey. To obtain a detailed understanding of how veteran job seekers are served by the one-stop system and how their information is captured, we conducted visits to three states: New Hampshire, California, and Tennessee. We selected these states based on a range of selection criteria, including geographic dispersion, state size and veteran demographics, recent state performance in veterans' programs, and recommendations by Labor and NASWA. Our site visits at the state level included interviews with state workforce agency officials and state directors of Veterans' Employment and Training. We also chose two local one-stops in each state and met with local managers and veteran program staff (see table 6). During each interview, we used standard interview protocols to obtain detailed and comparable information. In our interviews with state workforce officials, we discussed the role of the state workforce agency in administering veterans' employment and training programs, details about the programs serving veteran job seekers, views on the current performance accountability system, and information about data collection and validation. In our interviews with the state directors and their staff, we discussed their oversight roles and responsibilities, relationship with the state workforce agency, and views on the current performance accountability system and data collection. At the local one-stops, we discussed the coordination of veteran staff with other programs within the one-stop system, priority of service, and data collection and reporting. In each state, we also received a tutorial on the state's data collection software. We conducted our site visits between July and November 2006. As part of our work, we interviewed officials of ETA and VETS, including all six Regional Administrators of VETS. We conducted these telephone interviews with officials located in Boston, Atlanta, Dallas, Chicago, San Francisco, and Philadelphia. During each interview, we obtained information on regional differences in administering the DVOP and LVER programs, views on the current performance measures, and information on Labor's monitoring role in each state. We also analyzed performance data from the DVOP, LVER, and Employment Service programs and reviewed Labor's guidance. In addition, we reviewed relevant literature, including our past work on attributes of successful performance measures. We also interviewed representatives of NASWA and two private-sector staffing agencies. The table that follows presents baseline performance data by state for the DVOP and LVER programs from benchmark program year 2005 (July 1, 2005–June 30, 2006) and negotiated goals by state for the following year, program year 2006. Labor and states did not negotiate goals for the DVOP or LVER programs for program year 2005, which was a baseline year for performance under the new common measures. 
Four performance measures contribute to each program's performance. For the DVOP program, there is one set of measures for all veterans and one set for disabled veterans. For the LVER program, there is a set of measures for all veterans and another set for recently separated veterans. Each set of measures includes the entered employment rate (EER), which is the number of participants who are employed in the first quarter after the exit quarter divided by the number of participants who exit during the quarter, and the employment retention rate (ERR), which is the number of participants who are employed in both the second and third quarters after the exit quarter divided by the number of participants who exit during the quarter. (An illustrative calculation of these rates appears after the list of related GAO products below.) These figures were provided by the Department of Labor; GAO has not verified the accuracy or reliability of these data. A second table illustrates the negotiated goals and performance achieved by each state for program year 2005 for veterans in the Wagner-Peyser-funded Employment Service. It includes the entered employment and employment retention rates for all veterans and disabled veterans within the Employment Service, including those in the DVOP and LVER programs. These figures also were provided by the Department of Labor, and GAO has not verified their accuracy or reliability. Dianne Blank, Assistant Director; Rebecca Woiwode, Analyst-in-Charge; Chris Morehouse; and Beth Faraguna made significant contributions to this report in all facets of the work. In addition, Walter Vance assisted in the design of the national survey; Gloria Hernandez-Saunders helped with data analysis; Meeta Engle lent subject matter expertise; Jessica Botsford and Richard Burkard provided legal support; and Charlie Willson provided writing assistance. Trade Adjustment Assistance: Labor Should Take Action to Ensure Performance Data Are Complete, Accurate, and Accessible. GAO-06-496. Washington, D.C.: April 25, 2006. Veterans' Employment and Training Service: Greater Accountability and Other Labor Actions Needed to Better Serve Veterans. GAO-06-357T. Washington, D.C.: February 2, 2006. Veterans' Employment and Training Service: Labor Actions Needed to Improve Accountability and Help States Implement Reforms to Veterans' Employment Services. GAO-06-176. Washington, D.C.: December 30, 2005. Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005. Veterans' Employment and Training Service: Preliminary Observations on Changes to Veterans' Employment Programs. GAO-05-662T. Washington, D.C.: May 12, 2005. Performance Measurement and Evaluation: Definitions and Relationships. GAO-05-739SP. Washington, D.C.: May 2005. Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004. Performance Reporting: Few Agencies Reported on the Completeness and Reliability of Performance Data. GAO-02-372. Washington, D.C.: April 26, 2002. Veterans' Employment and Training Service: Flexibility and Accountability Needed to Improve Service to Veterans. GAO-01-928. Washington, D.C.: September 12, 2001. Veterans' Employment and Training Service: Proposed Performance Measurement System Improved, but Further Changes Needed. GAO-01-580. Washington, D.C.: May 15, 2001. Veterans' Employment and Training Service: Strategic and Performance Plans Lack Vision and Clarity. GAO/T-HEHS-99-177. Washington, D.C.: July 29, 1999. 
Veterans' Employment and Training Service: Assessment of the Fiscal Year 1999 Performance Plan. GAO/HEHS-98-240R. Washington, D.C.: September 30, 1998. Veterans' Employment and Training: Services Provided by Labor Department Programs. GAO/HEHS-98-7. Washington, D.C.: October 17, 1997. The Results Act: An Evaluator's Guide to Assessing Agency Annual Performance Plans. GAO/GGD-10.1.20. Washington, D.C.: April 1998. Agencies' Annual Performance Plans under the Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking. GAO/GGD/AIMD-10.1.18. Washington, D.C.: February 1998. Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.
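To make the entered employment and employment retention rate definitions above concrete, the following is a minimal sketch in Python. The cohort counts are hypothetical, and the functions simply apply the rate definitions as stated in this appendix; this is an illustration, not Labor's actual reporting software.

```python
# Minimal sketch of the rate definitions described above.
# All counts are hypothetical; actual figures come from state reporting systems.

def entered_employment_rate(employed_first_quarter_after_exit, exiters):
    """EER: share of exiters employed in the first quarter after the exit quarter."""
    return employed_first_quarter_after_exit / exiters

def employment_retention_rate(employed_second_and_third_quarters, exiters):
    """ERR: share of exiters employed in both the second and third quarters after the exit quarter."""
    return employed_second_and_third_quarters / exiters

# Hypothetical cohort: 1,200 veterans exited during the quarter.
eer = entered_employment_rate(employed_first_quarter_after_exit=696, exiters=1_200)
err = employment_retention_rate(employed_second_and_third_quarters=612, exiters=1_200)
print(f"Entered employment rate: {eer:.0%}")    # 58%
print(f"Employment retention rate: {err:.0%}")  # 51%
```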
In 2002, Congress enacted the Jobs for Veterans Act (JVA), which modified two Department of Labor (Labor) programs that specifically target veteran job seekers: the Disabled Veterans' Outreach Program (DVOP) and the Local Veterans' Employment Representative (LVER) program. However, questions have been raised about the adequacy of performance information on services to veterans by these and other employment programs. In this report, GAO examined (1) the extent to which DVOP and LVER performance information reflects services and outcomes for veterans; (2) the extent to which performance information on veterans paints a clear picture of their use of one-stop services; and (3) what Labor is doing to improve the quality of performance data and better understand program impact and outcomes for veterans. Performance information for the DVOP and LVER programs provides some sense of services and outcomes for veterans, but is weakened by several factors. In July 2005, Labor adopted new performance measures for the programs, but not all have been fully implemented. For example, states are held accountable for helping veterans get and keep jobs, but are not yet held accountable for their average earnings once employed, as they are for other programs. Additionally, having separate performance measures for the DVOP and LVER programs fails to acknowledge the similarity of the populations they serve and the duties they perform. Furthermore, it is difficult to assess outcomes over time or across states because of frequent changes in reporting requirements that prevent establishing reliable trend data. Labor's data on veteran job seekers paint an unclear picture of their use of other employment and training services in the one-stop system, despite the use of common performance measures across programs. Although many veterans use services other than those provided by the DVOP and LVER programs, key employment programs vary in how well their data on veteran job seekers are shared across programs, making it difficult to know how many veterans are served. In addition, statutory differences in the definitions of veterans hinder efforts to standardize data across employment programs. Moreover, Labor has no means of assessing whether priority of service for veterans has been implemented in various employment programs. Labor has taken some steps to improve the quality of performance data and better understand outcomes for veterans. For example, Labor requires states to validate key performance data. Labor has also planned an integrated data reporting system that would track individual veterans' progress through the one-stop system. However, states have raised concerns about the timelines, and the system's current implementation date is unclear. Furthermore, while outcome information on veterans is helpful, it cannot measure whether the outcomes are due to the program or other factors. While Labor has sponsored research on services to veterans, it has not yet conducted the impact evaluation required by law to assess the effectiveness of one-stop services.
Mr. Chairman, Representative Shadegg, and Members of the Subcommittee: We are pleased to be here today to discuss with the Subcommittee the results of work done on the burden that business and individual taxpayers face in complying with federal tax requirements. Because of concerns about business taxpayer burden, we identified sources of compliance burden and examined the feasibility of obtaining reliable dollar estimates of the compliance costs borne by business taxpayers. We have defined burden as the time taxpayers spend, monetary costs they incur, and frustrations they experience in complying with tax requirements. Because individual taxpayers may also face compliance burdens, we are currently reviewing alternative tax filing procedures to identify possible benefits to taxpayers and challenges presented by such alternatives. Although that work is incomplete, we can share some information about individual tax burden issues. To provide a perspective on business taxpayer burden, we collected information on compliance burden from the management and tax staffs of selected businesses, tax accountants, tax lawyers, representatives of tax associations, and officials of the Internal Revenue Service (IRS). The corporate businesses we met with varied by geographical location, size, and industry type. There are several points we will discuss today. First, according to the businesses we interviewed, the complexity of the Internal Revenue Code, compounded by the frequent changes made to the code, is the driving force behind business tax compliance burden. Second, a reliable estimate of the overall costs of tax compliance would be costly to obtain and would in itself impose a burden on businesses. Finally, reducing the compliance burden on businesses and individual taxpayers will be a difficult undertaking because of the various policy trade-offs, such as revenue and taxpayer equity, that must be made. While discussing with us the many issues associated with compliance burden, the business officials and tax experts also acknowledged the legitimate purposes and requirements of the tax system. They said that filing tax returns and paying taxes were all part of doing business. But most firmly believed there must be easier ways to achieve the goals of the federal tax system. Business officials and tax experts told us that, overall, the federal tax code is complex, difficult to understand, and in some cases indecipherable. They also said it was burdensome to comply with the code because of the additional record-keeping and calculations that the code requires. More specifically, they said businesses have difficulty with the code because of numerous and unwieldy cross-references and overly broad, imprecise, and ambiguous language. Such language, they said, appears to be designed to cover every conceivable case but leads to much taxpayer confusion and frequent misinterpretation of the code. Frequent legislative changes, including the effects of these changes on other sections of the code, were also cited as problematic. Respondents said that the frequency and large number of legislative changes make it difficult for businesses to keep current on provisions that apply to their specific situations. For example, 1 year after the expansive Tax Reform Act of 1986, the Omnibus Budget Reconciliation Act of 1987 changed about 50 provisions that potentially affected business tax compliance. 
Business officials and tax experts said it was their perception that these frequent changes were designed to fix loopholes or perceived abuses; yet, in making these changes, Congress appeared not to have considered the impact they have on other sections of the code. These same parties expressed frustration about provisions with finite lives being left to expire but later reauthorized. These are tax provisions that may contain sunset clauses to encourage future reevaluation. While recognizing the value of these provisions, business officials and tax experts said informed business decisions are difficult to make without knowing a provision's fate. Each of these concerns about changes to the tax code added to the uncertainty businesses face in attempting to understand and comply with the tax code. The tax code also can create the need to establish and maintain numerous and sometimes duplicate sets of financial records. For example, all of the 17 businesses we spoke with said depreciation requirements caused them to maintain detailed records solely for tax purposes. For a given set of assets, some companies need to produce one set of computations and records for the regular federal tax and two additional sets for the federal Alternative Minimum Tax (AMT). Many businesses are also required to produce additional depreciation computations and records for state and local income and property tax purposes. Complexities in the code can also result in the need to complete time-consuming calculations. Among these, respondents frequently mentioned the calculations associated with the uniform capitalization rules, the AMT, and other provisions that force taxpayers to trace the many categories of interest expense and apply a separate tax treatment to each category. Our respondents also indicated that the compliance burden imposed by the federal tax system was made greater by the interplay of state and local tax requirements that at times were inconsistent with each other as well as with the federal code. Among the problems cited by businesses were different definitions of wages, income, and certain deductions; different methods for calculating depreciation; and inconsistent requirements for payroll reporting and timing of deposits. While the focus of our discussions was on the federal tax burden, some of the business respondents said that the compliance burden associated with state and local tax requirements exceeded the burden of the federal system. Some business officials and tax experts also cited IRS' administration of the federal tax code as contributing to business compliance burden, although to a lesser extent than the complexity of and frequent changes to the code. Those who cited difficulties with IRS identified problems with the tax knowledge of IRS auditors, the clarity of IRS' correspondence and notices, and the amount of time IRS takes to issue regulations. The complexity of the code has a direct impact on IRS' ability to administer the code. The volume and complexity of information in the code make it difficult for IRS to ensure that its tax auditors are knowledgeable about the tax code and that their knowledge is current. Some business officials and tax experts said that IRS auditors lack sufficient knowledge about federal tax requirements, and in their opinion this deficiency has caused IRS audits to take more time than they otherwise might. However, other respondents said that IRS auditors were reasonable to work with. 
IRS recognizes the difficulty of maintaining a workforce of auditors who fully understand all tax requirements. IRS is developing a program to encourage auditors to become industry specialists so that they can increase their understanding of industry environments, accounting practices, and tax issues. Some business officials and tax experts said that the complexity of the forms and publications and the lack of clarity of correspondence and notices resulted in frustrating and burdensome experiences for the taxpayers. They said that business compliance burden is increased as businesses attempt to understand and respond to those notices and letters. Our last detailed examinations of IRS notices, forms, and publications, in December 1994, revealed continuing problems with these documents. IRS has been making efforts to resolve some of those problems. Respondents also identified difficulties in complying with the code because regulations were not always available from IRS in a timely manner. IRS officials said that the amount of time that passes before a final regulation is issued varies, but it can take several years or longer. According to the officials, the amount of time is a product of the complexity of the particular tax provision, the process of obtaining and analyzing public comment on proposed regulations, and the priority IRS assigns to issuing the regulation. For many tax provisions, businesses depend upon IRS regulations for guidance in complying with the code and correspondingly reducing their burden. Without timely regulations, according to some respondents, businesses must guess at the proper application of the law and then at times amend their decisions when the regulations are finally issued. Moving next, Mr. Chairman, to the overall cost to businesses of complying with the tax code, we did not identify a readily available, reliable estimate of such costs. While there was a general consensus that compliance is burdensome and some businesses offered anecdotal examples of their costs, our discussions with businesses and review of available studies indicate that developing a reliable estimate would require that several practical and severe problems be overcome. These problems include working with a broad spectrum of businesses to accurately separate tax costs from other costs and obtaining accurate and consistent responses from businesses on tax burden questions. This kind of inquiry would be an expensive and burdensome process in itself. The businesses we interviewed said they seldom took business actions primarily for tax reasons. More often, tax considerations affected the timing or structure of a business action, not whether the action would occur. For example, in acquiring equipment, a business would consider tax implications in terms of whether to buy or lease the equipment. Few of the businesses we spoke with could readily separate tax compliance costs from other costs of doing business. The integration of the tax compliance activities with other business activities makes it difficult and time-consuming to collect the information necessary from businesses to generate reliable cost estimates. For example, businesses said it would be difficult to take payroll expenditures and isolate those associated with tax compliance. Further, business respondents said that they do not routinely need information on compliance costs, so it does not make sense for them to collect it. Moreover, separating tax compliance costs from other costs of doing business would be burdensome and of questionable usefulness to them. 
A few business officials provided estimates of some compliance costs, such as legal fees, payroll management fees, and tax software expenditures, but expressed limited confidence in their ability to provide accurate, comprehensive cost data. In addition, those few businesses that said they could isolate some of their tax compliance costs indicated that even in their cases, it would be difficult to separate federal compliance costs from state and local compliance costs. While we did not identify existing reliable business tax burden cost estimates, there was consensus among the business respondents, tax experts, and the literature that tax compliance burden is significant and that it can be reduced. Although some gains can be made by reducing administrative burden imposed by IRS, the greatest potential for reducing the compliance burden of business taxpayers lies in dealing with the complexity of the tax code. One approach is incremental simplification; simplifying or eliminating specific tax code provisions has the potential for reducing the compliance burden of many businesses. Another approach that has been proposed is to completely overhaul the tax code by replacing the current income tax with some form of consumption tax. In considering changes to the tax code, whether they be limited in nature or comprehensive, legislators need to weigh several sometimes competing concerns. These include the revenue implications of any change, the need to address equity and fairness, and the desire to achieve social and economic goals. The tension in achieving balance among these trade-offs and at the same time making it easier for taxpayers to comply presents a significant challenge to Congress. The tax system is burdensome for many individuals as well as for businesses. Almost 100 million American taxpayers currently must file individual tax returns, even though most have fully paid their taxes through the withholding system. To assist the Congress in identifying options for reducing taxpayer burden and IRS paper processing, we are in the process of studying return-free filing systems and the potential impact they would have on the federal income tax system. While we are still finalizing our results, we can provide some preliminary information on (1) the two most common types of return-free filing used in other countries, (2) the number of individual American taxpayers that could be affected by these two types of return-free filing, and (3) some of the issues that would need to be addressed if these systems were to be considered. In countries with return-free filing, the most common type of system we identified was one that we termed "final withholding." Under this system, the withholder of income taxes—for example, an employer—is to determine the taxpayer's liability and withhold the correct amount of tax from the taxpayer. With the final year-end payment to the taxpayer, the withholder is to make a final reconciliation of taxes and adjust the withholding for that period to equal the year's taxes. Under the second type of system, which we termed "tax agency reconciliation," the tax agency is to use information returns filed by employers and other payers to compute and reconcile the tax liability and the amount of withholding. We identified 36 countries that use one of these two forms of return-free filing—34 with final withholding and 2 with tax agency reconciliation. Given the extent of withholding and information reporting that exists under our current tax system, we estimated that about 18.5 million American taxpayers whose incomes derive from only one employer could be covered under a final withholding system. An estimated 51 million taxpayers could be covered under an agency reconciliation system. 
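To illustrate the reconciliation arithmetic that a final withholding system involves, the following is a minimal sketch in Python. The two-bracket rate schedule, the income figure, and the amount withheld to date are all hypothetical; the sketch is not a model of any particular country's system, only of the idea that the withholder adjusts the final period so that total withholding equals the year's tax.

```python
# Minimal sketch of a "final withholding" year-end reconciliation.
# The bracket schedule and all dollar amounts are hypothetical.

def annual_liability(taxable_income, brackets):
    """Tax owed for the year under a simple progressive schedule.

    brackets: list of (upper_bound, rate) pairs in ascending order;
    the last upper_bound may be float('inf').
    """
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

def final_period_withholding(yearly_income, withheld_to_date, brackets):
    """Amount to withhold in the final period so that total withholding
    equals the year's liability; a negative result means a refund
    delivered through payroll."""
    return annual_liability(yearly_income, brackets) - withheld_to_date

# Hypothetical example: two-bracket schedule, one employee.
brackets = [(30_000, 0.15), (float("inf"), 0.28)]
adjustment = final_period_withholding(yearly_income=48_000,
                                      withheld_to_date=8_500,
                                      brackets=brackets)
print(f"Final-period withholding: ${adjustment:,.2f}")  # $1,040.00
```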
We estimated that taxpayers could save millions of hours in preparation time and millions of dollars in tax preparation costs under either the final withholding system or the tax agency reconciliation system. We also estimated that IRS would save about $45 million in processing costs under the final withholding system and about $36 million in processing and compliance costs under the tax agency reconciliation system. However, employers would face substantial additional burden and costs under the final withholding system, and the tax preparation industry could be adversely affected under either system. Furthermore, several changes to the current tax system would be needed in order to implement either form of return-free filing. Under both systems, taxpayers would continue to provide information to IRS on their filing status and number of dependents. Employers would need to be authorized by law to compute and remit tax liabilities under final withholding, and they would have to set up payroll procedures to do so. Consideration would also need to be given to the impact of these systems on certain states where the state income tax is tied to the federal income tax return. For example, IRS would have to speed up the processing of information documents under the tax agency reconciliation system so that tax liabilities could be determined before April 15, which is also the tax filing deadline for some states. IRS' own 1987 study of return-free filing recognized this processing problem and recommended against a tax agency reconciliation return-free filing system for that reason. As IRS improves its information processing capabilities, return-free filing may become more feasible. Continued evaluation of ways in which tax compliance burden can be reduced is an important contribution to improving our tax system. Mr. Chairman, Representative Shadegg, this concludes our prepared statement. We would be pleased to answer any questions. Our approach for (1) identifying the sources of compliance burden for businesses and (2) determining the feasibility of obtaining reliable estimates of the compliance costs borne by businesses was to review and assess the literature on tax compliance burden to identify issues and to conduct in-depth interviews with businesses and tax experts to obtain their views on compliance burden. We reviewed about 25 commonly recognized studies from the literature on compliance costs and tax simplification. These studies provided information on how businesses comply with tax laws, the areas they find most difficult to comply with, causes for some of the tax compliance burden experienced by businesses, and suggestions for reducing compliance burden. We interviewed business officials and tax experts to obtain detailed information on actual taxpayer experiences in complying with federal, state, and local tax requirements and to determine if companies could collect reliable taxpayer compliance cost data. These included interviews with tax and management officials of 17 businesses, three panels of tax accountants from the American Institute of Certified Public Accountants (AICPA), and a panel of tax lawyers from the American Bar Association (ABA) Tax Section. We also talked with representatives of tax associations and IRS officials to obtain their views on the reasons for tax compliance burden. We selected the 17 businesses to include a variety of geographic regions, industry types, and sizes, rather than to construct a statistical sample of businesses. 
The 17 companies were headquartered in 6 states across the country—California, Georgia, Maryland, New York, Ohio, and Virginia. They included a wide variety of industry types, such as manufacturing, services, telecommunications, and retail operations. We chose to focus, for the most part, on medium-sized companies because, among other things, relatively little past research has focused on this subgroup. Our sample included, however, a few large corporations and some relatively small businesses. Most of the 17 businesses were judgmentally selected from public databases that list publicly traded and privately held corporations. Table 1 summarizes the characteristics of the 17 companies we interviewed. The interviews gave us detailed, firsthand accounts of the problems these businesses encountered while complying with federal, state, and local tax systems. Moreover, our results on the sources of tax compliance burden are consistent with the information found in the literature we reviewed.
GAO discussed business and individual taxpayers' federal tax compliance burden. GAO noted that: (1) the compliance burden is due to the tax code's complexity, ambiguous language, and frequent changes; (2) many businesses are uncertain about what they must do to comply with the code; (3) recordkeeping, time-consuming calculations, the interplay of state and local tax requirements, and Internal Revenue Service's (IRS) administration of the tax code add to the burden; (4) estimating businesses' tax compliance burden and costs would be difficult, since businesses do not collect the data needed to make reliable cost estimates of their compliance; (5) the greatest reduction in the tax compliance burden could be gained by simplifying the tax code; (6) return-free filing alternatives used in other countries could reduce individual taxpayers' tax compliance and IRS administrative burdens, but employers, tax preparers, and state tax systems could be further burdened or adversely affected; and (7) reducing businesses' and individuals' tax compliance burdens will be difficult because of tax policy tradeoffs, such as revenue, equity, and social and economic issues.
Wildland fire triggered by lightning is a normal, inevitable, and necessary ecological process that nature uses to periodically remove excess undergrowth, small trees, and vegetation to renew ecosystem productivity. However, various human land use and management practices, including several decades of fire suppression activities, have reduced the normal frequency of wildland fires in many forest and rangeland ecosystems and have resulted in abnormally dense and continuous accumulations of vegetation that can fuel uncharacteristically large and intense wildland fires. Such large intense fires increasingly threaten catastrophic ecosystem damage and also increasingly threaten human lives, health, property, and infrastructure in the wildland-urban interface. Federal researchers estimate that vegetative conditions that can fuel such fires exist on approximately 190 million acres (more than 40 percent) of federal lands in the contiguous United States but could vary from 90 million to 200 million acres, and that these conditions also exist on many nonfederal lands. Our reviews over the last 5 years identified several weaknesses in the federal government's management response to wildland fire issues. These weaknesses included the lack of a national strategy that addressed the likely high costs of needed fuel reduction efforts and the need to prioritize these efforts. Our reviews also found shortcomings in federal implementation at the local level, where over half of all federal land management units' fire management plans did not meet agency requirements designed to restore fire's natural role in ecosystems consistent with human health and safety. These plans are intended to identify needed local fuel reduction, preparedness, suppression, and rehabilitation actions. The agencies also lacked basic data, such as the amount and location of lands needing fuel reduction, and research on the effectiveness of different fuel reduction methods on which to base their fire management plans and specific project decisions. Furthermore, coordination among federal agencies and collaboration between these agencies and nonfederal entities were ineffective. This kind of cooperation is needed because wildland fire is a shared problem that transcends land ownership and administrative boundaries. Finally, we found that better accountability for federal expenditures and performance in wildland fire management was needed. Agencies were unable to assess the extent to which they were reducing wildland fire risks or to establish meaningful fuel reduction performance measures, as well as to determine the cost-effectiveness of these efforts, because they lacked both monitoring data and sufficient data on the location of lands at high risk of catastrophic fires to know the effects of their actions. As a result, their performance measures created incentives to reduce fuels on all acres, as opposed to focusing on high-risk acres. Because of these weaknesses, and because experts said that wildland fire problems could take decades to resolve, we said that a cohesive, long-term, federal wildland fire management strategy was needed. We said that this cohesive strategy needed to focus on identifying options for reducing fuels over the long term in order to decrease future wildland fire risks and related costs. We also said that the strategy should identify the costs associated with those different fuel reduction options over time, so that the Congress could make cost-effective, strategic funding decisions. 
The federal government has made important progress over the last 5 years in improving its management of wildland fire. Nationally, it has established strategic priorities and increased resources for implementing these priorities. Locally, it has enhanced data and research, planning, coordination, and collaboration with other parties. With regard to accountability, it has improved performance measures and established a monitoring framework. Over the last 5 years, the federal government has been formulating a national strategy known as the National Fire Plan, composed of several strategic documents that set forth a priority to reduce wildland fire risks to communities. Similarly, the recently enacted Healthy Forests Restoration Act of 2003 directs that at least 50 percent of funding for fuel reduction projects authorized under the act be allocated to wildland-urban interface areas. While we have raised concerns about the way the agencies have defined these areas and the specificity of their prioritization guidance, we believe that the act's clarification of the community protection priority provides a good starting point for identifying and prioritizing funding needs. Similarly, in contrast to fiscal year 1999, when we reported that the Forest Service had not requested increased funding to meet the growing fuel reduction needs it had identified, fuel reduction funding for both the Forest Service and Interior quadrupled by fiscal year 2004. The Congress, in the Healthy Forests Restoration Act, also authorized $760 million per year to be appropriated for hazardous fuels reduction activities, including projects for reducing fuels on up to 20 million acres of land. Moreover, appropriations for both agencies' overall wildland fire management activities, including preparedness, suppression, and rehabilitation, have nearly tripled, from about $1 billion in fiscal year 1999 to over $2.7 billion in fiscal year 2004. The agencies have strengthened local wildland fire management implementation by making significant improvements in federal data and research on wildland fire over the past 5 years, including an initial mapping of fuel hazards nationwide. Additionally, in 2003, the agencies approved funding for development of a geospatial data and modeling system, called LANDFIRE, to map wildland fire hazards with greater precision and uniformity. LANDFIRE, estimated to cost $40 million and scheduled for nationwide implementation in 2009, will enable comparisons of conditions between different field locations nationwide, thus permitting better identification of the nature and magnitude of wildland fire risks confronting different community and ecosystem resources, such as residential and commercial structures, species habitat, air and water quality, and soils. The agencies also have improved local fire management planning by adopting and executing an expedited schedule to complete plans for all land units that had not been in compliance with agency requirements. The agencies also adopted a common interagency template for preparing plans to ensure greater consistency in their contents. Coordination among federal agencies and their collaboration with nonfederal partners, critical to effective implementation at the local level, also has been improved. In 2001, as a result of congressional direction, the agencies jointly formulated a 10-Year Comprehensive Strategy with the Western Governors' Association to involve the states as full partners in their efforts. 
An implementation plan adopted by the agencies in 2002 details goals, time lines, and responsibilities of the different parties for a wide range of activities, including collaboration at the local level to identify fuel reduction priorities in different areas. Also in 2002, the agencies established an interagency body, the Wildland Fire Leadership Council, composed of senior Agriculture and Interior officials and nonfederal representatives, to improve coordination of their activities with each other and nonfederal parties. Accountability for the results the federal government achieves from its investments in wildland fire management activities also has been strengthened. The agencies have adopted a performance measure that identifies the number of acres moved from high-hazard to low-hazard fuel conditions, replacing a performance measure for fuel reductions that measured only the total acres of fuel reductions and created an incentive to treat less costly acres rather than the acres that presented the greatest hazards. Additionally, in 2004, to have a better baseline for measuring progress, the Wildland Fire Leadership Council approved a nationwide framework for monitoring the effects of wildland fire. While an implementation plan is still needed for this framework, it nonetheless represents a critical step toward enhancing wildland fire management accountability. While the federal government has made important progress over the past 5 years in addressing wildland fire, a number of challenges still must be met to complete development of a cohesive strategy that explicitly identifies available long-term options and funding needed to reduce fuels on the nation's forests and rangelands. Without such a strategy, the Congress will not have an informed understanding of when, how, and at what cost wildland fire problems can be brought under control. None of the strategic documents adopted by the agencies to date has identified these options and related funding needs, and the agencies have yet to delineate a plan or schedule for doing so. To identify these options and funding needs, the agencies will have to address several challenging tasks related to their data systems, their fire management plans, and their assessment of the cost-effectiveness and affordability of different options for reducing fuels. The agencies face several challenges to completing and implementing LANDFIRE, so that they can more precisely identify the extent and location of wildland fire threats and better target fuel reduction efforts. These challenges include using LANDFIRE to better reconcile the effects of fuel reduction activities with the agencies' other stewardship responsibilities for protecting ecosystem resources, such as air, water, soils, and species habitat, which fuel reduction efforts can adversely affect. The agencies also need LANDFIRE to help them better measure and assess their performance. For example, the data produced by LANDFIRE will help them devise a separate performance measure for maintaining conditions on low-hazard lands to ensure that their conditions do not deteriorate to more hazardous conditions while funding is being focused on lands with high-hazard conditions. 
In implementing LANDFIRE, however, the agencies will have to overcome the challenges presented by the current lack of a consistent approach to assessing the risks of wildland fires to ecosystem resources as well as the lack of an integrated, strategic, and unified approach to managing and using information systems and data, including those such as LANDFIRE, in wildland fire decision making. Currently, software, data standards, equipment, and training vary among the agencies and field units in ways that hamper needed sharing and consistent application of the data. Also, LANDFIRE data and models may need to be revised to take into account recent research findings that suggest part of the increase in wildland fire in recent years has been caused by a shift in climate patterns. This research also suggests that these new climate patterns may continue for decades, resulting in further increases in the amount of wildland fire. Thus, the nature, extent, and geographical distribution of hazards initially identified in LANDFIRE, as well as the costs for addressing them, may have to be reassessed. The agencies will need to update their local fire management plans when more detailed, nationally consistent LANDFIRE data become available. The plans also will have to be updated to incorporate recent agency fire research on approaches to more effectively address wildland fire threats. For example, a 2002 interagency analysis found that protecting wildland-urban interface communities more effectively—as well as more cost-effectively—might require locating a higher proportion of fuel reduction projects outside of the wildland-urban interface than currently envisioned, so that fires originating in the wildlands do not become too large to suppress by the time they arrive at the interface. Moreover, other agency research suggests that placing fuel reduction treatments in specific geometric patterns may, for the same cost, provide protection for up to three times as many community and ecosystem resources as do other approaches, such as placing fuel breaks around communities and ecosystem resources. Timely updating of fire management plans with the latest research findings on optimal design and location of treatments also will be critical to the effectiveness and cost-effectiveness of these plans. The Forest Service indicated that this updating could occur during annual reviews of fire management plans to determine whether any changes to them may be needed. Completing the LANDFIRE data and modeling system and updating fire management plans should enable the agencies to formulate a range of options for reducing fuels. However, to identify optimal and affordable choices among these options, the agencies will have to complete certain cost-effectiveness analysis efforts they currently have under way. These efforts include an initial 2002 interagency analysis of options and costs for reducing fuels, congressionally-directed improvements to their budget allocation systems, and a new strategic analysis framework that considers affordability. The Interagency Analysis of Options and Costs: In 2002, a team of Forest Service and Interior experts produced an estimate of the funds needed to implement eight different fuel reduction options for protecting communities and ecosystems across the nation over the next century. 
Their analysis also considered how fuel reduction activities would affect future costs for other principal wildland fire management activities, such as preparedness, suppression, and rehabilitation, and what those costs would be if fuels were not reduced. The team concluded that the option that would result in reducing the risks to communities and ecosystems across the nation could require an approximate tripling of current fuel reduction funding to about $1.4 billion for an initial period of a few years. These initially higher costs would decline after fuels had been reduced enough to use less expensive controlled burning methods in many areas and more fires could be suppressed at lower cost, with total wildland fire management costs, as well as risks, being reduced after 15 years. Alternatively, the team said that not making a substantial short-term investment using a landscape focus could increase both costs and risks to communities and ecosystems in the long term. More recently, however, Interior has said that the costs and time required to reverse current increasing risks may be less when other vegetation management activities that also can influence wildland fire—such as timber harvesting and habitat improvements—are considered; these activities were not included in the interagency team's original assessment. The cost of the 2002 interagency team's option that reduced risks to communities and ecosystems over the long term is consistent with a June 2002 National Association of State Foresters' projection of the funding needed to implement the 10-Year Comprehensive Strategy developed by the agencies and the Western Governors' Association the previous year. The state foresters projected a need for steady increases in fuel reduction funding up to a level of about $1.1 billion by fiscal year 2011. This is somewhat less than the interagency team's estimate, but still about 2-1/2 times current levels. The interagency team of experts who prepared the 2002 analysis of options and associated costs said their estimates of long-term costs could only be considered an approximation because the data used for their national-level analysis were not sufficiently detailed. They said a more accurate estimate of the long-term federal costs and consequences of different options nationwide would require applying this national analysis framework in smaller geographic areas using more detailed data, such as that produced by LANDFIRE, and then aggregating these smaller-scale results. The New Budget Allocation System: Agency officials told us that a tool for applying this interagency analysis at a smaller geographic scale for aggregation nationally may be another management system under development—the Fire Program Analysis system. This system, being developed in response to congressional committee direction to improve budget allocation tools, is designed to identify the most cost-effective allocations of annual preparedness funding for implementing agency field units' local fire management plans. Eventually, the Fire Program Analysis system, being initially implemented in 2005, will use LANDFIRE data and provide a smaller geographical scale for analyses of fuel reduction options and thus, like LANDFIRE, will be critical for updating fire management plans. 
Officials said that this preparedness budget allocation system, when integrated with an additional component now being considered for allocating annual fuel reduction funding, could be instrumental in identifying the most cost-effective long-term levels, mixes, and scheduling of these two wildland fire management activities. Completely developing the Fire Program Analysis system, including the fuel reduction funding component, is expected to cost about $40 million and take until at least 2007 and perhaps until 2009. The New Strategic Analysis Effort: In May 2004, Agriculture and Interior began the initial phase of a wildland fire strategic planning effort that also might contribute to identifying long-term options and needed funding for reducing fuels and responding to the nation's wildland fire problems. This effort, the Quadrennial Fire and Fuels Review, is intended to result in an overall federal interagency strategic planning document for wildland fire management and risk reduction and to provide a blueprint for developing affordable and integrated fire preparedness, fuels reduction, and fire suppression programs. Because of this effort's consideration of affordability, it may provide a useful framework for developing a cohesive strategy that includes identifying long-term options and related funding needs. The preliminary planning, analysis, and internal review phases of this effort are currently being completed, and an initial report is expected in March 2005. The improvements in data, modeling, and fire behavior research that the agencies have under way, together with the new cost-effectiveness focus of the Fire Program Analysis system to support local fire management plans, represent important tools that the agencies can begin to use now to provide the Congress with initial and successively more accurate assessments of long-term fuel reduction options and related funding needs. Moreover, a more transparent process of interagency analysis in framing these options and their costs will permit better identification and resolution of differing assumptions, approaches, and values. This transparency provides the best assurance of accuracy and consensus among differing estimates, such as those of the interagency team and the National Association of State Foresters. In November 2004, the Western Governors' Association issued a report prepared by its Forest Health Advisory Committee that assessed implementation of the 10-Year Comprehensive Strategy, which the association had jointly devised with the agencies in 2001. Although the association's report had a different scope than our review, its findings and recommendations are, nonetheless, generally consistent with ours about the progress made by the federal government and the challenges it faces over the next 5 years. In particular, it recommends, as we do, completion of a long-term federal cohesive strategy for reducing fuels. It also cites the need for continued efforts to improve, among other things, data on hazardous fuels, fire management plans, the Fire Program Analysis system, and cost-effectiveness in fuel reductions; all are challenges we have emphasized today. The progress made by the federal government over the last 5 years has provided a sound foundation for addressing the problems that wildland fire will increasingly present to communities, ecosystems, and federal budgetary resources over the next few years and decades. But, as yet, there is no clear single answer about how best to address these problems in either the short or long term. 
Instead, there are different options, each needing further development to understand the trade-offs among the risks and funding involved. The Congress needs to understand these options and trade-offs in order to make informed policy and appropriations decisions on this 21st century challenge. This is the same message we provided to this subcommittee 5 years ago in calling for a cohesive strategy that identified options and funding needs. But such a strategy still has not been completed. While the agencies are now in a better position to do so, they must build on the progress made to date by completing data and modeling efforts under way, updating their fire management plans with the results of these data efforts and ongoing research, and following through on recent cost-effectiveness and affordability initiatives. However, time is running out. Further delay in completing a strategy that cohesively integrates these activities to identify options and related funding needs will only result in increased long-term risks to communities, ecosystems, and federal budgetary resources. Because there is an increasingly urgent need for a cohesive federal strategy that identifies long-term options and related funding needs for reducing fuels, we have recommended that the Secretaries of Agriculture and the Interior provide the Congress, in time for its consideration of the agencies' fiscal year 2006 wildland fire management budgets, with a joint tactical plan outlining the critical steps the agencies will take, together with related time frames, to complete such a cohesive strategy. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or at nazzaror@gao.gov. Jonathan Altshul, David P. Bixler, Barry T. Hill, Richard Johnson, and Chester Joy made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the past two decades, the number of acres burned by wildland fires has surged, often threatening human lives, property, and ecosystems. Past management practices, including a concerted federal policy in the 20th century of suppressing fires to protect communities and ecosystem resources, unintentionally resulted in a steady accumulation of dense vegetation that fuels large, intense wildland fires. While such fires are normal in some ecosystems, in others they can cause catastrophic damage to resources as well as to communities near wildlands known as the wildland-urban interface. GAO was asked to identify the (1) progress the federal government has made in responding to wildland fire threats and (2) challenges it will need to address within the next 5 years. This testimony is based primarily on GAO's report Wildland Fire Management: Important Progress Has Been Made, but Challenges Remain to Completing a Cohesive Strategy (GAO-05-147), released on February 14, 2005. Over the last 5 years, the Forest Service in the Department of Agriculture and land management agencies in the Department of the Interior, working with the Congress, have made important progress in responding to wildland fires. The agencies have adopted various national strategy documents addressing the need to reduce wildland fire risks; established a priority for protecting communities in the wildland-urban interface; and increased efforts and amounts of funding committed to addressing wildland fire problems, including preparedness, suppression, and fuel reduction on federal lands. In addition, the agencies have begun improving their data and research on wildland fire problems, made progress in developing long-needed fire management plans that identify actions for effectively addressing wildland fire threats at the local level, and improved federal interagency coordination and collaboration with nonfederal partners. The agencies also have strengthened overall accountability for their investments in wildland fire activities by establishing improved performance measures and a framework for monitoring results. While the agencies have adopted various strategy documents to address the nation's wildland fire problems, none of these documents constitutes a cohesive strategy that explicitly identifies the long-term options and related funding needed to reduce fuels in national forests and rangelands and to respond to wildland fire threats. Both the agencies and the Congress need a comprehensive assessment of the fuel reduction options and related funding needs to determine the most effective and affordable long-term approach for addressing wildland fire problems. Completing a cohesive strategy that identifies long-term options and needed funding will require finishing several efforts now under way, each with its own challenges. The agencies will need to finish planned improvements in a key data and modeling system, LANDFIRE, to more precisely identify the extent and location of wildland fire threats and to better target fuel reduction efforts. In implementing LANDFIRE, the agencies will need more consistent approaches to assessing wildland fire risks, more integrated information systems, and a better understanding of the role of climate in wildland fire. In addition, local fire management plans will need to be updated with data from LANDFIRE and from emerging agency research on more cost-effective approaches to reducing fuels. 
Completing a new system designed to identify the most cost-effective means for allocating fire management budget resources--Fire Program Analysis--may help to better identify long-term options and related funding needs. Without completing these tasks, the agencies will have difficulty determining the extent and location of wildland fire threats, targeting and coordinating their efforts and resources, and resolving wildland fire problems in the most timely and cost-effective manner over the long term. A November 2004 report of the Western Governors' Association also called for completing a cohesive federal strategy to address wildland fire problems.
NOAA’s National Weather Service (NWS) manages the approximately 11,000 weather-monitoring stations across the country that are part of the Cooperative Observer Program. Volunteer observers at the stations generally record daily maximum and minimum temperatures and 24-hour precipitation totals and submit the data to NWS over the telephone, by Internet, or by mail. The records for stations in the Cooperative Observer Program can stretch back well over a century, with some records predating the establishment of NWS in 1890. NWS uses data from these Cooperative Observer Program stations to support weather forecasts and warnings and other public service programs. The data are also used by others, including state climatologists, farmers, and resource planners such as energy providers that use weather information to anticipate and plan for varying levels of energy consumption. NOAA’s National Climatic Data Center (NCDC) established the USHCN in 1987 by selecting a subset of weather-monitoring stations from the existing Cooperative Observer Program network of stations. The USHCN currently consists of 1,218 stations. NCDC has twice revised the makeup of stations that compose the USHCN—in 1996 and 2009—primarily to extend the weather records of stations that have closed over time as volunteer observers have discontinued their service. To address this issue, NCDC added data from nearby stations with similar temperature trends that are continuing to gather and report data. In all, NCDC has added over 100 stations as of the latest revision in 2009. NCDC does not have a direct role in managing USHCN stations but relies on NWS’s weather forecast offices throughout the contiguous United States to continue to manage the stations as part of the larger group of weather-monitoring stations in the Cooperative Observer Program. For example, NCDC relies on weather forecast offices to maintain records on the location of the stations and other conditions that can affect weather observations, including the types of equipment used to measure temperature and precipitation and the time of observation. NCDC uses USHCN data to assess and monitor climate variation and change, including to quantify national- and regional-scale temperature trends within the contiguous 48 states. On the basis of its analysis of USHCN data, NCDC estimates that the average surface temperature across the contiguous states has warmed by about 1.4 degrees Fahrenheit since 1895. NCDC’s analysis has also identified areas of the country where temperatures have cooled or remained relatively stable. NCDC combines temperature records from the USHCN with temperature records from weather-monitoring stations around the world to analyze global temperature trends. This analysis has in turn been summarized in the assessment reports of the Intergovernmental Panel on Climate Change, an international body that reviews and assesses the most recent scientific and technical information produced worldwide relevant to the understanding of climate change. NWS headquarters establishes the policies, standards, and requirements for managing the Cooperative Observer Program, and weather forecast offices in six NWS regions (central, eastern, southern, western, Alaska, and Pacific) have responsibility for recruiting and training observers and installing and maintaining temperature-measuring equipment and rain gauges on observers’ properties. NWS applies the same standards and requirements to all stations in its Cooperative Observer Program, including those in the USHCN. 
In particular, NWS has established siting standards for measuring air temperature to ensure uniformity in meeting national and international requirements for climate observation. The standards, which cover conditions in the immediate vicinity of the stations, specify that temperature-measuring instruments should
• not be sited on rooftops;
• be installed over level terrain;
• be installed at least 100 feet from any extensive concrete or paved surface;
• be mounted 4 to 6 feet above the surface; and
• be no closer than four times the height of any nearby building, tree, fence, or similar obstruction.
NWS guidelines state that implementation of these standards should be flexible and balanced with other factors, such as the availability of space. According to NWS, these siting standards are based in part on recommendations of the World Meteorological Organization, an agency of the United Nations that, among other things, coordinates the activities of member states to generate data and information on weather and climate in accordance with international standards. For example, according to World Meteorological Organization guidelines, the best sites for measuring air temperature are over level ground; freely exposed to sunshine and wind; and not shielded by or close to trees, buildings, and other obstructions. NWS has also established management requirements for weather-monitoring stations that call for inspections of stations and updates of station records to reflect any changes. The requirements for inspections call for a minimum of one inspection by weather forecast office officials per year and specify that during these inspections, the officials are to review observers’ practices for taking weather measurements, check equipment and perform any needed repairs, and assess the conditions surrounding the station, among other things. The management requirements state that, even if there are no changes at a station, officials from weather forecast offices should update each station record at least once every 5 years. To provide a complete and permanent record of a station, NWS has designed an information system that weather forecast offices are to use to record the dates of inspections and update station records. Such records are used by NCDC and other researchers to help interpret weather records from a station and determine how factors such as station location and measurement instruments affect the weather records. Figure 1 depicts the roles of NWS and NCDC in managing the USHCN. According to NCDC officials, achieving a relatively uniform geographic distribution across the contiguous 48 states was a high priority when selecting USHCN stations and was balanced with other factors, including how long stations had collected temperature records, limited periods of missing temperature data, and the stability of measurement conditions. According to NCDC officials, consideration of siting conditions in the immediate vicinity of stations played a limited role in both the initial selections in 1987 and when stations were added in 1996 and 2009 because they considered other factors, such as geographic distribution, to be more important to the analysis of long-term temperature trends. NCDC officials told us that in selecting stations for the USHCN, the agency placed a high priority on achieving geographic distribution across the contiguous 48 states, so that the network could help identify both national and regional warming and cooling trends. 
To achieve the geographic distribution needed to identify regional trends, according to agency officials, NCDC aimed to select a minimum of two stations from each of the 344 climate divisions across the country. NCDC officials acknowledged that they encountered difficulties achieving the desired geographic distribution in certain areas of western states—such as Nevada—that have a relatively low population density and thus fewer stations to choose from because of a lack of volunteers to serve as observers. As a result, according to NCDC officials, station density is slightly higher across the eastern states than in the western states. Our analysis of all 1,218 USHCN stations (including active stations and those that were inactive or closed) found that while NCDC generally met its aim of two stations per climate division, 14 percent of climate divisions had fewer than two stations. As of April 2011, 20 percent of climate divisions had fewer than two active stations (see fig. 2). According to NCDC officials, the existing climate divisions are only one way to partition the nation’s climate, and if divisions were being developed today, the climate divisions would differ in number and in the areas they cover. According to NCDC officials and documents describing the process used to select USHCN stations in 1987 and to amend the list of stations in 1996 and 2009, the agency also sought stations that had temperature records dating back to the early 20th century, had limited periods of missing data, and had a limited number of station changes, but sometimes made exceptions to these factors.
• Number of years of temperature records. In order to detect long-term temperature trends, NCDC aimed to select stations that had a long history of temperature records, ideally dating back to the early 20th century. In some cases, however, NCDC selected stations with a shorter history of temperature records than was ideal to ensure geographic distribution of stations across the contiguous 48 states, according to officials. NCDC officials also told us that they created composite stations to achieve a minimum record length when no stations in a particular geographic area had been collecting temperature records as long as they sought. According to NCDC officials, they create a composite station by combining data from one or two stations that have closed with data from an active station in the same area whose temperature records overlap in time with records from the closed station or stations and continue to the present. NCDC officials told us that they compare the stations’ overlapping temperature trends before creating a composite to help ensure that the climates at the stations are similar. According to NCDC documents, the initial selection of USHCN stations in 1987 included 84 composite stations, and, as of the latest revision to the network in 2009, the number of composite stations had increased to 208, largely in response to station closures. Our analysis of the 1,218 stations that make up the USHCN as of the latest revision in 2009, including composite stations, found that NCDC has largely achieved its desired record length. Specifically, as of 2010, over 85 percent of the stations had a record length dating back more than 100 years, and another 14 percent had temperature record lengths of 76 to 100 years. Less than 1 percent of stations had record lengths of 75 or fewer years.
• Extent of missing data.
NCDC officials told us that they also attempted to select USHCN stations with limited periods of missing data but that they often had to select stations with incomplete temperature records, including stations that were missing data for multiple years, because few stations have complete records. For example, about half the data from the Little Falls Mill Street station, located in upstate New York, are missing. The station’s record has data for a few years in the 19th century, but data in the intervening years are sparse, with frequent gaps in the middle of the 20th century, according to a 1990 NCDC report on the USHCN. According to NCDC, various factors result in missing data, such as periods when a volunteer observer is not available or when instruments malfunction and need to be repaired. Our analysis of temperature records shows that only 24 of the 1,218 USHCN stations (about 2 percent) have complete temperature data from the time they were established through 2010; the remaining 98 percent of stations are missing an average of 5 percent of temperature data. To generate uninterrupted temperature records, NCDC uses estimates for the missing data based on records from nearby stations in the larger set of Cooperative Observer Program weather-monitoring stations. For example, according to agency officials, NCDC used this process to fill in missing data for the Little Falls Mill Street station. According to NCDC officials, filling in missing data ensures that temperature records from all areas of the contiguous 48 states are represented when the agency uses the USHCN to identify national temperature trends.
• Stability of measurement conditions. A final consideration in selecting USHCN stations was NCDC’s desire to maximize the stability of measurement conditions—such as station location, type of temperature-measuring instrument, and time of day when observations were recorded—because such stability makes it easier to discern actual temperature trends at a station. NCDC officials told us that, like stability in other measurement conditions at USHCN stations, stability in siting conditions facilitates officials’ ability to use temperature data to accurately identify long-term warming and cooling trends, even if those conditions do not meet NWS siting standards. Most stations with long temperature records, however, are likely to have undergone multiple changes in measurement conditions. For example, according to NCDC’s records, the Reno, Nevada, USHCN station was originally located at an NWS weather forecast office before being moved in the mid-1930s to an airport and then again in the 1990s to another location at the same airport. According to NCDC, such changes in measurement conditions may cause a rise or drop in the temperatures recorded at stations, which could affect the temperature trends identified using the USHCN. For example, NCDC has studied the impact of a gradual change in the time that observers record temperature measurements from afternoon to morning observation times and concluded that the change has obscured the warming trend across the contiguous 48 states, which would otherwise have appeared more pronounced. NCDC officials told us that they use statistical methods to identify significant shifts in temperature data unrelated to actual trends in temperature and to adjust the data to remove such shifts. 
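NCDC’s actual adjustment methods are not described in detail in this report, and they are more sophisticated than what follows. The sketch below is a minimal illustration of the general idea, assuming monthly temperature series and a single nearby reference station: a step change in the difference between a station and its neighbor is detected and then removed. The function names, the difference-series test, and the significance threshold are all illustrative assumptions rather than NCDC’s procedure.

```python
# Minimal sketch (not NCDC's actual method): detect a single step change in a
# station's temperature series by differencing against a nearby reference
# station, then shift the earlier segment so the artificial jump is removed.
# Names and thresholds are illustrative assumptions.

import numpy as np

def detect_shift(station, reference, min_seg=24):
    """Return (index, statistic) for the most likely step change in station - reference."""
    diff = np.asarray(station, dtype=float) - np.asarray(reference, dtype=float)
    best_idx, best_stat = None, 0.0
    for i in range(min_seg, len(diff) - min_seg):
        left, right = diff[:i], diff[i:]
        # two-sample t-like statistic for a mean shift at index i
        pooled = np.sqrt(left.var(ddof=1) / len(left) + right.var(ddof=1) / len(right))
        if pooled == 0:
            continue
        stat = abs(left.mean() - right.mean()) / pooled
        if stat > best_stat:
            best_idx, best_stat = i, stat
    return best_idx, best_stat

def adjust_series(station, reference, threshold=5.0):
    """Remove a detected step by aligning the earlier segment with the later one."""
    station = np.asarray(station, dtype=float)
    idx, stat = detect_shift(station, reference)
    if idx is None or stat < threshold:
        return station  # no significant shift found; leave the record as is
    diff = station - np.asarray(reference, dtype=float)
    offset = diff[idx:].mean() - diff[:idx].mean()
    adjusted = station.copy()
    adjusted[:idx] += offset  # align the earlier segment with current measurement conditions
    return adjusted
```

A production method would likely compare each station against many neighbors and handle multiple shifts over a long record; the single-pair, single-shift case here only illustrates the basic logic of separating a measurement artifact from the climate signal.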
According to NCDC officials, all 1,218 USHCN stations have undergone at least one change in measurement conditions requiring such an adjustment, with an average per station of four to five changes. NCDC officials acknowledged that a greater degree of stability of measurement conditions than typically found at USHCN stations would be preferable. As a result, NCDC has established a new network of surface weather-monitoring stations specifically to monitor the nation’s climate—the U.S. Climate Reference Network—and is establishing a second one—the U.S. Regional Climate Reference Network (see app. II). According to agency officials, they have developed criteria for selecting locations for stations in the new networks to help ensure a greater degree of station stability in comparison with the USHCN and reduce the need to identify and remove shifts in temperature records that are unrelated to actual warming or cooling. NCDC officials told us these new networks can be used to construct a continuous temperature record with the USHCN once the new networks have a sufficient period of overlap with the USHCN to allow for a comparison of temperature trends. The extent to which stations met specific NWS siting standards played only a limited role in the initial selection of stations for the USHCN in 1987 and when the makeup of the USHCN was revised in 1996 and 2009, according to NCDC officials. NCDC officials told us they considered other factors, such as geographic distribution and a long history of temperature records, to be more important to their ability to analyze long-term temperature trends than strict adherence to NWS’s siting standards. For example, NCDC has included in the USHCN the Central Park station in New York City, which has a temperature record dating to 1876 and has had limited moves, even though current information on the station shows it is encircled by trees. NCDC officials said that, in an effort to consider some information on siting as part of the process of selecting stations, they obtained recommendations from state climatologists and others with detailed knowledge of the siting conditions at stations in their states. NCDC officials told us that another reason siting conditions played a limited role in their initial selection of USHCN stations in 1987 was that NCDC had limited information about siting conditions at the time. NCDC officials said they generally did not visit stations to examine siting conditions, except for a few stations near their headquarters in Asheville, North Carolina, because it was not feasible to do so with so many stations distributed nationwide. In addition, according to NCDC officials, when they first considered stations for inclusion in the USHCN, they had more limited electronic access to station histories and information about siting conditions than they do today. NCDC officials also said that the station histories they did have may not have included all relevant siting information, such as proximity to obstructions. According to NCDC officials, they may have kept some weather-monitoring stations that do not meet specific NWS siting standards out of the USHCN by generally excluding many sites in large urban areas. For example, weather-monitoring stations located in large urban areas may be too close to extensive paved surfaces or obstructions to meet specific NWS siting standards. When individual stations were excluded, however, it was because they were located in a large urban area, not because they did or did not meet a specific NWS siting standard. 
Similarly, NCDC officials told us that many stations with the longest records were not selected because NCDC considered the temperature records for these stations to have been affected by the stations’ location in or adjacent to large urban areas. Nevertheless, the officials told us, NCDC made exceptions and selected some stations in or near large urban areas. According to NCDC’s 1987 report on its initial designation of the USHCN, 70 percent of the selected stations were located in areas with populations of less than 10,000 in the 1980 census, and 90 percent were located in areas with populations of less than 50,000. According to our survey of NWS weather forecast offices, close to half of USHCN stations do not adhere to one or more siting standards. Weather forecast offices cited a variety of factors that contributed to stations not adhering to siting standards, such as the use of temperature-measuring equipment that limits NWS’s ability to locate stations so that they adhere to the standards. With regard to management requirements for USHCN stations, we found that the weather forecast offices generally but not always met requirements to conduct annual inspections and update station records. The survey responses we received from weather forecast offices that manage stations included in the USHCN indicate that about 42 percent of the active stations in 2010 did not adhere to one or more of the NWS siting standards for air temperature measurement. This percentage is slightly higher than the percentage not meeting the standards in the larger set of Cooperative Observer Program stations in the contiguous 48 states, of which the USHCN is a part. Specifically, according to our survey responses, about 37 percent of the active Cooperative Observer Program stations in 2010 did not adhere to one or more of the standards. The two standards most commonly cited by weather forecast offices as unmet by USHCN stations were distance to obstructions, such as buildings and trees, and distance to extensive concrete or paved surfaces (see fig. 3). According to weather forecast offices’ survey responses, only a small fraction of the stations did not adhere to the other siting standards, including that temperature-measuring instruments be mounted 4 to 6 feet off the ground. In particular, according to our survey responses, only five active USHCN stations (less than 1 percent) were located on a rooftop. We also visited a nonprobability sample of 8 weather forecast offices and 19 stations in the USHCN that are managed by these offices. During these visits, we observed stations that were located closer to obstructions or to extensive concrete or paved surfaces than specified in the siting standards, although the degree to which the stations did not adhere to the standards varied. For example, figure 4 shows 2 stations that did not meet the siting standards. One station was located too close to a building and trees at a wildlife preserve in an otherwise relatively undeveloped area, but the other station was located in a relatively urban area and surrounded by a parking lot, building, and street. 
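The numeric siting standards cited earlier in this report (at least 100 feet from extensive concrete or paved surfaces, sensors mounted 4 to 6 feet above the surface, clearance of at least four times the height of any nearby obstruction, and no rooftop installation) lend themselves to a simple mechanical check. The sketch below illustrates such a check; the data fields describing a station’s surroundings are hypothetical, and in practice, as the survey responses indicate, judging what counts as an extensive paved surface or an obstruction still requires interpretation by weather forecast office staff.

```python
# Illustrative check of a station's siting against the numeric NWS standards
# cited in this report. The SiteSurvey fields are hypothetical; real station
# records may not capture these measurements directly.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Obstruction:
    name: str
    height_ft: float
    distance_ft: float

@dataclass
class SiteSurvey:
    on_rooftop: bool
    level_terrain: bool
    sensor_height_ft: float
    distance_to_paved_ft: float
    obstructions: List[Obstruction] = field(default_factory=list)

def siting_issues(survey: SiteSurvey) -> List[str]:
    """Return a list of the siting standards the surveyed station does not meet."""
    issues = []
    if survey.on_rooftop:
        issues.append("sited on a rooftop")
    if not survey.level_terrain:
        issues.append("not installed over level terrain")
    if survey.distance_to_paved_ft < 100:
        issues.append("closer than 100 feet to an extensive concrete or paved surface")
    if not 4 <= survey.sensor_height_ft <= 6:
        issues.append("sensor not mounted 4 to 6 feet above the surface")
    for obs in survey.obstructions:
        if obs.distance_ft < 4 * obs.height_ft:
            issues.append(f"closer than four times the height of {obs.name}")
    return issues

# Example: a sensor 40 feet from a 25-foot tree and 60 feet from a parking lot
survey = SiteSurvey(False, True, 5.0, 60.0, [Obstruction("a tree", 25.0, 40.0)])
print(siting_issues(survey))  # two standards unmet in this hypothetical case
```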
The two factors most commonly cited by NWS weather forecast offices responding to our survey as contributing to USHCN stations not adhering to one or more of the siting standards were (1) NWS’s preference for locating stations at sites that provide a high degree of station stability and data continuity, even if these sites do not adhere to standards, and (2) the use of temperature-measuring equipment that limits NWS’s ability to locate stations so that they adhere to the standards (see fig. 5):
• Preference for station stability and data continuity. In our survey of weather forecast offices, the most commonly cited factor contributing to USHCN stations not meeting the siting standards was a preference for locating stations at sites that provide stability and continuity of data. For example, officials in the Tampa weather forecast office told us that one USHCN station that was located in a downtown area and did not meet siting standards has a temperature record that begins before 1895, the first year of data used in the USHCN. They said they could either keep this station open or close it, since there were no other options that met the standards either on the current observer’s property or in the surrounding area. They chose to keep the station open because of its long temperature record.
• Limitations due to temperature-measuring equipment. The use of temperature-measuring equipment that is connected by a cable to an indoor readout device can require installing equipment closer to buildings than specified in the standards, according to our survey. Weather forecast office staff must dig trenches for the cables, and paved surfaces such as sidewalks and driveways, as well as the cost of cable for trenching, can limit the length of trenches and consequently the ability to locate stations so that they adhere to the siting standards. According to data from NCDC, about three-quarters of stations in the USHCN use such equipment.
NWS headquarters officials told us they hope to replace cabled temperature-measuring equipment with new wireless equipment that can more easily be located in accordance with siting standards. Specifically, the NWS headquarters office with overall responsibility for the Cooperative Observer Program has developed a draft plan for the program that envisions replacing current equipment with wireless equipment. The draft plan does not specify the number of stations where equipment will be replaced but rather calls for evaluating weather-monitoring stations to determine if they meet the siting standards, identifying candidate stations for installing wireless equipment or relocating them to meet siting standards, and identifying stations that are candidates for being closed. NWS officials said, however, that the agency has not yet approved the plan for implementation. We did not specifically ask about wireless equipment in our survey, but 35 weather forecast offices entered comments expressing support for replacing the temperature-measuring equipment currently used at weather-monitoring stations with wireless equipment. Twenty of the offices specifically cited the ability to improve station siting as the reason for making this change. Comments entered by weather forecast offices on our survey, as well as the draft plan, also cited greater ease of installation and maintenance as additional benefits of wireless equipment. For example, installing wireless equipment would not require digging a trench for a cable. 
Even if NWS approves the draft plan, the use of wireless equipment may not address all siting issues. First, according to NWS officials, commercially available wireless equipment has not yet been developed that meets NWS standards for temperature observations at a cost that is feasible for use at weather-monitoring stations nationwide. The NWS official in charge of monitoring the development of wireless equipment said, however, that such equipment would most likely be available within 5 years. Second, the use of wireless equipment, when available, would not allow NWS to improve siting at all stations that do not currently meet the standards. For example, the observers’ properties at some of the stations we visited were too small to allow any temperature-measuring equipment to be placed far enough from buildings or other obstructions to meet the siting standards, regardless of whether the equipment was wireless or cabled. NWS officials acknowledged that the use of wireless equipment would improve station siting but not eliminate all stations that currently do not meet siting standards. NWS officials also described a wide range of other factors contributing to stations not adhering to siting standards. These include
• the difficulty of recruiting new volunteer observers at sites that meet standards, particularly as the nation’s population has become more mobile and thus less apt to serve as long-term observers;
• properties that may be too small or have trees or other features that make it difficult to locate instruments as far from obstructions as the standards specify;
• the reluctance of observers to allow equipment sited in a location on their property that would meet the standards;
• changes to the observer’s property (e.g., growth of trees) or urbanization of the surrounding area that can cause the stations to not meet standards; and
• natural geographic features in certain areas, such as heavily forested or mountainous terrain, that can hamper the ability to meet the standards.
Our review of files for USHCN stations at 8 weather forecast offices and our survey results show that the offices have generally but not always met the requirement to annually inspect stations to maintain temperature-measuring equipment and determine if changes have occurred requiring station records to be updated, such as changes to siting conditions. Our file reviews also show that the offices generally but not always met the requirement to periodically update station records, even if no changes had taken place at a station. According to NCDC and NWS officials, it is important to annually visit stations and keep station records up to date so that users of the stations’ temperature records, such as NCDC, know the conditions under which the observations were recorded. Any information NCDC has about these conditions, according to agency officials, can be used in conjunction with its statistical methods to identify significant shifts in a station’s temperature data that are unrelated to actual warming or cooling trends and to adjust the data to remove such shifts.
• Annual inspections. The results of our survey indicate that in 2010, 102 of 114 weather forecast offices met the annual inspection requirement for stations in the USHCN. According to our survey results, 12 offices did not meet the requirement at a total of 35 stations. In reviewing files at the 8 weather forecast offices we visited, we also found instances where the annual inspection requirement was not met in 2008 and 2009. 
Specifically, the results of our file reviews show that 3 of the 8 offices did not meet the annual inspection requirement for five stations in 2008, and 1 office did not meet the requirement for one station in 2009. In contrast, for the stations where the requirement had been met, the weather forecast offices had frequently conducted multiple inspections during a year. For example, office staff may have visited a station multiple times to repair equipment, to temporarily relocate temperature-measuring instruments to allow for construction at the observer’s property, or to meet the requirement for semiannual inspections of stations that also record precipitation.
• Station record updates. Until 2005, NWS required that station records be updated at least once every 10 years. At that time, NWS changed the requirement to once every 5 years. In reviewing files at the 8 weather forecast offices we visited, we found that 2 of the 8 offices had consistently met the requirement to update station records within 5 years. In contrast, at 2 of the other offices, the time between updates for four stations was over 10 years. At the remaining 4 offices, the time between updates for one or more stations was over 5 years but less than 10 years. For example, one office did not update a record dated February 2002 until January 2011—almost 9 years after the previous update. When the weather forecast offices updated records, the types of changes they documented included those that can cause shifts in temperature data unrelated to any actual temperature change, including replacement or relocation of temperature-measuring equipment, changes in time of observation, and descriptions of obstructions. Through our survey and visits to 8 weather forecast offices, weather forecast office officials identified a number of challenges to their ability to ensure that station records are updated and to carry out other responsibilities for managing stations in the Cooperative Observer Program, including those in the USHCN. In our survey, the most frequently cited challenge was that weather forecast offices rely on staff assigned to manage the stations to also assist with other office responsibilities. Competing mission requirements at the offices were a closely related and often-cited challenge. For example, weather forecast offices operate 24 hours a day, and office officials explained that staff assigned to manage the stations may also be expected to work shifts, which limits the time they can visit the stations. Some weather forecast offices we visited told us that turnover and reductions in the number of staff assigned to the Cooperative Observer Program result in the loss of institutional knowledge needed to manage weather-monitoring stations. Weather forecast offices, particularly those with large areas to cover, also identified long driving distances to stations as a challenge. For example, officials at one office we visited told us that completing the required annual station visits requires driving 17,000 miles per year; that the round-trip drive to some stations takes longer than 10 hours, leaving limited time to maintain equipment or install equipment at new stations; and that during the winter, some stations are inaccessible. NWS does not use its information systems to centrally track whether USHCN stations adhere to siting standards or if weather forecast offices are meeting the requirement to update station records at least once every 5 years. 
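The two management requirements discussed here, an inspection at least once a year and a station record update at least once every 5 years, are simple date comparisons, so centrally flagging overdue stations would mainly require that the relevant dates be captured in one place. The sketch below shows the kind of check involved; the record fields and the idea of a consolidated feed of station records are assumptions for illustration, not a description of NWS’s Cooperative Station Service Accountability system.

```python
# Hypothetical central check for overdue inspections and record updates.
# Assumes a consolidated list of station records with the relevant dates;
# the field names are illustrative, not drawn from NWS's actual system.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class StationRecord:
    station_id: str
    in_ushcn: bool
    last_inspection: date
    last_record_update: date

INSPECTION_INTERVAL = timedelta(days=365)         # at least one inspection per year
RECORD_UPDATE_INTERVAL = timedelta(days=5 * 365)  # records updated at least every 5 years

def overdue(records, today=None):
    """Yield (station_id, reason) pairs for stations that miss either requirement."""
    today = today or date.today()
    for rec in records:
        if today - rec.last_inspection > INSPECTION_INTERVAL:
            yield rec.station_id, "annual inspection overdue"
        if today - rec.last_record_update > RECORD_UPDATE_INTERVAL:
            yield rec.station_id, "station record update overdue"

records = [
    StationRecord("USH0001", True, date(2010, 3, 1), date(2002, 2, 1)),
    StationRecord("USH0002", True, date(2011, 1, 15), date(2009, 6, 30)),
]
for station_id, reason in overdue(records, today=date(2011, 4, 1)):
    print(station_id, reason)  # flags USH0001 on both counts
```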
NWS also does not have an agencywide policy on what actions to take at stations that do not adhere to siting standards, which creates the potential for inconsistency in how weather forecast offices address such stations. The lack of centralized electronic tracking of performance information for the USHCN and the lack of an agencywide policy on the actions to take at stations that do not meet siting standards limit NWS’s ability to manage the USHCN in accordance with performance management guidelines and federal internal control standards. NWS’s siting standards for weather-monitoring stations and the requirement that station records be updated at least once every 5 years in effect establish goals for each weather forecast office to meet. NWS does not, however, centrally capture data on the extent to which stations in the Cooperative Observer Program and the USHCN meet its siting standards and station record update requirement. NWS has an information system it uses to help manage weather-monitoring stations, but the system has several limitations. The information system allows the agency to record basic identifying information about the stations and the conditions under which volunteer observers record weather observations, including some information about siting conditions and the dates of station record updates. According to NWS officials, however, the agency did not design the system to centrally track adherence to siting standards or the requirement to update station records at least once every 5 years. As a result, NWS is limited in its ability to use its information system to track USHCN performance information. The information system NWS uses to track information related to Cooperative Observer Program stations, including those selected for the USHCN, has the following specific limitations:
• Limited information on adherence to station siting standards. According to NWS headquarters officials, they cannot query their information system to identify the specific weather-monitoring stations that do not meet siting standards or the total number of stations that do not meet standards. For example, in 2009, NCDC requested that NWS verify siting conditions at USHCN stations for a study evaluating the effects of siting conditions on temperature trends. NWS headquarters did not have a way to easily gather this information and instead had to direct its regional offices to have weather forecast office staff verify siting conditions at the stations. According to NWS officials, weather forecast office staff did not make special visits to the stations to gather the information requested by NCDC, as these stations might not have been on their immediate visit schedule, but reviewed files instead.
• Incomplete information on siting conditions. NWS guidance for using the information system directs weather forecast offices to enter descriptions of obstructions at weather-monitoring stations. We found that in some cases, however, such descriptions may not accurately reflect whether temperature-measuring equipment meets the siting standard related to obstructions. This discrepancy can arise because NWS guidance directs weather forecast offices to describe obstructions in relation to a station’s rain gauge. We found that obstructions to temperature-measuring equipment can differ from those to the rain gauge, depending on where the two instruments are located. 
For example, the record for one USHCN station we visited did not include any obstructions because the rain gauge was on a rooftop where there were no obstructions. The station’s temperature-measuring equipment, however, was at ground level and surrounded by buildings on three sides and, as a result, did not meet NWS’s standards for siting temperature sensors. (The station’s temperature-measuring equipment can be seen in fig. 6.) In addition, NWS guidance for using the information system does not direct weather forecast offices to enter descriptions of other conditions that might indicate whether specific siting standards are being met, such as proximity to extensive concrete or paved surfaces.
• Limited ability to track whether the requirement to update station records is being met. The NWS information system is designed to allow weather forecast office staff to enter the date each station record is updated and to store previous versions of station records. NWS officials stated, however, that the information system is not set up to centrally track the performance of weather forecast offices in updating station records within the required 5-year time frame. In addition, the system is not set up to allow NWS to notify weather forecast offices when station records are nearing the 5-year mark and need to be updated. NWS regional offices and weather forecast offices are instead responsible for tracking the status of updates. For example, an official at one office we visited told us that he sometimes forgets to update station records within the 5-year time frame but that the NWS regional headquarters keeps track of the requirement.
• Inconsistent identification of stations included in the USHCN. The NWS information system includes information about all stations in the Cooperative Observer Program, including those designated by NCDC as part of the USHCN, but NWS does not consistently use the system to identify USHCN stations. Specifically, station records allow weather forecast offices to indicate whether stations are part of the USHCN, but offices had done so in only some of the station records we reviewed. As a result, NWS headquarters officials cannot use their information system to determine which stations NCDC has designated as part of the USHCN. Officials at some of the weather forecast offices we visited were also unsure which of the stations they manage had been designated as part of the USHCN. According to NCDC officials, it is important that weather forecast offices have the ability to determine which stations belong to the USHCN so that they can set appropriate priorities for the maintenance, repair, and replacement of temperature-measuring equipment at these stations.
Our work related to the Government Performance and Results Act of 1993 and the experience of leading organizations have shown the importance of developing program performance goals that identify desired results of program activities and reliable information that can be used to assess results. NWS headquarters officials we spoke with acknowledged the need to centrally track performance information related to the management of Cooperative Observer Program stations, which include those selected for the USHCN. The officials said they only recently began tracking the requirement that weather forecast offices inspect stations at least once annually. 
According to these officials, they selected annual station inspections as a performance indicator because the inspection requirement is easy to track using the current information system for managing the stations. NWS provided us with summary data on inspections for all stations in the Cooperative Observer Program, including USHCN stations. The data show that the percentage of stations for which weather forecast offices met the annual inspection requirement increased from 70 percent in 2005 to 80 percent in 2010. NWS headquarters officials also told us that the agency has begun upgrading its current information system and that they are considering options to notify weather forecast offices when updates of station records are overdue and to better track adherence to siting conditions. The officials said they hope to complete the upgrade by the end of fiscal year 2013, depending on the availability of funding. Options being considered for tracking siting conditions include photographs of stations and the use of a rating scale to summarize the extent to which stations adhere to siting standards, similar to the rating scale created for the newer networks developed specifically for climate monitoring (see app. II for further details). According to our survey results, 63 percent of weather forecast offices believe that the use of photographs in the NWS information system would be either very helpful or extremely helpful in evaluating stations’ adherence to siting standards. In addition, 52 percent of the offices responded that the option to check a box in a station’s electronic record to indicate that it does not adhere to the standards would be either very helpful or extremely helpful. Some offices also suggested other tools they would consider helpful in evaluating siting conditions at stations, such as the use of commercially available satellite imagery and maps. NWS does not have an agencywide policy for stations not adhering to siting standards that clarifies for staff in weather forecast offices whether the stations should be closed, relocated, or maintained in their present condition to preserve the continuity of their temperature records. Standards for internal control in the federal government call for federal agencies to document their policies and procedures to help managers achieve desired results. Without an agencywide policy, weather forecast offices do not have a basis for making consistent decisions about what actions to take at USHCN stations that do not adhere to siting standards. NWS headquarters officials we spoke with acknowledged that they had not developed an agencywide policy on the actions, if any, that weather forecast offices should take to address stations that do not adhere to siting standards. They said they recognized the need to develop an agencywide policy and that, in the absence of such a policy, decisions on how to address stations that do not meet siting standards are up to individual weather forecast offices. For example, they said weather forecast offices might consult with NCDC or state climatologists when deciding whether to close stations that do not meet siting standards, but that such outreach is not required. In the absence of an agencywide policy, the NWS western regional office directed weather forecast offices in the region to stop submitting data to NCDC from stations with “egregious” siting conditions in a format that would allow NCDC to use these data when analyzing long-term temperature trends. 
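One of the options described above, a rating scale summarizing how well a station adheres to the siting standards, could be as simple as mapping a station’s measured clearances to a small number of classes. The sketch below is purely illustrative: the class boundaries shown are hypothetical and are not the scale used for the newer climate reference networks or any scale NWS has adopted.

```python
# Hypothetical siting rating: summarize clearance measurements as a class from
# 1 (best) to 4 (worst). The thresholds below are invented for illustration and
# do not reproduce any scale NWS or NCDC actually uses.

def siting_class(distance_to_paved_ft: float, distance_to_obstruction_ft: float) -> int:
    """Return a 1-4 rating, taking the worse of the two clearance measurements."""
    def rate(distance_ft, thresholds):
        for rating, minimum in enumerate(thresholds, start=1):
            if distance_ft >= minimum:
                return rating
        return len(thresholds) + 1
    paved = rate(distance_to_paved_ft, (100, 50, 25))        # hypothetical cutoffs, in feet
    obstruction = rate(distance_to_obstruction_ft, (100, 50, 25))
    return max(paved, obstruction)

print(siting_class(distance_to_paved_ft=120, distance_to_obstruction_ft=30))  # -> 3
```

Any scale actually adopted would need class boundaries grounded in how particular siting conditions are known to affect temperature measurements, which is beyond the scope of this illustration.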
The meaning of “egregious” was not defined and was left to each weather forecast office to interpret. According to NWS officials, none of the other three regional offices in the contiguous United States has developed a similar policy. According to the western regional manager responsible for the Cooperative Observer Program, the region’s policy affected about a dozen stations, which had not been designated as part of the USHCN. Through our visits to weather forecast offices, however, we found that, even in the western region, the offices did not consistently implement the region’s policy. In particular, an official at one of the two offices we visited in the region told us he did not follow the policy because doing so would have affected the majority of stations in his state. NCDC officials told us that, as NWS develops a policy regarding how to address stations in the Cooperative Observer Program (including those designated as part of the USHCN) that do not meet siting standards, it should consider how NCDC uses the temperature data from the stations. Because the data from USHCN stations are used to identify long-term climate trends and the stations were thus selected in part on the basis of the stations’ stability of measurement conditions and continuity of data, NCDC officials said they would caution NWS against relocating or closing stations that do not meet siting standards. NCDC officials said they would consider closing a station only in certain situations, such as an observer not following NWS guidelines when recording weather observations. Given the importance of USHCN data in monitoring climate change and in formulating related public policy, it is important that the public and policymakers have confidence that the network is being managed effectively. NWS has developed station siting standards and management requirements for the USHCN, and performance management guidelines dictate that NWS should gather data on the extent to which these standards are being met. But NWS’s information system does not centrally capture such information. As a result, the agency cannot easily measure the USHCN’s performance against its siting standards and management requirements. Without more complete data on siting conditions, including when siting conditions change, it is difficult for agency management to assess the extent to which the stations meet its siting standards. Similarly, NWS does not have easily accessible data on when station records were last updated for monitoring whether the records are being updated at least once every 5 years as the agency requires. In addition, although federal internal control standards call for agencies to develop policies to maintain control over program activities, NWS has not established an agencywide policy for what to do when, over years and decades, stations no longer adhere to its siting standards because conditions have changed. In the absence of such a policy, it is not clear to weather forecast office officials whether stations that do not adhere to siting standards should remain open because data continuity is important for analyzing long-term climate trends, or whether the stations should be moved or closed. As a result, without a policy with actions for all offices to follow, weather forecast offices may be taking different approaches to address stations that do not meet siting standards. 
To improve NWS’s ability to manage the USHCN in accordance with performance management guidelines and federal internal control standards, as well as to strengthen congressional and public confidence in the data the network provides, we recommend that the Acting Secretary of Commerce direct the Administrator of NOAA to take the following two actions:
• Enhance NWS’s information system to centrally capture information that would be useful in managing stations in the USHCN, including (1) more complete data on siting conditions (including when siting conditions change), which would allow the agency to assess the extent to which the stations meet its siting standards, and (2) existing data on when station records were last updated to monitor whether the records are being updated at least once every 5 years as NWS requires.
• Develop an NWS agencywide policy, in consultation with NCDC, on the actions weather forecast offices should take to address stations that do not meet siting standards.
We provided a copy of our draft report to the Department of Commerce for review and comment. In written comments from the department, NOAA agreed that it can improve its ability to manage the USHCN in accordance with performance management guidelines and federal internal control standards. NOAA also agreed with our two recommendations. Regarding our first recommendation, NOAA stated that NWS has begun the planning process to upgrade the existing information system that captures data for managing Cooperative Observer Program stations, including those that are a part of the USHCN. According to NOAA, the upgrade will include the ability to capture more complete data on siting conditions and to determine if a station’s record has been updated in the last 5 years. Regarding the second recommendation, NOAA said that NWS will work with NCDC to develop a policy to assist weather forecast offices in taking action on stations that do not meet siting standards. NOAA also stated that it understood that, given the scope of our review, we did not assess the effect of stations not meeting siting standards on the reliability of the agency’s analysis of temperature trends. Nevertheless, NOAA added that it was important for our findings to include a discussion of the published peer-reviewed studies that have explicitly examined the USHCN’s data quality and its effects on the reliability of NOAA’s temperature trend data. We did not include such a discussion in our report because this issue was outside the scope of our work. We did, however, reproduce NOAA’s list of relevant studies on this topic together with its comments. NOAA also provided technical comments, which we incorporated into the report as appropriate. NOAA’s comments are reproduced in appendix III. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Acting Secretary of Commerce, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. 
To determine how the National Oceanic and Atmospheric Administration (NOAA) selected stations for the U.S. Historical Climatology Network (USHCN) from the larger set of existing stations in the Cooperative Observer Program network, we reviewed documents from NOAA’s National Climatic Data Center (NCDC) describing the selection process, interviewed NCDC officials, and analyzed NCDC data on stations in the USHCN. Specifically, we reviewed documents written by NCDC officials to identify the factors the agency considered important for monitoring long-term temperature trends, and we interviewed NCDC officials regarding these factors, as well as how they applied them in the selection process. We also obtained data from NCDC on the geographic distribution of USHCN stations across the contiguous 48 states and the length and completeness of the stations’ temperature records. The data we obtained and analyzed came from NCDC’s version 2 of the USHCN. We assessed the reliability of the data by electronically testing them and comparing selected samples to data from other NOAA sources to check for obvious errors in accuracy and completeness. We also reviewed information about the data and the systems used by NCDC to produce the data, interviewed NCDC officials knowledgeable about the data, and worked with officials to clarify inconsistencies before using the data in our analyses. We determined that the data were sufficiently reliable for reporting on the extent to which USHCN stations met the factors NCDC considered important in selecting the stations. To examine the extent to which USHCN stations meet National Weather Service (NWS) siting standards and management requirements for weather-monitoring stations, we developed and administered a survey of meteorologists-in-charge at the 116 NWS weather forecast offices responsible for managing stations in the network. Our questionnaire included questions about adherence to siting standards, reasons for stations not adhering to the standards, and general management challenges. The survey was Web based and accessible through a secure server. On February 7, 2011, we sent an e-mail notification to the 116 meteorologists-in-charge describing the survey and notifying them that it would be activated on the Internet shortly. On February 9, 2011, we formally activated the survey and sent another e-mail containing a link to the survey along with each respondent’s unique username and password. We sent follow-up e-mail messages on February 16, 2011, and February 24, 2011, to those who had not yet responded. Then, starting on March 11, 2011, we contacted the remaining nonrespondents by telephone or e-mail. The questionnaire was available online until March 23, 2011. By that date, the surveys were completed by all 116 weather forecast offices, for a response rate of 100 percent. Because our survey covered all weather forecast offices in the contiguous 48 states, not a sample of them, it was not subject to sampling error. Surveys are, however, subject to nonsampling errors. For example, how a particular question is interpreted, sources of information available to respondents, and how the data are entered in a database or are analyzed can introduce unwanted variability into survey results. We took steps in developing the survey, collecting the data, and analyzing them to minimize such nonsampling errors. For example, GAO survey specialists designed the questionnaire in collaboration with GAO staff who had subject-matter expertise. 
In addition, we conducted four pretests of the draft questionnaire to ensure that the questions were clear and unambiguous, terminology was used correctly, the questionnaire did not place an undue burden on agency officials, the information could feasibly be obtained, and the survey was comprehensive and unbiased. We conducted each of the four pretests over the telephone with one or more NWS officials from each of the four NWS regions in the contiguous United States. On the basis of the feedback we received, we made changes to the content and format of the survey after each of the four pretests. When we analyzed the data, an independent GAO analyst checked all computer programs. Since this was a Web-based survey, respondents entered their answers directly into the electronic survey, eliminating the need to key data into a database, thus minimizing data entry errors. Our survey results are also subject to errors made by weather forecast office staff regarding the number of stations they reported as not adhering to siting standards. For example, one weather forecast office indicated in its survey response that one USHCN station in its area was on a rooftop, but office staff later told us that the survey response was wrong and that none of its USHCN stations is located on a rooftop. In addition, the number of stations reported by weather forecast offices as not adhering to siting standards is subject to the staff’s interpretation of NWS’s siting standards. For example, it is a matter of interpretation and judgment by NWS staff whether objects surrounding a station, such as trees or structures, are considered to be obstructions and thus whether a station is considered to meet or not meet the siting standards. The response to our survey from one weather forecast office we visited indicated that the temperature-measuring equipment at only one of its USHCN stations was closer to an obstruction than specified in the siting standards. We observed obstructions at all three stations we visited, however, and the records for these stations also listed obstructions. In calculating the percentages of the total number of active stations that met the specific criteria we asked about in our survey questions, we did not use responses from weather forecast offices that provided incomplete or inconsistent information. Depending on the percentage being calculated, we did not use responses from at most 6 of the 116 offices. For example, we did not use responses from 3 offices when calculating the percentage of active USHCN stations that did not adhere to one or more of the siting standards, and we did not use responses from 5 offices when calculating the percentage of active Cooperative Observer Program stations not adhering to one or more of the standards. To examine in greater depth the extent to which USHCN stations meet siting standards and management requirements for weather-monitoring stations, we visited a nonprobability sample of 8 NWS weather forecast offices. To ensure geographic distribution in the weather forecast offices we visited, we selected two offices in each of the four NWS regions in the contiguous United States, and we selected the specific offices we visited to ensure a range of sizes in terms of the offices’ forecast areas. We also selected offices with differing structures for supervising staff assigned to manage the Cooperative Observer Program and with a range of other programs the offices are responsible for, such as marine forecasts. 
To examine adherence to siting standards for weather-monitoring stations, we observed siting conditions at a nonprobability sample of 19 USHCN stations. We selected stations to visit to ensure variety in the type of temperature-measuring equipment used at the stations and other factors that could affect siting conditions. During the station visits, we developed and used a checklist that tracked how well the conditions of the site met what was recorded in the stations’ records. In addition, we reviewed files on USHCN stations at each of the weather forecast offices we visited. We reviewed records and annual inspection reports from a total of 81 USHCN stations. We entered information from the records and inspection reports into a database to capture information on the extent to which weather forecast offices adhered to the requirements to update station records and conduct annual inspections, among other things. Because we used a nonprobability sample to select weather forecast offices and USHCN stations to visit, the information we obtained from these visits cannot be generalized to other weather forecast offices or USHCN stations. The visits instead provided us with information on the perspectives of various participants in the weather forecast offices about managing weather-monitoring stations and examples of station siting conditions. To gather additional information on the extent to which USHCN stations meet NWS siting standards for weather-monitoring stations, we also reviewed academic literature that addressed issues and concerns related to siting of weather-monitoring stations, including those in the USHCN. We also reviewed NWS’s policy directives related to station siting and interviewed officials in NWS headquarters, regional offices, and weather forecast offices who were responsible for managing the Cooperative Observer Program. In addition, we interviewed the person at the NWS training center responsible for training NWS staff on management and operation of the Cooperative Observer Program and individuals who have raised concerns about the extent to which stations in the USHCN are meeting siting standards. To evaluate the extent to which NWS tracks USHCN stations’ adherence to siting standards and management requirements and has established a policy for addressing stations that do not adhere to siting standards, we took several actions. In particular, we evaluated the types of data that are captured in NWS’s information system for managing weather-monitoring stations—the Cooperative Station Service Accountability system. We also interviewed NWS officials responsible for managing the USHCN, as well as NCDC officials; reviewed NWS policy directives, briefings, and memorandums related to managing the network; and examined data on the extent to which NWS conducted the required annual inspections of weather-monitoring stations. To determine the extent to which NWS has established a policy to address stations that do not adhere to siting standards, we reviewed NWS documents, such as agency directives, memos, briefings on the future of the Cooperative Observer Program, and an executive summary for a draft strategic plan for the program. In addition, we interviewed NCDC officials and NWS officials from regional, headquarters, and weather forecast offices and from the Cooperative Observer Program training center. Since 2001, NOAA has supported establishment of two new networks of climate monitoring stations. The first to be established, the U.S. 
Climate Reference Network, is intended to detect indications of climate change at a national scale. This network consists of 114 stations in the contiguous 48 states, and NOAA has plans to expand the network to include stations in Alaska. The purpose of the second new network, the U.S. Regional Climate Reference Network, is to detect indications of climate change at a regional rather than national scale. As of July 2011, this network consisted of 63 stations in the southwestern United States, and NOAA hopes to complete the installation of stations for the network across the contiguous 48 states by about 2020, depending on the availability of funding. According to NOAA, once both networks are fully established, about 538 locations in the contiguous United States will have either a U.S. Climate Reference Network station or a U.S. Regional Climate Reference Network station. The station-siting standards for the two new networks have similarities to the siting standards established for the Cooperative Observer Program, which also apply to the USHCN, such as installation over level terrain and not on rooftops. The new networks’ siting standards also differ in some respects, including instances where they are more stringent. For example, the standards for the U.S. Climate Reference Network call for stations to be located farther from concrete or paved surfaces than specified in the standards for stations in the USHCN. The differences reflect the fact that, whereas NCDC designated stations for the USHCN from an existing network of NWS weather-monitoring stations, NOAA has specifically located and designed stations in the newer networks for monitoring the nation’s climate. For example, the use of automated equipment, as well as solar power at many stations, allows for greater flexibility in locating stations in comparison with NWS weather-monitoring stations that rely on volunteer observers and equipment connected by a cable to an indoor readout device. Similarly, to the extent possible, NOAA has placed a priority on locating stations for the new networks on public lands, such as national and state parks. According to NOAA, in comparison with the properties of volunteer observers, such locations have a higher probability of continuing in their present condition without major changes for long periods of time (50 to 100 years). Figure 7 depicts a station in the U.S. Climate Reference Network. NOAA has also established a process to evaluate and select potential sites for stations in the new networks. For example, selecting stations for the U.S. Regional Climate Reference Network includes field and desk surveys, which involve collecting information about the site’s condition from photographs and other sources; evaluation of the surveys by a site selection panel; a vote among panel members to decide among candidate locations; and final approval or disapproval of panel recommendations by a site selection lead. See table 1 for differences in the siting standards applied to the USHCN and the new climate networks. In addition to the contact named above, Stephen D. Secrist (Assistant Director), Richard Bulman, William Carrigg, Joanna Chan, Ellen W. Chu, Joseph Cook, Alysia Davis, N’Kenge Gibson, Stuart Kaufman, Cheryl Peterson, Anne Rhodes-Kline, and Jerome Sandau made key contributions to this report.
The National Oceanic and Atmospheric Administration (NOAA) maintains a network of weather-monitoring stations known as the U.S. Historical Climatology Network (USHCN), which monitors the nation's climate and analyzes long-term surface temperature trends. Recent reports have shown that some stations in the USHCN are not sited in accordance with NOAA's standards, which state that temperature instruments should be located away from extensive paved surfaces or obstructions such as buildings and trees. GAO was asked to examine (1) how NOAA chose stations for the USHCN, (2) the extent to which these stations meet siting standards and other requirements, and (3) the extent to which NOAA tracks USHCN stations' adherence to siting standards and other requirements and has established a policy for addressing nonadherence to siting standards. GAO reviewed data and documents, interviewed key NOAA officials, surveyed the 116 NOAA weather forecast offices responsible for managing stations in the USHCN, and visited 8 forecast offices. In choosing USHCN stations from a larger set of existing weather-monitoring stations, NOAA placed a high priority on achieving a relatively uniform geographic distribution of stations across the contiguous 48 states. NOAA balanced geographic distribution with other factors, including a desire for a long history of temperature records, limited periods of missing data, and stability of a station's location and other measurement conditions, since changes in such conditions can cause temperature shifts unrelated to climate trends. NOAA had to make certain exceptions, such as including many stations that had incomplete temperature records. In general, the extent to which the stations met NOAA's siting standards played a limited role in the designation process, in part because NOAA officials considered other factors, such as geographic distribution and a long history of records, to be more important. USHCN stations meet NOAA's siting standards and management requirements to varying degrees. According to GAO's survey of weather forecast offices, about 42 percent of the active stations in 2010 did not meet one or more of the siting standards. With regard to management requirements, GAO found that the weather forecast offices had generally but not always met the requirements to conduct annual station inspections and to update station records. NOAA officials told GAO that it is important to annually visit stations and keep records up to date, including siting conditions, so that NOAA and other users of the data know the conditions under which they were recorded. NOAA officials identified a variety of challenges that contribute to some stations not adhering to siting standards and management requirements, including the use of temperature-measuring equipment that is connected by a cable to an indoor readout device--which can require installing equipment closer to buildings than specified in the siting standards. NOAA does not centrally track whether USHCN stations adhere to siting standards and the requirement to update station records, and it does not have an agencywide policy regarding stations that do not meet its siting standards. Performance management guidelines call for using performance information to assess program results. NOAA's information systems, however, are not designed to centrally track whether stations in the USHCN meet its siting standards or the requirement to update station records. 
Without centrally available information, NOAA cannot easily measure the performance of the USHCN in meeting siting standards and management requirements. Furthermore, federal internal control standards call for agencies to document their policies and procedures to help managers achieve desired results. NOAA has not developed an agencywide policy, however, that clarifies for agency staff whether stations that do not adhere to siting standards should remain open because the continuity of the data is important, or should be moved or closed. As a result, weather forecast offices do not have a basis for making consistent decisions to address stations that do not meet the siting standards. GAO recommends that NOAA enhance its information systems to centrally capture information useful in managing the USHCN and develop a policy on how to address stations that do not meet its siting standards. NOAA agreed with GAO's recommendations.
Although numerous U.S. agencies are engaged in efforts to provide assistance to foreign police forces, DOD and State are the major providers—providing police training around the world through a variety of authorities. DOD trains and equips foreign police forces to support its counterinsurgency operations. It also provides support for the counterdrug activities of foreign law enforcement agencies, including counterdrug training of foreign law enforcement personnel. DOD provides such assistance around the world through a variety of authorities. For example, section 1004 of the NDAA for Fiscal Year 1991, as amended, authorizes DOD to provide such counterdrug support if requested by an appropriate official of a federal agency with counterdrug responsibilities. State trains and equips foreign police to support a variety of U.S. foreign policy objectives, including suppressing international narcotics trafficking, combating terrorism, and developing and implementing U.S. policies to curb the proliferation of all types of weapons of mass destruction. Different State bureaus carry out police assistance under different authorities. For example, according to State/INL officials, State/INL carries out its mission under authorities in Chapter 8 of the Foreign Assistance Act (Pub. L. No. 87-195), as amended, which, among other things, authorizes the provision of law enforcement training. DOE provides training and equipment to foreign border control, police, and security forces as part of the mission of its National Nuclear Security Administration’s Second Line of Defense program, which is to strengthen the capability of foreign governments to deter, detect, and interdict illicit trafficking in nuclear and other radioactive materials across international borders and through the global maritime shipping system, including by equipping teams to be deployed throughout their countries. USAID provides community-based police assistance as part of its role in promoting the rule of law through assistance to the justice sector. Treasury provides training as part of its mission to support the development of strong financial sectors and sound financial management overseas. DOJ and DHS implement foreign police assistance activities primarily funded by State. Treasury also receives some funds from State. Foreign law enforcement personnel receive training in the United States at locations such as the Federal Bureau of Investigation (FBI) Academy in Virginia and at various DOD training facilities. Trainers provided by various U.S. agencies also travel overseas to provide instruction. Foreign law enforcement personnel are also trained at State-funded international law enforcement academies located in El Salvador, Thailand, Hungary, Botswana, and Peru. The training covers a variety of subject matter, including crime scene investigation, postblast investigations, forensics, and behavioral analysis. We estimate the U.S. government made available $13.9 billion for foreign police assistance during fiscal years 2009 through 2011. Most U.S. funding made available for foreign police assistance during fiscal years 2009 through 2011 provided training and equipment to Afghanistan, Iraq, Pakistan, Colombia, Mexico, and the Palestinian Territories. DOD and State funds constituted about 97 percent of the U.S. funds for police assistance in fiscal year 2009 and 98 percent of U.S. funds for police assistance in fiscal years 2010 and 2011.
Four other agencies provided the remaining amount. On the basis of data provided by DOD, State, DOE, USAID, Treasury, and DOJ, we estimate that the U.S. government made available $3.5 billion in foreign police assistance in fiscal year 2009, $5.7 billion in fiscal year 2010, and $4.7 billion in fiscal year 2011 (see fig. 1). The funds made available focused on sustaining the counternarcotics, counterterrorism, anticrime, and other civilian policing efforts of police forces around the world. (For related GAO work on Pakistan, see GAO, Combating Terrorism: Pakistan Counterinsurgency Funds Disbursed, but Human Rights Vetting Process Can Be Enhanced, GAO-11-860SU (Washington, D.C.: Sept. 11, 2011).) In Colombia, State’s activities included support to the Colombian National Police’s aviation program and training on weapons and other equipment to rural police units. DOD support included training for a special unit of the Colombian National Police. In Mexico, DOD and State funds made available decreased from an estimated $167 million in fiscal year 2010 to $21 million in fiscal year 2011. Activities in Mexico included State’s Mérida Initiative, which provided training and equipment including aircraft and boats, inspection equipment, and canine units. DOD support to Mexico included training on aviation, communications equipment, maintenance, and information sharing. For the Palestinian Territories, State’s funds made available increased from an estimated $97 million in fiscal year 2010 to $142 million in fiscal year 2011. State provided battalion-level basic law enforcement and security training conducted at the Jordanian International Police Training Center located outside Amman, Jordan. Appendix IV contains additional information on activities in Afghanistan, Iraq, Pakistan, Colombia, Mexico, and the Palestinian Territories. Four other agencies—DOE, USAID, Treasury, and DOJ—also made available about $83 million, or 2 percent of the estimated funds, for foreign police assistance in fiscal year 2011 (see table 1). DOE made available the majority of the funds ($52 million) for its nuclear security programs; USAID, Treasury, and DOJ made available the remaining amounts. DOD and State/INL have acknowledged limitations in their procedures to assess and evaluate their foreign police assistance activities and are taking steps to address them. DOD assesses the performance of the national police forces it has trained and equipped for counterinsurgency operations in Afghanistan, Iraq, and Pakistan—countries that were the three largest recipients of DOD’s foreign police assistance funds during fiscal years 2009 through 2011. However, according to an October 2011 DOD report to Congress, the assessment process for Afghanistan does not provide data on civil policing operations such as referring cases to the justice system, a fact that hampers the department’s ability to fully assess the effectiveness of the training it provides to the ANP. DOD plans to begin collecting these data to assess civil policing effectiveness. As of April 2012, State/INL had conducted only one evaluation of a program that includes foreign police assistance activities. Recognizing the need to conduct such evaluations, State/INL is developing an evaluation plan that is consistent with State’s February 2012 Evaluation Policy and implementing its June 2010 guidelines that recommended including evaluation as a part of its budget and planning documents for programs in Iraq and Mexico.
Other priority programs for evaluation include ones for Afghanistan, Colombia, the Palestinian Territories, and Pakistan. DOD assesses the performance of the national police forces it has trained and equipped for counterinsurgency operations in Afghanistan, Iraq, and Pakistan—countries that were the three largest recipients of DOD’s foreign police assistance funds during fiscal years 2009 through 2011. For Afghanistan, DOD has assessed the Afghan National Security Forces, which consists of the Afghan National Army (ANA) and ANP, using its Commander’s Unit Assessment Tool. The assessment tool provides quantitative data for security force units, including personnel, equipment, and training, and qualitative assessments for functions such as training and education. In addition, the assessment tool reports on the operational performance of the ANA and ANP units using rating definition levels. Rating definition levels include (1) independent, (2) effective with advisers, (3) effective with assistance, (4) developing, (5) established, and (6) not assessed. As of August 2011, DOD reported 26 ANP units were rated as independent. We previously reported on U.S. efforts to train and equip the ANP in 2009 and more recently in 2012. For Iraq, DOD used a readiness assessment system to determine when units of the Iraqi security forces, including the Iraqi national police, could assume the lead for conducting security operations. This system’s classified assessments were prepared monthly by the unit’s coalition commander and Iraqi commander. According to multinational force guidance, the purpose of the assessment system was to provide commanders with a method to consistently evaluate units. It also helped to identify factors hindering unit progress, determine resource shortfalls, and make resource allocations. Units were evaluated in the areas of personnel, command and control, equipment, sustainment/logistics, training, leadership, operational effectiveness, and reliability, including how militia and sectarian influences affected the loyalty and reliability of Iraqi police and military forces. Further information on the results of these assessments is classified. For Pakistan, DOD reported that, since March 2009, the Strategic Implementation Plan has been the principal mechanism for monitoring and assessing the administration’s progress in attaining the core Pakistan-related objectives of the President’s Afghanistan-Pakistan strategy, which include developing the counterinsurgency capabilities of Pakistan’s Frontier Corps and army. Although details of these and supporting assessments are classified, DOD reported that a series of events beginning in late 2010 heightened bilateral tension between the United States and Pakistan. Pakistan’s military subsequently requested significant reductions in U.S. military personnel in Pakistan. According to the report, the reduced number of U.S. military personnel and trainers, along with continued delays in obtaining visas, hindered the United States’ provision of security-related assistance to Pakistan. As a result, the progress achieved since 2010 in training, advising, and equipping Pakistan security forces has eroded, particularly in the area of counterinsurgency effectiveness for tactical- and operational-level combat forces. Although DOD is assessing ANP’s operational performance, the department recently reported it lacked data to assess civil policing effectiveness.
According to a DOD October 2011 report, DOD uses the same report template to assess ANA’s and ANP’s ability to meet their counterinsurgency mission, but it does not address civil policing and the other roles and responsibilities of ANP. In 2008, we reported that the deterioration of Afghanistan’s security situation since 2005 had led to increased ANP involvement in counterinsurgency operations, resulting in additional training in weapons and survival skills and counterinsurgency tactics. We also reported that ANP’s role is to enforce the rule of law, protect the rights of citizens, maintain civil order and public safety, control national borders, and reduce the level of domestic and international organized crime, among other activities. In its report, DOD acknowledged that transitioning ANP’s role from performing counterinsurgency operations to a community police force that interacts with the population will be challenging, especially in contested areas. DOD reported that it plans to create a separate ANP report template that will include data on law enforcement operations in 2012. According to the DOD report, the ANP report template will provide data on community policing and law enforcement operations (see table 2). For example, DOD plans to include questions in the ANP report template that will assess the extent to which ANP units are recording complaints from the public. In developing the new template, DOD is working with the International Police Coordination Board (IPCB), according to the department’s report. First established in the Afghanistan Compact at the London Conference in 2006, IPCB serves as the main coordination board for police reform in Afghanistan. Upon its establishment, IPCB had 13 member nations, including the United States. To increase DOD’s ability to assess civil policing effectiveness, IPCB has established a partnership with the International Security Assistance Force-Joint Command. According to the DOD report, IPCB is assisting DOD by having law enforcement professionals report data in its report template, and DOD is assisting IPCB by sharing current and historical ANP data. IPCB has also assisted DOD with drafting targeted questions that will be used within the ANP report template to provide data on the ANP units’ ability to conduct law enforcement operations, which we defined earlier. State/INL issued guidelines in June 2010 that recommended conducting evaluations. These guidelines were developed in response to the Secretary of State’s June 2009 directive for systematic evaluation and to promote a culture change among program offices that included support for conducting evaluations, according to State/INL officials. The bureau’s guidelines recommend that State/INL programs have
• a defined strategy and written performance management plan that identifies performance measures, including indicators and targets, and establishes an approach for evaluation;
• program implementation documents such as letters of agreement, interagency agreements, and contracts that specify State/INL, host country, and implementing partner responsibilities for conducting evaluations; and
• budget proposals for programs that identify funding for evaluations as a separate item.
State/INL guidelines for monitoring and evaluation also identify the types of evaluations that should be performed and the timing for them based on project length and budget.
For example:
• Projects shorter than 2 years may focus on output metrics such as the number of trained and equipped law enforcement personnel.
• Projects longer than 2 years or greater than $25 million must evaluate outcomes and impacts.
• Programs that have a life cycle longer than 5 years or exceed $5 million should conduct one or more midterm evaluations, as well as a final evaluation.
• Programs that exceed $25 million must conduct periodic midterm evaluations and a final evaluation.
(A simplified, illustrative sketch of these thresholds appears after this discussion.) To leverage external expertise for programs exceeding $25 million, State/INL has recommended final evaluations be conducted by an independent party. As a key component of effective program management, evaluation assesses how well a program is working and helps managers make informed decisions about current and future programming. Evaluation provides an overall assessment of whether a program works and identifies adjustments that may improve its results. Types of evaluation include process (or implementation), outcome, impact, and cost-benefit and cost-effectiveness analyses. First, process (or implementation) evaluation assesses the extent to which a program is operating as it was intended. Second, outcome evaluation assesses the extent to which program goals or targets are met. Third, impact evaluation is a form of outcome evaluation that assesses the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. Finally, cost-benefit and cost-effectiveness analyses compare a program’s outputs or outcomes with the costs to produce them. The bureau has conducted only one evaluation of a program that includes foreign police assistance activities because it lacked guidelines and a culture among program offices that supported evaluation, according to State/INL officials. For its one outcome evaluation, State/INL reported that the U.S. Embassy, Beirut, hired a contractor to evaluate its training program for the Lebanese Internal Security Forces between November 2010 and May 2011. The purpose of the evaluation was to assess if the training had been successful, as well as to provide recommendations for its improvement. The final report was submitted to State/INL in June 2011. It identified what elements of the program worked and why the training failed to achieve its higher-order objectives. For example, the evaluators noted that the police training program had trained over 5,000 Lebanese Internal Security Forces personnel and that the training had been largely effective. However, the report concluded that the design of the training was not informed by a systematic assessment of training needs and engagement from the Lebanese Internal Security Forces during the planning process. In response to State’s February 2012 Evaluation Policy, State/INL is developing its annual evaluation plan, according to State/INL officials. The new policy requires that (1) all large programs, projects, and activities be evaluated at least once in their lifetime or every 5 years, whichever is less; (2) bureaus determine which programs, projects, or activities to evaluate; (3) bureaus evaluate two to four projects, programs, or activities over the 24-month period beginning with fiscal year 2012; and (4) program managers identify up to 3 to 5 percent of their resources for evaluation activities. State/INL officials said the bureau will assess its guidelines to ensure they are consistent with State’s policy and incorporate them into its annual evaluation plan.
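The following minimal sketch assumes the State/INL guideline thresholds listed earlier in this discussion can be read as simple decision rules keyed to a project’s length and budget. The function, its inputs, and its output strings are illustrative only; they are not State/INL’s actual tooling or procedure.

```python
# Illustrative encoding of the State/INL guideline thresholds described above.
# This is one reading of the published thresholds, not State/INL's actual process.

def required_evaluations(length_years: float, budget_dollars: float) -> list[str]:
    """Return the evaluation activities suggested by the guidelines for a
    project of the given length (years) and budget (dollars)."""
    required = []
    if length_years < 2:
        required.append("output metrics (e.g., number of personnel trained and equipped)")
    if length_years > 2 or budget_dollars > 25_000_000:
        required.append("outcome and impact evaluation")
    if length_years > 5 or budget_dollars > 5_000_000:
        required.append("one or more midterm evaluations plus a final evaluation")
    if budget_dollars > 25_000_000:
        required.append("periodic midterm evaluations and an independent final evaluation")
    return required

# Example: a 6-year, $30 million program triggers the most extensive requirements.
for item in required_evaluations(length_years=6, budget_dollars=30_000_000):
    print("-", item)
```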
State/INL officials said that the bureau is implementing its monitoring and evaluation guidelines in phases beginning with its largest programs in Iraq and Mexico. Other priority programs for independent external evaluations include Afghanistan, Colombia, the Palestinian Territories, and Pakistan. For Iraq, State/INL officials said they have established a three-person monitoring and evaluation unit for the bureau’s Police Development Program. The unit recently used its civilian police advisers to conduct a baseline assessment of Iraqi law enforcement capabilities and is relying on advice from State/INL’s Office of Resource Management. For example, the office is assisting the unit with developing program objectives and performance measures to ensure they are specific, measurable, attainable, realistic, and timely. State/INL officials identified numerous goals, functions, objectives, tasks, and indicators for the bureau’s Police Development Program in Iraq. For example:
Goal: Iraq’s Police Training Systems provide basic and advanced instruction to impart the skills required while promoting community policing, gender, and human rights.
Function: Community Policing/Community Relations—Police specifically trained in establishing and maintaining positive relationships between the law enforcement agency and the public for the purpose of identifying and solving crimes, enhancing public service, and building community trust in the police.
Objective: Ministry of Interior establishes Community Policing Training.
Task 1: Review existing curriculum for community policing training.
Task 2: Assist General Directorate for Training Qualification as requested to ensure community policing curriculum adopts and integrates international human rights standards in terms of police service delivery.
State/INL’s program office for Mexico has dedicated $3 million in fiscal year 2011 funds to conduct evaluations of its programs and is in the process of identifying contracting mechanisms to complete them, including institutions of higher education in Mexico. U.S. agencies have implemented various mechanisms to coordinate their foreign police assistance activities as part of wider foreign assistance activities. Such mechanisms include (1) interagency policy committees chaired by the National Security Council (NSC) that coordinate policies at a high level; (2) headquarters working groups established to coordinate specific issues, such as antiterrorism and nonproliferation; (3) various working groups at the overseas posts; and (4) special positions to coordinate foreign police assistance activities. However, we noted some areas for improvement, including a lack of defined agency roles and responsibilities and inconsistent information sharing. Interagency groups at various levels coordinate policy, guidance, and activities related to assistance to foreign police. NSC coordinates policies at the highest level of government through interagency policy committees. For example, an NSC-led interagency policy committee on security sector assistance, which includes assistance to foreign police, is conducting a policy review of the security sector. This committee does not conduct coordination or oversight of the actual provision of assistance. One of the goals of the committee is to define the roles and missions of U.S. agencies providing such assistance. The committee is also attempting to establish interagency goals and guidelines to better shape, integrate, prioritize, and evaluate U.S. government efforts in this sector.
The review of security sector assistance was proposed for a variety of reasons, according to a U.S. Institute of Peace report, including a desire to improve the integration, effectiveness, and responsiveness of security sector assistance and a proposal by Secretary of Defense Gates. In addition, according to a State official, the committee was established as a result of NSC concerns about DOD’s increasing role in providing foreign assistance. According to officials of agencies participating in the committee, membership includes NSC, Office of Management and Budget, DOD, State, USAID, Treasury, DOJ, and DHS. The attendees are usually assistant secretaries or deputy assistant secretaries. Working-level officials participate in subgroups such as those on roles and responsibilities. The table below provides examples of various coordination mechanisms. In addition, for Iraq, from 2009 through the end of 2011, the key mechanism for managing the transfer of responsibilities from DOD to State was the Iraq Enduring Presence Working Group composed of individuals from offices in Baghdad and Washington, D.C. In addition to this working group, the embassy’s management section operated an interagency structure composed of 13 sub-working groups that covered all major areas of the transition—provincial affairs, police training, security, and administrative and support initiatives. According to State, DOD, DOJ, and DHS officials, the interagency policy committee on security sector assistance has met sporadically since its inception, which has contributed to delays in issuing a final report and associated recommendations that would address the roles and responsibilities of the various agencies and provide overall U.S. government policy guidance on security sector assistance. Agreeing on roles and responsibilities is a key practice that can enhance interagency collaboration. According to State, DOD, USAID, and DOJ officials, the committee began meeting sometime in 2009 but stopped in December 2010. A State/INL official said the committee reconvened in June 2011 and met or provided documents for review weekly through September. The committee met for a final session to review conclusions and policy recommendations in April 2012. State and DOD officials stated that they reviewed and commented on a draft policy directive on roles and responsibilities that was issued in 2011 and one that was issued in early 2012. Agencies conducted a final review of the proposed draft policy on roles and missions in April 2012. State officials attributed the lack of regular meetings to National Security Staff turnover and workload issues. While State and DOD had mechanisms to manage the transition from DOD to a State-led police development program in Iraq, they did not consistently share information. Establishing collaborative mechanisms to share information with partners is also a key practice for enhancing and sustaining interagency collaboration. Moreover, timely dissemination of information is critical for maintaining national security. The key mechanism for managing the transition was the Iraq Enduring Presence Working Group, composed of individuals from offices in Baghdad and Washington, D.C. In addition, the 2010 Joint Campaign Plan for Iraq—a strategic document composed and approved by top State and DOD officials in Iraq—included tasks State would need to consider as part of the transition.
Despite these mechanisms, there was inconsistent and incomplete sharing of operational readiness assessments of the Iraqi police by DOD. Though State requested official copies of these assessments, DOD did not provide them. According to a former DOD civilian police adviser, DOD destroyed the database that contained the assessments of the Iraqi police forces during the transition, because it had completed its mission to train the Iraqi police. As a result, State developed a baseline assessment of Iraqi law enforcement capabilities without the benefit of DOD’s assessments. Moreover, overseas posts do not consistently document or share the results of their coordination efforts. In 2009, we reported that information is a crucial tool in national security and its timely dissemination is critical for maintaining national security. However, State/INL officials stated that overseas posts do not provide documentation of the results of their coordination efforts. In addition, several State Inspector General reports have discussed the need for agendas and minutes for interagency groups, including in Afghanistan, Colombia, and Mexico. For example, the Inspector General reported that although the working group at the U.S. embassy in Colombia concisely addressed law enforcement issues during these meetings, there was no published agenda or minutes of these proceedings. In another case, while the law enforcement working group at the U.S. Embassy in Islamabad issues minutes to the embassy executive office, it does not necessarily share them with headquarters. The Deputy Chief of Mission for the U.S. Embassy in Bogotá, Colombia, acknowledged that the Homeland Security Group did not record the results of its coordination but stated that it will begin to issue an agenda and minutes for the meetings. The failure of overseas posts to document and disseminate their coordination efforts may hamper the agencies’ ability to have all the information they need to analyze the results of their foreign police assistance activities. Foreign partners’ counterinsurgency, counternarcotics, counterterrorism, and anticrime capabilities are critical to U.S. national security objectives. As such, interagency collaboration is essential to ensuring that U.S. agencies effectively and efficiently manage the resources they contribute to training and equipping foreign police forces. However, U.S. government agencies lack clearly defined roles and responsibilities for providing security sector assistance, including assistance to foreign police forces. While NSC has been tasked with leading efforts to define agencies’ roles and responsibilities, progress to date has stalled. U.S. agencies providing foreign police assistance need to define and agree on their roles and responsibilities to ensure that they make the most rational decisions about U.S. efforts to enhance foreign police forces’ capability. In addition, the lack of information sharing and documentation among agencies at some overseas posts providing foreign police assistance can inhibit the effectiveness of future U.S. assistance efforts. To better prioritize, evaluate, and avoid duplication of U.S. efforts to provide foreign police assistance, we recommend that NSC complete its efforts to define agency roles and responsibilities. To ensure that information is available for future U.S. foreign police assistance efforts, we recommend that the Secretaries of Defense and State establish mechanisms to better share and document information among various U.S. agencies. 
We provided a draft of this report to DOD, State, DOE, USAID, Treasury, DOJ, DHS, and NSC. State and DHS provided written comments, which are reproduced in appendices VI and VII. DOD provided comments by e-mail. In addition, State, DOD, DOE, Treasury, DOJ, and NSC provided technical comments that were incorporated as appropriate. USAID noted that it had no comments. NSC did not comment on the report’s recommendations. DOD concurred with the report’s recommendation to establish mechanisms to better share and document information among various U.S. agencies. State partially concurred and described actions it was continuing to take to collaborate with other federal agencies. State noted that it will work with its interagency partners to identify ways to improve the sharing of best practices and lessons learned concerning U.S. foreign police assistance efforts. DHS noted that it remains committed to continuing its work with interagency partners such as the U.S. Department of Justice and other relevant agencies. This includes work to better define agency roles and responsibilities, as appropriate. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies to the Secretaries of Defense, State, Energy, Homeland Security, and the Treasury; the Attorney General; the Administrator of USAID; the Executive Secretary of the National Security Council; and interested congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. To identify U.S. agencies that trained and equipped foreign police forces during fiscal years 2009 through 2011, we reviewed past GAO reports, relevant legislation, and agency websites. To identify the amount of U.S. government funding made available for foreign police training and equipment activities, we examined past GAO reports; congressional budget submissions, including the Department of State’s (State) Bureau of International Narcotics and Law Enforcement Affairs’ (State/INL) program and budget guides for fiscal year 2011; the Afghanistan Security Forces Fund fiscal year 2012 congressional budget justification; and the Department of Defense (DOD) fiscal year 2012 congressional budget justification and other budget documents. To identify countries and police assistance activities, we reviewed funding amounts reported to GAO by agencies, the fiscal year 2012 budget appendix, congressional budget submissions, agency annual reports, interagency agreements, and other program documents. We also interviewed officials from the Departments of Defense, State, and Energy (DOE); the U.S. Agency for International Development (USAID); and the Departments of the Treasury, Justice (DOJ), and Homeland Security (DHS). We collected data for fiscal year 2010 and fiscal year 2011 to update foreign police assistance funding information provided in our prior report. We used the same definition of police assistance that we used in the previous report.
We defined police training and equipment activities (which we referred to as “police assistance”) as all training—regardless of its content—and equipment provided to law enforcement units or personnel with arrest, investigative, or interdiction authority. Officials from the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats (DASD-CN) updated information on DOD police assistance from fiscal year 2009 for fiscal years 2010 and 2011 using the definition we developed with State. DASD-CN has oversight of program funding through a web-based database. However, specific funds for police assistance are managed at the combatant command level. The data DASD-CN compiled included allotments for fiscal years 2010 and 2011 that were provided by combatant commands for training and equipping activities. Using our definition, DOD’s Defense Threat Reduction Agency provided funding data based on allotments from DOD’s defensewide operations and maintenance account. We also obtained total amounts made available after reprogramming for Afghanistan, Iraq, and Pakistan from the Afghanistan Security Forces Fund, Iraq Security Forces Fund, and Pakistan Counterinsurgency Fund from the Comptroller’s Office in the Office of the Secretary of Defense. We reviewed these figures along with congressional budget justifications and DOD’s fourth quarter, fiscal year 2011 report to Congress on the Iraqi, Afghan, and Pakistan Security Forces, as required by Section 9009 of DOD’s Appropriation Act for fiscal year 2011. We combined data from all funding sources to derive the DOD total. We included funding for equipment and transportation, training, and sustainment. We excluded any infrastructure costs because such costs are not typical of most police assistance activities. We reviewed the DOD data for reasonableness and questioned DOD officials about their methodology and the reliability of the data. Some of the data may have included both military and civilian police personnel, which might result in overestimating DOD funding. However, for fiscal years 2010 and 2011, the majority of DOD funds (over 90 percent) were provided through the Afghanistan Security Forces Fund, Iraq Security Forces Fund, and Pakistan Counterinsurgency Fund, which separate funds provided to military and civilian personnel. To identify any discrepancies in the funding data, we compared the data from fiscal year 2010 and fiscal year 2011 with that provided for fiscal year 2009. We reconciled discrepancies with the agencies and determined that the data were sufficiently reliable for our purposes. State/INL analyzed data reported in its annual program and budget guides to provide allocations for police-assistance activities that fit our definition. The funding data covered all country programs funded through the International Narcotics Control and Law Enforcement (INCLE) account directed to law enforcement, stabilization operations, counternarcotics, border control, and transnational crime. State also used the definition to identify police assistance funded through other foreign assistance accounts. State analyzed appropriations and obligations funding data from the Foreign Assistance Coordination and Tracking System database, which tracks data on U.S. foreign assistance programs. Allotments or allocations were provided for the Assistance for Europe, Eurasia and Central Asia account and the Nonproliferation, Antiterrorism, Demining, and Related Programs account.
State also provided obligations for the Pakistan Counterinsurgency Capability Fund, which received funds transferred from DOD’s Pakistan Counterinsurgency Fund, and allotments from funding transferred from DOD to State under Section 1207 authority of the fiscal year 2006 National Defense Authorization Act. We compared State’s data for fiscal year 2009 with data for fiscal years 2010 and 2011 for reasonableness. We also questioned State officials about their methodology, reviewed the program and budget guides, reviewed other GAO reports that used the same data sources, and discussed data reliability with agency officials. We determined that the data were sufficiently reliable for our purposes. We combined the data from the various funding accounts to derive the State total. The State data included funding provided to Treasury, DOJ, and DHS. It excluded funding provided to State from other agencies, with the exception of Pakistan Counterinsurgency Capability Fund and 1207 funds transferred from DOD. We excluded any infrastructure costs because such costs are not typical of most police assistance activities. DOE’s National Nuclear Security Administration provided allotments and obligations funding data for police assistance for fiscal year 2010 and fiscal year 2011 in response to our request for funding data based on our definition. DOE’s funds were made available from its Defense Nuclear Nonproliferation account. USAID reviewed the Foreign Assistance Coordination and Tracking System database by program element to identify programs that might have a civilian policing component. USAID then consulted with its geographic bureaus and its overseas missions to obtain detailed data not available at headquarters. USAID provided us with funding data based on allotments for activities that included civilian police training. We reviewed the data for reasonableness and discussed their reliability with agency officials. We determined that the data were sufficiently reliable for our purposes. We excluded programs that did not meet our definition, such as judicial exchanges. Treasury provided appropriations funding for police assistance from its Economic Crimes division for fiscal years 2010 and 2011 in response to our request for funding data based on our definition. Funds were made available from the Treasury International Affairs Technical Assistance account and included supplemental funding provided during fiscal year 2010. For DOJ, we used funding data provided by the Federal Bureau of Investigation (FBI) and the Drug Enforcement Administration (DEA). The FBI explained that the primary purpose of its foreign police training is not to provide ‘foreign assistance.’ Rather, the primary purpose of such training is to further the FBI’s statutorily authorized mission to detect, investigate, and prosecute crimes against the United States, which include federal crimes of terrorism and other crimes that the FBI is authorized to investigate extraterritorially. FBI provided funding data using its definition of police assistance: any activity, including the provision of equipment in association therewith, that is intended to develop or enhance foreign law enforcement capabilities to prevent, deter, detect, investigate, or respond to criminal or terrorist acts or support public safety and security. Such training occurs both in the United States and abroad.
FBI officials explained that our definition would exclude some types of law enforcement personnel, such as crime lab technicians, who do not have arrest authority, and that they could not isolate such individuals from their submission. FBI provided data on obligations that were also disbursed based on its definition. This definition did not materially affect the total amount of U.S. funding. We reviewed the data for reasonableness and determined that the data were sufficiently reliable for our purposes. DEA provided data on obligations that were also disbursed in response to our request for data based on our definition. DEA provided funding data by country for Afghanistan, Colombia, and Mexico, but neither DEA nor FBI provided total funding data by country. We combined the funding data provided by DOD, State, DOE, USAID, Treasury, and DOJ to obtain total U.S. government funds made available. The amounts are estimates because, according to agency officials, agencies do not generally track funding by a category specifically for activities to train and equip foreign police. In addition, to estimate funding for all elements of police training, the agencies relied on project code reports, manual estimates, and data calls to overseas posts. On the basis of our review of the data and discussions with agency officials, we determined that the data were sufficiently reliable for a broad estimate of U.S. government funding. To assess the extent to which DOD and State/INL report on the results of their police assistance activities for countries with their largest programs, we reviewed GAO reports, including those that examined the capabilities of the Iraqi Security Forces and Afghan National Security Forces, including the Iraqi national police and Afghan National Police (ANP). We also reviewed DOD’s October 2011 Report on Progress toward Security and Stability in Afghanistan. Within DOD, we spoke with officials from U.S. Central Command and the Afghan National Security Forces Desk. Within State, we spoke with officials from relevant components about State/INL’s monitoring and evaluation guidelines, including the Office of Resource Management, Office of Program Assistance and Evaluation, Office of Iraq Programs, and the Office of Afghanistan and Pakistan. To identify reporting requirements, we reviewed letters of agreement and interagency agreements provided by State/INL. Further, we reviewed relevant documents including State/INL guidelines for program monitoring and evaluation and one evaluation completed by State/INL for its police assistance activities. To examine the mechanisms U.S. agencies use to coordinate their police assistance activities, we reviewed GAO reports, including those describing practices for enhanced interagency collaboration; State Office of Inspector General reports; and other reports, legislation, and documents describing NSC’s interagency policy committees. We also interviewed State, DOD, DOJ, DHS, Treasury, and USAID officials, including officials who participated on the NSC Security Sector Assistance Interagency Policy Committee. We also interviewed State and U.S. law enforcement officials at the U.S. embassies in Bogotá, Colombia, and Lima, Peru. On the basis of the document review and the testimonial evidence, we identified mechanisms for coordinating foreign police assistance and areas for improvement. We did not assess the overall effectiveness of the coordinating mechanisms. 
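The following minimal sketch illustrates the aggregation approach described earlier in this appendix: hypothetical funding records are filtered to drop infrastructure costs and then summed by agency and overall. The agency names are taken from the report, but the amounts, category labels, and record structure are invented for illustration and do not reflect the reported figures or any agency system.

```python
# Hypothetical illustration of the funding aggregation described above.
# Amounts and category labels are invented; they are not the reported figures.
from collections import defaultdict

funding_records = [
    {"agency": "DOD",   "category": "training",       "amount": 1_200_000},
    {"agency": "DOD",   "category": "infrastructure", "amount": 500_000},   # excluded
    {"agency": "State", "category": "equipment",      "amount": 800_000},
    {"agency": "DOE",   "category": "equipment",      "amount": 150_000},
]

def totals_by_agency(records):
    """Sum amounts made available by agency, excluding infrastructure costs,
    which the report's methodology leaves out of police assistance totals."""
    totals = defaultdict(int)
    for rec in records:
        if rec["category"] != "infrastructure":
            totals[rec["agency"]] += rec["amount"]
    return dict(totals)

by_agency = totals_by_agency(funding_records)
print(by_agency)                # per-agency totals
print(sum(by_agency.values()))  # combined government-wide estimate
```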
We conducted this performance audit from April 2011 through May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our work objectives. This appendix provides information on DOD and State funds made available for police assistance activities during fiscal years 2010 and 2011, by region and country. DOD and State funds constituted about 98 percent of U.S. funds for these purposes. We did not include funds made available from other agencies because they provided only 2 percent of U.S. funds and not all agencies provided information by individual countries. These graphs also do not include regional funds, which totaled $182 million for DOD and State in fiscal year 2010 and $186 million for DOD and State in fiscal year 2011. Our analysis of DOD and State data shows that both DOD and State made available funds for police assistance activities in 8 of 12 recipient countries in the South and Central Asia region for fiscal years 2010 and 2011 (see fig. 5). For Afghanistan, agencies made available more than $3 billion each year, and for Pakistan, agencies made available between $176 million and $299 million each year. Agencies made available less than $10 million per country each year for 8 of the remaining 10 countries. As shown in figure 6, both DOD and State made funds available for police assistance activities in 3 of the 13 recipients in the Near East for fiscal year 2010 and 4 of the 13 recipients in the Near East for fiscal year 2011. State alone made assistance available for 10 recipients in fiscal year 2010 and 9 recipients in fiscal year 2011. State and DOD made available more than $972 million for Iraq in fiscal year 2010 to train and equip the Iraqi security forces, including the provision of equipment, supplies, services, training, facility and infrastructure repair, and renovation. State alone made available $97 million and $142 million to the Palestinian Territories in fiscal years 2010 and 2011, respectively. Agencies made available less than $10 million per country in each fiscal year for 9 of the remaining 11 countries. Figure 7 shows that both DOD and State made funds available for police assistance activities in 7 of 20 countries in fiscal year 2010 and in 6 of 18 countries in fiscal year 2011 in the Western Hemisphere. DOD alone made funds available for assistance in 7 countries each fiscal year, while State alone made funds available in 6 countries in fiscal year 2010 and 5 countries in fiscal year 2011. In fiscal year 2010, Colombia, Mexico, and Haiti each had more than $100 million made available for police assistance activities, while in fiscal year 2011, only Colombia had more than $100 million made available. In fiscal year 2010, agencies made less than $10 million available in police assistance for 14 countries, and in fiscal year 2011, agencies made less than $10 million available in police assistance for 13 countries. Figure 8 shows that both DOD and State made available police assistance in 11 of 21 countries in fiscal year 2010 and 10 of 20 countries in fiscal year 2011 in Europe and Eurasia. 
DOD alone made assistance available to 7 countries in fiscal year 2010 and 5 countries in fiscal year 2011, while State alone made assistance available to 3 countries in fiscal year 2010 and 5 countries in fiscal year 2011. In fiscal year 2010, agencies made available between $1 million and $22 million in police assistance to each of 11 countries, while agencies made available less than $1 million to each of 10 countries. In fiscal year 2011, agencies made available between $1 million and $8 million in police assistance to 12 countries, while agencies made available less than $1 million to each of 8 countries. As shown in figure 9, both DOD and State made police assistance available in the Africa region in 12 of 29 countries in fiscal year 2010 and 12 of 35 countries in fiscal year 2011. DOD alone made assistance available in 6 countries in fiscal year 2010 and 16 countries in fiscal year 2011. State alone made assistance available in 11 countries in fiscal year 2010 and 7 countries in fiscal year 2011. DOD and State made available between $1 million and $14 million to each of 16 countries in fiscal year 2010, and from $1 million to $11 million to each of 13 countries in fiscal year 2011. Agencies made less than $1 million available to each of 13 countries in fiscal year 2010 and less than $1 million to 22 countries in this region in fiscal year 2011. Figure 10 shows that both DOD and State made funds available for police assistance in 6 of 12 countries for fiscal year 2010 and 6 of 14 countries for fiscal year 2011 in the East Asia and Pacific region. DOD alone made assistance available in 3 countries in fiscal year 2010 and 5 countries in fiscal year 2011, while State alone made funds available in 3 countries each fiscal year. DOD and State made between $1 million and $16 million available to each of 6 countries in this region in fiscal year 2010 and 5 countries in fiscal year 2011. Agencies made less than $1 million available in police assistance to 6 countries in fiscal year 2010 and 9 countries in fiscal year 2011. This appendix provides information on DOD and State amounts made available for police assistance activities by account during fiscal years 2010 and 2011 (see tables 4 and 5). For a description of accounts, see table 4. For the amounts made available from each account, see table 5. Profiles on Afghanistan, Iraq, Pakistan, Colombia, Mexico, and the Palestinian Territories can be found on the following pages.
Afghanistan
Related GAO Work: In GAO-09-280, we reported that the Combined Security Transition Command-Afghanistan had begun retraining Afghan National Police units through its Focused District Development program. However, a lack of military personnel constrained the command’s plans to expand the program. We also reported in GAO-10-291 that U.S. agencies reported progress within counternarcotics program areas, but we were unable to fully assess the extent of progress because of a lack of performance measures and interim performance targets to measure Afghan capacity.
GAO Recommendations: In GAO-09-280, we recommended that the Secretaries of Defense (DOD) and State provide dedicated personnel to support creation of additional police mentor teams to expand and complete the Focused District Development program. In September 2010, we closed this recommendation as implemented because DOD and State took actions to increase trainers and mentors for the Afghan police.
• centers in Kunduz and Herat
• Provided antiterrorism assistance to build capacity in protection of national leadership and explosives incident countermeasures
• Provided introductory and advanced training to members of the Sensitive Investigation Unit of the Counternarcotics National Police. This training focused on investigative methods for apprehending drug traffickers.
Iraq
Related GAO Work: During April and May 2012, we provided briefings to staff on selected committees regarding U.S. security assistance to Iraq. The Sensitive But Unclassified briefings covered the transition of lead responsibility from DOD to State for U.S. assistance to Iraq’s military and police.
• Assumed full responsibility for the U.S. presence in Iraq in fiscal
• Funding for 2010 supported U.S. personnel hired to position State to assume responsibility for the police development mission in Iraq. Activities included developing plans and requirements for transitioning police development from DOD to State, training curricula, statements of work, position descriptions, comprehensive work plans, and oversight and administrative processes.
GAO Recommendations: Not applicable.
Pakistan
Related GAO Work: GAO-11-860SU is sensitive but unclassified.
• Provided aviation support through flight and maintenance training to civilian Pakistani law enforcement agencies
• Provided training, technical assistance, and equipment to law enforcement entities, including train-the-trainer and instructor development courses
• Provided training and equipment to Pakistani law enforcement
• Provided antiterrorism assistance to build capacity in protection of national leadership, critical incident management, and protection of digital infrastructure
• Provided equipment, including protective equipment such as helmets and night vision devices, to the Frontier Corps
• Provided counternarcotics training and equipment
• Provided radiation detection equipment to the Port of Qasim. The program included refresher training for Pakistani officials on radiation detection equipment.
Colombia
Related GAO Work: In GAO-09-71, we found that U.S.-funded helicopters provided the air mobility needed to rapidly move Colombian counternarcotics and counterinsurgency forces. U.S. advisers, training, equipment, and intelligence assistance helped professionalize Colombia’s military and police forces. We also reported that State and the other U.S. departments and agencies had accelerated their nationalization efforts, with State focusing on Colombian military and National Police aviation programs.
GAO Recommendations: We recommended that State, in conjunction with the other departments, USAID, and Colombia, develop an integrated nationalization plan that defines U.S. and Colombian roles and responsibilities, future funding requirements, and timelines. State agreed and noted that its annual multiyear strategy report offers the most useful format to address our recommendation. However, we did not believe this report sufficiently addressed our recommendation. In September 2011, State/INL officials in Colombia reported reaching agreement with the government of Colombia to nationalize aircraft, contractor personnel, facility maintenance, and other programs. For example, State/INL officials in Colombia told us they plan to nationalize 103 aircraft by 2014, which would represent an annual cost savings of $83 million.
Mexico
Related GAO Work: In GAO-10-837, we reported on the Mérida Initiative, which provides training and equipment to law enforcement in Mexico and Central American countries.
We found that deliveries of equipment and training had been delayed by challenges associated with an insufficient number of staff to administer the program, negotiations on interagency and bilateral agreements, procurement processes, changes in government, and funding availability. We also found that while State had developed some of the key elements of an implementation strategy, its strategic documents lacked certain key elements that would facilitate accountability and management. In addition, State had not developed a comprehensive set of timelines for all expected deliveries, though it plans to provide additional equipment and training in both Mexico and Central America.
• Provided training and equipment under the Mérida Initiative to help address the problem of increasing crime and violence in Mexico and Central America. Equipment included aircraft and boats.
• Provided antiterrorism assistance to build capacity in protection of
• Provided counternarcotics support, including pilot and maintenance training, surveillance aircraft, information sharing, technical advice, and related support
• SLD provided radiation detection equipment for cargo scanning at five Mexican ports. This included fixed and handheld equipment, maintenance, and in-country training for officials in the ports of Altamira, Lázaro Cárdenas, Manzanillo, and Veracruz. Additional technical assistance was provided to Mexican Customs officials at a national level.

GAO Recommendations
We recommended that the Secretary of State incorporate into the strategy for the Mérida Initiative outcome performance measures that indicate progress toward strategic goals and develop more comprehensive timelines for future program deliveries. State agreed and is working to develop better metrics and more comprehensive timelines. As of April 2012, State is revising its performance measures, according to State officials. GAO will examine the extent to which these efforts address the recommendation in a separate engagement.

GAO Summary
In GAO-10-505, we reported that although U.S. and international officials said that U.S. security assistance programs for the Palestinian Authority had helped improve security conditions in some West Bank areas, State and the Office of the United States Security Coordinator (USSC) had not established clear and measurable outcome-based performance indicators to assess progress. State and USSC officials noted that they planned to incorporate performance indicators in a USSC campaign plan to be released in mid-2010.

Open GAO Recommendation
We recommended that, as State developed the USSC campaign plan for providing security assistance to the Palestinian Authority, the Secretary of State should define specific objectives and establish outcome-based indicators enabling it to assess progress. State partially concurred with this recommendation. It agreed with the need for more performance-based indicators but noted that factors outside its control influence progress. GAO continues to monitor this development.

As part of its assessment process in Afghanistan, DOD uses criteria—called capability milestones—to assess the professionalism and capacity of departments under the Afghan Ministry of Interior, including components of the ANP. Departments are assessed against four capability milestones that range from 1 to 4. A department rated at 1 is fully capable of conducting its primary operational mission but may require coalition oversight.
By contrast, a department rated at 4 has been established but cannot accomplish its mission. DOD's basic assessment system in Iraq contained capabilities ratings in the areas of personnel, command and control, equipment, sustainment/logistics, training, and leadership. Commanders used the assessment results and their professional judgment to determine a unit's overall readiness level. The assessment reports also included the commanders' estimates of the number of months needed before a unit could assume the lead for counterinsurgency operations. DOD also reported readiness assessments for headquarters service companies, such as engineering and signal units that support combat units. The assessment reports included the coalition commander's narrative assessments of the Iraqi unit's overall readiness level, known as the Performance Capability Assessment, which was designed to clarify the overall assessment. The narrative assessed the Iraqi unit's leadership capabilities, combat experience, and ability to execute intelligence-based operations, and described any life support issues affecting the Iraqi unit's capabilities. Commanders also explained and addressed any regression in the unit's overall assessment level and listed the top three issues preventing the unit from assuming the lead for counterinsurgency operations or advancing to the next level. Remarks were intended to provide information and details that would help resolve the problems that degraded the unit's status. Details on DOD's assessments of the Pakistan Security Forces are classified.

The table below provides definitions of the capability milestones, as identified in DOD's October 2011 Report on Progress toward Security and Stability in Afghanistan. According to DOD's October 2011 report, advisers from the North Atlantic Treaty Organization Training Mission–Afghanistan and Combined Security Transition Command–Afghanistan used capability milestones to assess individual offices and cross-functional activities on a quarterly basis against specific end-state objectives, quarterly milestones, and skill-building requirements. For example, DOD reported in October 2011 that the Afghan National Civil Order Police advanced from requiring some coalition assistance to requiring minimal coalition assistance.

In addition to the individual named above, Judy McCloskey (Assistant Director), Lynn Cothern, Brian Egger, Mark Needham, and La Verne Tharpes made key contributions to this report. Robert Alarapon, Martin De Alteriis, Etana Finkler, Mary Moutsos, and Anthony Pordes provided technical support.
In April 2011, we reported that the United States provided an estimated $3.5 billion for foreign police assistance to 107 countries during fiscal year 2009. We agreed to follow up that report with a review of the extent to which U.S. agencies evaluated and coordinated their foreign police assistance activities. As such, this report (1) updates our analysis of the funding U.S. agencies provided for foreign police assistance during fiscal years 2009 through 2011, (2) examines the extent to which DOD and State/INL assess or evaluate their activities for countries with the largest programs, and (3) examines the mechanisms U.S. agencies use to coordinate foreign police assistance activities. GAO focused on DOD and State because they have the largest foreign police assistance programs. GAO analyzed program and budget documents and interviewed officials from DOD, State, Energy, the U.S. Agency for International Development, Justice, the Treasury, and Homeland Security.

The United States provided an estimated $13.9 billion for foreign police assistance during fiscal years 2009 through 2011. Funds provided by U.S. agencies rose and then fell between fiscal years 2009 and 2011. During fiscal years 2009 through 2011, the United States provided the greatest amount of its foreign police assistance to Afghanistan, Iraq, Pakistan, Colombia, Mexico, and the Palestinian Territories. Funds from the Departments of Defense (DOD) and State (State) constituted about 97 percent of U.S. funds for police assistance in fiscal year 2009 and 98 percent in fiscal years 2010 and 2011.

DOD and State's Bureau of International Narcotics and Law Enforcement Affairs (State/INL) have acknowledged limitations in their procedures to assess and evaluate their foreign police assistance activities and are taking steps to address them. DOD assesses the performance of the police forces it trains and equips in Afghanistan, Iraq, and Pakistan. However, the assessment process for Afghanistan does not provide data on civil policing effectiveness. DOD plans to expand its assessments to obtain data to assess the ability of these forces to conduct civil policing operations. In addition, recognizing that it had conducted only one evaluation of its foreign police assistance activities because it lacked guidelines, State/INL is developing an evaluation plan that is consistent with State's February 2012 Evaluation Policy. This evaluation plan includes conducting evaluations for its largest programs in Iraq and Mexico.

U.S. agencies have implemented various mechanisms to coordinate their foreign police assistance activities as part of wider foreign assistance activities, such as the National Security Council (NSC)-led interagency policy committees that coordinate policies at a high level and various working groups at the overseas posts. However, GAO noted some areas for improvement. Specifically, NSC has not defined agencies' roles and responsibilities for assisting foreign police. Further, DOD and State do not consistently share and document information. For example, DOD did not provide copies of its capability assessments of the Iraqi police to State, which is now responsible for police development in Iraq, because it destroyed the database containing the assessments at the end of its mission to train the police. Further, some U.S. embassies, including the one in Bogotá, Colombia, do not publish agendas or minutes of their proceedings.
GAO recommends that (1) NSC complete its efforts to define agency roles and responsibilities, and (2) the Secretaries of Defense and State establish mechanisms to better share and document information among various U.S. agencies. NSC provided technical comments, but did not comment on our recommendation. DOD concurred and State partially concurred, noting the importance of interagency collaboration.
The International Boundary and Water Commission was established in March 1889 by treaty between the governments of the United States and Mexico. Under the treaty and subsequent agreements, the Commission is responsible for resolving boundary problems, maintaining the boundary between the United States and Mexico, and managing issues involving the waters of the Rio Grande and Colorado Rivers. The focus of Commission responsibilities has evolved over time to include resolving border water quality problems and, more recently, designing, constructing, and operating and maintaining wastewater treatment facilities along the border (see fig. 1). Much of this change in responsibilities has occurred in response to the expansion of economic activity and the growth of population along the border. These developments have heightened the need for additional water sources and an enhanced environmental infrastructure.

The International Boundary and Water Commission is composed of a U.S. Section and a Mexican Section, each headed by a Commissioner, who must be an engineer. The U.S. Commissioner is appointed by the President for an indefinite term. The current Commissioner was appointed on June 15, 1994. The U.S. Section is located in El Paso, Texas; the Mexican Section is in the adjoining city of Ciudad Juarez, Chihuahua, Mexico. As of July 1998, the U.S. Section had 254 staff at its headquarters and project offices located along the border. (See fig. 2.) The U.S. Section must comply with applicable federal rules and regulations regarding financial management and contracting, including the Federal Acquisition Regulation.

When problems such as the need for wastewater treatment plants on the border require joint actions to resolve, the two Commissioners work together to define the problem, plan the solution, and negotiate the level of participation for each country. The Commissioners jointly prepare draft agreements (referred to as "Minutes") on all aspects of each country's participation (including cost-sharing arrangements) to present to both countries' governments for approval. For joint projects determined to require binding international obligations, the U.S. Commissioner must obtain the approval of the Secretary of State.

The U.S. Section receives its direct appropriations through the Department of State's budget. The Section's appropriations for salaries, expenses, and construction activities totaled $85.7 million from fiscal years 1994 through 1997. The Section also receives contributions from federal agencies, state and local municipalities, and the government of Mexico to help construct new projects and operate and maintain existing facilities, such as wastewater treatment plants. Contributions for these purposes totaled approximately $132.2 million from fiscal years 1994 through 1997. Total funding for the 4-year period, therefore, came to approximately $217.9 million, as shown in table 1. The Environmental Protection Agency (EPA) provided $123 million during this period. Approximately $108 million was provided to construct wastewater treatment facilities along the border; another $15 million was given for the administration of an EPA-supported facilities planning program for resolving border sanitation problems. The remaining funds were provided by Mexico ($1.6 million), the General Services Administration and the Western Area Power Administration ($1.9 million), local municipalities ($4.4 million), and other sources ($1.3 million). The U.S. Section received funds totaling approximately $46.7 million in fiscal year 1997.
These funds were used for U.S. Section operations, project engineering activities, operation and maintenance of existing projects, and construction activities. The expenditures included payments for personnel and benefits, training, travel, and supplies and materials; operating and maintaining field offices, dams, and sanitation plants; monitoring river water quality; and directing various construction projects. (See table 2.) Our examination of certain aspects of the U.S. Section’s financial and accounting system found several weaknesses. These weaknesses included problems in recording reimbursements and accounting for funds owed by Mexico. We also observed that the U.S. Section did not follow applicable internal control standards regarding separation of duties and had not yet corrected previously identified financial management deficiencies. In addition, we noted that the U.S. Section has had no external financial statement audits conducted since 1995. In light of prior audit findings and the size of reimbursements to the U.S. Section by Mexico, we examined the U.S. Section’s accounting procedures for billings to Mexico. We found that the U.S. Section had not corrected deficiencies identified in prior external audit reports. In fact, we observed that approximately $16 million owed by Mexico for construction and operations and maintenance costs, including $400,000 that had been billed between July 23, 1997, and March 6, 1998, was not properly recorded as required by generally accepted accounting principles for federal financial reporting purposes. These billings were for the South Bay, California, and Nogales, Arizona, Wastewater Treatment facilities. Since these receivables were not included in the accounting records, the U.S. Section’s financial statements and reports did not reflect the Section’s true financial position. This discrepancy occurred because the U.S. Section lacks an integrated accounting system. For example, record-keeping for funds owed by Mexico was maintained independently from the accounting and finance system. However, the U.S. Section subsequently provided documentation to adequately support payments made. To reduce the risk of error, waste, or wrongful acts and ensure that effective checks and balances exist, “Standards for Internal Controls in the Federal Government” require a separation of duties and responsibilities. Our review of selected U.S. Section disbursements in fiscal years 1997-98 identified that the individual who created an obligation for an expenditure also had the authority to approve a bill for payment without any requirement for approval from contracting or procurement officials that goods or services were received. This scenario is inconsistent with the guidance contained in the “Standards for Internal Controls in the Federal Government,” which call for the separation of duties. To ensure that resources are not put at risk and that financial reports are based on accurate data, federal internal control standards require prompt resolution of audit findings. We found that the U.S. Section had not corrected 11 of 26 deficiencies, or about 42 percent, identified in annual financial statement audits conducted from fiscal years 1992 through 1995. As these deficiencies remained, the U.S. Section was vulnerable to unreliable financial reporting, noncompliance with laws and regulations, and inadequate safeguarding of assets. The 11 deficiencies that the U.S. 
Section had not corrected include the following:
• no procedures for tracking and recording costs to prepare annual financial statements;
• no monitoring of receivable accounts and failure to assess interest on late payments;
• insufficient procedures to record amounts due from state and local governments and the government of Mexico to ensure that all amounts due were properly recorded;
• outdated and/or incomplete written accounting policies and procedures;
• no performance of periodic vulnerability assessments and tests of internal controls;
• no system directive providing policy guidance to establish a single, integrated financial management system;
• no submission of payment performance data to the Office of Management and Budget as required by the Prompt Payment Act;
• no audit follow-up system or procedures to evaluate the system;
• no system to identify and monitor compliance with applicable laws and regulations;
• no performance of periodic, independent reviews of electronic data processing; and
• no performance of periodic, physical counts of inventory on hand.

We examined the negotiated cost-sharing arrangements for the five most recent Commission projects undertaken jointly by the United States and Mexico. The payment terms for two of these projects varied from those for the other three. For three projects—two wastewater treatment plants and a cross-border bridge—each country assumed full responsibility for its respective project costs. However, for the other two projects—the Nogales Wastewater Treatment Plant, completed in 1992, and the South Bay Wastewater Treatment Plant, completed in 1998—the United States financed Mexico's share of the construction costs with a no-interest loan. Mexico will repay the loaned funds in 10 annual installments and was given a grace period until the plants were fully operational to initiate repayment. Mexico's agreed-upon shares of the construction costs for the Nogales and South Bay projects were $1 million and $16.8 million, respectively. In present value terms, the net cost to the United States to finance Mexico's share for the two projects is approximately $8.6 million, as shown in table 3. (A simplified illustration of this type of calculation appears near the end of this report section.) U.S. Section and Department of State officials informed us that this type of arrangement was made by the United States, following negotiation with Mexico, in response to the state of the Mexican economy and the dire need to build these joint projects in the United States, which is the preferred location from technical points of view, but where costs and standards are higher than in Mexico. This arrangement was made due to the strong desire on the part of the U.S. communities, with the support of their congressional delegations, that these two projects go forward expeditiously.

The primary tool used to validate contractors' payment claims is the monthly report prepared by the on-site contract operations representative. We found that the contract operations representative did not submit these required monthly reports on the performance of the contractors at the international wastewater treatment facilities at South Bay, California, and Nogales, Arizona, to the contract administrator. The reports were not submitted because the reporting requirement was not enforced by the contract administrator. As a result, payments of $1.2 million were made without proper documentation demonstrating that the required work had been completed. U.S. Section officials agreed with our findings. They issued directives to the U.S.
Section’s on-site representatives at both facilities stating that the representatives should immediately begin submitting written reports evaluating the contractor’s overall performance and documenting specific performance for each month’s work. Oversight of the U.S. Section of the Commission is minimal. While the Department of State reviews the U.S. Section’s budget requests and provides foreign policy guidance to the section, the Department told us that it does not have the authority to routinely monitor or oversee the management of the U.S. Section because the Section is not a constituent part of the Department of State. And, EPA’s oversight authority over the Commission’s operations is limited to construction projects for which it provides funding. In addition, there is no requirement that financial or program audits of the U.S. Section be conducted. In fact, the U.S. Section has not undergone an external program review since 1980, and, as pointed out earlier, no financial statement audit has taken place since 1995. Moreover, internal audits were not being conducted. Good management practices call for periodic program audits to determine the extent to which the organization is achieving the desired results or benefits established by its charter, the effectiveness of its programs and activities, and the compliance with the laws and regulations applicable to its programs. The U.S. Section has received no external program audits of its activities since 1980. The Commission acts as the project manager for selected EPA-funded projects along the southwest border. EPA officials informed us that they (1) review U.S. Section construction contracts, (2) monitor disbursement of project funds, (3) participate in periodic sessions to review the progress of projects with the U.S. Section and other involved agencies, and (4) conduct periodic site visits. They also said that, when appropriate, they contract with other entities to inspect actual construction activities. With respect to contracting, EPA provides advice to the Commission on both technical and business issues. However, EPA’s review focuses only on contracts for which it provides funding. While the head of the Department of State’s Office of Mexican Affairs told us that the Department does not routinely exercise management oversight, the Department’s Inspector General said that it has authority to conduct audits and contracted for financial statement audits from 1992 to 1995. However, in a July 1998 letter, State’s Inspector General informed us that the Inspector General has not conducted audits since 1995 due to resource constraints. Our review also found that the U.S. Section did not have a well-functioning internal audit capability. The U.S. Section recognizes that internal audit is to be used to determine, through unbiased examinations, that operations are efficient and economical and that other internal controls are sufficient, adequate, and consistently applied. Although the U.S. Section had a compliance office with internal audit responsibilities, the head of the office stated that he was working on other critical personnel issues. He told us that 10 audits had been scheduled for 1998, but no audits had been completed to date. In a July 29, 1998, letter to us, the U.S. Commissioner agreed with our observations regarding the Section’s financial and accounting systems’ weaknesses. 
The Commissioner told us that actions had been taken or were in process to (1) correct general ledger accounts to assure that the accounting reports reflect correct accounts receivable amounts, (2) establish proper segregation of duties and assure that all payments are approved by contracting officers, (3) correct previously identified weaknesses, and (4) provide the internal auditor more time to conduct audits. In light of the Commissioner's actions, this report contains no recommendations for corrective actions regarding the Section's financial and accounting systems.

Border issues between the United States and Mexico form an increasingly critical part of the bilateral relationship. The Commission is involved in a growing number of issues along the U.S.-Mexico border. The expected increase in commerce between the two countries and the resulting impact on the environmental infrastructure are likely to expand the importance of the Commission's operations. Moreover, in addition to its own annual appropriations, the Commission directs funding from other federal, state, and local sources. In light of our findings regarding the finance and accounting systems (including the failure to correct previously identified weaknesses), the weaknesses in contract administration, and the minimal oversight of programs and activities, and given the significance of the U.S. Section's activities, we believe greater oversight of the U.S. Section's financial and program operations is needed. To provide greater oversight of International Boundary and Water Commission operations, Congress may wish to consider requiring the U.S. Commissioner to obtain annual financial statement audits of the U.S. Section's activities by an independent accounting firm in accordance with generally accepted government auditing standards.

In written comments on a draft of this report, the State Department agreed with our matter for congressional consideration. The Department also provided technical comments, which we have incorporated in the report where appropriate. The Department of State's comments are reprinted in appendix II. The EPA reviewed a draft of this report and had no comments. We are sending copies of this report to the Secretary of State; the Administrator, EPA; and the U.S. Commissioner of the International Boundary and Water Commission. Copies will also be made available to other interested parties on request. If you or your staff have any questions concerning this report, please call me at (202) 512-4128. Major contributors to this report are listed in appendix III.

Our objectives were to examine (1) the sources and uses of the funds for the U.S. Section of the International Boundary and Water Commission, (2) certain aspects of the U.S. Section's system of accounting and internal controls, (3) the cost-sharing arrangements for joint projects between the United States and Mexico, (4) the administration of U.S. Section construction and operations and maintenance contracts, and (5) the extent of oversight over the U.S. Section's programs and operations. We conducted our review at the Department of State and the Environmental Protection Agency (EPA) in Washington, D.C.; the International Boundary and Water Commission's U.S. Section in El Paso, Texas; the international wastewater treatment plants in Nogales, Arizona, and South Bay, California; and at EPA Region 9 in San Francisco, California.
At all these locations, we examined available program records and files and interviewed knowledgeable officials involved with the Commission’s activities. We did not conduct a full internal control review nor a financial audit of the U.S. Section. Instead, we focused on key aspects of U.S. Section activities that were related to our audit objectives. Further, we did not evaluate the effectiveness of the U.S. Section’s activities. To identify the amount of the U.S. Section’s direct appropriation, we reviewed the U.S. Department of State’s appropriation data for fiscal years 1994 to 1998 and its budget request for fiscal year 1999. Specifically, we analyzed the U.S. Section’s funding development schedules, Department of State apportionment schedules, and reports on budget execution. To identify those funding sources in addition to the direct appropriation, we analyzed records supporting the U.S. Section’s financial statement. We reviewed the funds reimbursed to the U.S. Section for construction, as well as for operations and maintenance expenses from Mexico, other federal agencies (such as EPA), and state and local municipalities. We examined interagency agreements between the Commission and EPA for the administration of EPA’s Facility Planning Fund. We reviewed the history and status of construction project funding, schedules of anticipated and earned reimbursements, and various congressional hearing records and correspondence. To obtain an understanding of various budget and execution records, we interviewed the U.S. Section’s key financial management officers, including the Chief Administrative Officer, the Chief of the Finance and Accounting Division, and the Budget Office Analyst. We inspected the records of all funds provided to the U.S. Section for fiscal years 1994-98, identifying direct appropriations from the Department of State and all other funding sources. We also analyzed the uses of those funds and reviewed fiscal year 1997 expenditures. To determine that funds owed to the U.S. Section from Mexico and other entities for construction and operations and maintenance costs were properly included and classified in the accounts and financial reports, we reviewed appropriation and funding data documents and project Minutes to identify arrangements for and amounts of payments the accounting system should reflect. We examined accounting system reports maintained by the U.S. Section’s Finance and Accounting Division that included monthly trial balances, accounts receivable aging, and general ledger unbilled receivables. We also reviewed the Mexico receivables records maintained by the Foreign Affairs Office. We analyzed and scheduled construction and operations and maintenance receivable receipts for fiscal year 1994 through April 1998. To assess whether any corrective measures have been instituted, we reviewed earlier financial statement audits to identify and follow up on any condition of improper accountability of funds due to the U.S. Section. In cases of identified deficiencies in the accounting system processes, we discussed the results with the U.S. Section’s Chief Administrative Officer and the Chief of the Financial Services Division to consider what needed to be done to correct the process. To determine that disbursements were exercised by personnel who had delegated authority and were separate from the obligation function, we selected 11 payments made in fiscal years 1997-98 to review for compliance with requirements. 
These payments were chosen because they were of an international nature requiring more sensitive scrutiny and were processed outside the unit that initiates contractor payments. We reviewed the transactions’ documentation and steps followed to determine who initiated, who reviewed, and who approved the execution of these disbursements. We also documented and reviewed the certifications of delegated monetary authority and obligation authority for personnel associated with these payments. In cases of deficiencies in the process and to understand the consequences of not meeting the “Standards for Internal Controls in the Federal Government,” we discussed the results with cognizant officials to consider what needed to be done to correct the process. To determine the extent to which previously identified oversight weaknesses have been addressed, we obtained and reviewed all available U.S. Section reports and management letters related to external audits conducted from 1992-95 and compiled a list of all deficiencies identified. For each deficiency, we interviewed cognizant U.S. Section officials and their staff members and reviewed U.S. Section policies and procedures to determine if corrective actions had been taken. To the extent that previously identified deficiencies had not been corrected, we obtained the rationale for not doing so and documented whether a plan for corrective action had been determined. To assess the adequacy of internal oversight, we interviewed the Compliance Officer and obtained and reviewed pertinent U.S. Section requirements for compliance reviews. We reviewed the U.S. Section’s Internal Audit Directive, the current Compliance Officer’s activities since joining the U.S. Section, reports and working papers related to completed audits, and the Compliance Officer’s future audit plans. To evaluate the cost-sharing arrangements for joint projects with Mexico, we reviewed the Minutes associated with the five most recent negotiations and compared the agreed-upon terms with both Commission and Department of State documentation that demonstrated the level of involvement and when that involvement occurred. To determine how the terms provided to Mexico on two recent joint projects would affect costs, we performed a present value analysis of the repayment schedule. We also reviewed the appropriate Minutes to determine if there were other agreements reached that were not beneficial to the United States. To determine the adequacy of the U.S. Section’s oversight of contract administration, we reviewed the contract administrator’s performance on five recent contracts awarded by the U.S. Section. We identified the applicable regulations, policies, and procedures that govern U.S. Section contracting processes. We selected the five contracts based on their having been awarded in the 1990s, having exceeded $1 million in value, and having files and key personnel located at the U.S. Section in El Paso, Texas. We assessed the performance of the contract administrator on two operations and maintenance contracts to ensure that certified payments were supported by proper documentation. We also reviewed the contractor’s performance by evaluating the extent to which the contractor corrected known deficiencies. To assess the adequacy of management oversight over U.S. Section activities, we obtained and reviewed policies and procedures associated with oversight of the U.S. Section. 
We interviewed cognizant officials of the Department of State, EPA, and state and local entities to identify requirements for oversight of the U.S. Section. We analyzed regulations, policies, and procedures provided by these organizations and compared the requirements to the level of oversight achieved. We performed our work between April and July 1998 in accordance with generally accepted government auditing standards.

Elliott C. Smith, Jeffrey A. Kans, John E. Clary, James B. Smoak, and Linda Kay Willard
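The present value comparison for the Nogales and South Bay financing arrangements (table 3) can be approximated with a short calculation. The sketch below is illustrative only: the discount rate, grace period, and equal-installment assumption are placeholders that are not specified in the text, so the result will not reproduce the $8.6 million figure exactly; the actual analysis used the negotiated repayment schedules.

```python
# Illustrative only: approximates the subsidy implicit in financing a partner's
# construction share with a no-interest loan repaid in equal annual installments.
# The discount rate and grace period below are assumptions, not figures from the
# report; table 3 contains the actual present value results.

def no_interest_loan_subsidy(principal, years=10, discount_rate=0.07, grace_years=0):
    """Principal is disbursed today and repaid in `years` equal installments,
    starting after `grace_years`. Subsidy = principal minus the present value
    of those repayments."""
    installment = principal / years
    pv_repayments = sum(
        installment / (1 + discount_rate) ** (grace_years + t)
        for t in range(1, years + 1)
    )
    return principal - pv_repayments

# Mexico's agreed-upon construction shares from the report: Nogales $1.0 million,
# South Bay $16.8 million. The 2-year grace period is a placeholder.
for name, share in [("Nogales", 1.0e6), ("South Bay", 16.8e6)]:
    subsidy = no_interest_loan_subsidy(share, grace_years=2)
    print(f"{name}: financed share ${share/1e6:.1f} million, "
          f"estimated subsidy ${subsidy/1e6:.1f} million at an assumed 7 percent rate")
```

Under these assumed parameters the combined subsidy comes to several million dollars; the report's $8.6 million estimate reflects the actual repayment timing and the discount rate used in the analysis.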
Pursuant to a congressional request, GAO provided information on the U.S. Section of the International Boundary and Water Commission, focusing on: (1) the sources and uses of the U.S. Section's funds; (2) certain aspects of the U.S. Section's system of accounting and internal controls; (3) the cost-sharing arrangements for joint projects between the United States and Mexico; (4) the administration of U.S. Section operations and maintenance contracts; and (5) the extent of oversight over the U.S. Section's programs and operations. GAO noted that: (1) the U.S. Section of the International Boundary and Water Commission has received total funding of approximately $217.9 million over the last 4 years; (2) the funds were from appropriations and grants or payments from other federal agencies and state and local governments; (3) it also received reimbursement from the Mexican government for costs incurred on joint projects; (4) the U.S. Section expended $46.7 million in fiscal year 1997, including $21.9 million from its appropriations and $24.8 million from grants and payments from others; (5) the funds were used for salaries and benefits, administrative costs, operation and maintenance of International Boundary and Water Commission projects, and construction activities; (6) the cost-sharing agreements between the United States and Mexico for two recently completed projects had payment terms that varied from those used on other joint developments; (7) for these two projects, the United States agreed to finance Mexico's share of costs due to Mexico's economic difficulties and in order to cover the cost of meeting environmental standards of the United States, which are higher than those in Mexico; (8) this resulted in $8.6 million worth of increased costs to the United States; (9) the total investment for those two projects (Nogales, Arizona, and South Bay, California) is $321.9 million; (10) there are weaknesses in certain aspects of the U.S. Section's finance and accounting systems; (11) regarding the administration of U.S. Section operations and maintenance contracts, required monthly reports on contractor performance were not submitted; (12) oversight of the U.S. Section is minimal; (13) although the Department of State reviews the U.S. Section's budget submission and provides policy guidance to the U.S. Commissioner, the Section is not a constituent part of State and, therefore, the Department does not formally examine the Section's managerial activities because it operates administratively as an independent agency; (14) while the Environmental Protection Agency funds some projects along the southwest border, it only reviews U.S. Section contracts and monitors resulting construction projects where it is a major contributor; and (15) the U.S. Commissioner informed GAO in a July 1998 letter that actions have been taken or are in progress to correct the deficiencies discussed in this report.
Under the overall direction of the Under Secretary of Defense for Acquisition, Technology and Logistics, the military services provide on-base furnished living quarters for over 200,000 unmarried enlisted servicemembers at their permanent duty locations in the United States. Commonly referred to as barracks, housing for unmarried members is often cited by DOD officials as a problem area because many military barracks are old, rundown, and otherwise do not meet contemporary DOD standards for size, privacy, and other amenities designed to enhance the quality of life of unmarried members. Junior unmarried members often share dilapidated barracks rooms with one or two other members and a gang latrine with occupants from several other rooms. Also, about 20,000 junior enlisted members assigned to Navy ships continue to live in cramped onboard quarters even when their ships are in homeport. The living conditions in barracks are far different from an apartment or townhouse with two bedrooms, living area, bath, and full kitchen that is the normal housing standard for junior enlisted married members.

The services have established specific goals and milestones for improving the housing provided to unmarried junior enlisted members. First, the services plan to eliminate permanent party barracks—i.e., barracks for servicemembers at their permanent duty locations—with common bath and shower facilities, or "gang latrines," through barracks replacement or renovation. The Air Force already has achieved this goal and the Army, Navy, and Marine Corps plan to eliminate gang latrines by fiscal years 2008, 2007, and 2005, respectively. Second, the Army and the Navy plan to provide each junior enlisted member in the United States a private sleeping room with a kitchenette and bath shared by one other member—referred to as the 1+1 barracks design standard—by fiscal years 2010 and 2013, respectively. The Air Force, which already provides private sleeping rooms, plans to eliminate its barracks deficit and replace its worst barracks by fiscal year 2009. The Marine Corps, given a permanent waiver from the Secretary of the Navy to use a different barracks design standard, plans to provide barracks with sleeping rooms and baths shared by two junior members by fiscal year 2012. Third, the Navy plans to complete its homeport ashore initiative by fiscal year 2008, which will provide barracks spaces for about 20,000 junior members who are currently required to live aboard their ships while in homeport. To improve barracks conditions and achieve these goals, the services plan to spend about $6 billion over the next 6 years. Appendix II shows photographs of old and new style barracks as well as typical living conditions aboard Navy ships.

Service officials state that unmarried junior enlisted servicemembers should live in barracks to help instill service core values, provide for team building and mentoring, and meet operational requirements. However, significant differences exist among the services regarding personnel who are required to live in barracks. More specifically:
• the Army requires unmarried personnel in pay grades E1 through E6 to live in barracks,
• the Navy requires unmarried personnel in pay grades E1 through E4 with fewer than 4 years of service to live in barracks,
• the Air Force requires unmarried personnel in pay grades E1 through E4 to live in barracks, and
• the Marine Corps requires unmarried personnel in pay grades E1 through E5 to live in barracks.
The Military Housing Privatization Initiative, authorized by law on February 10, 1996, provided new authorities that, among other things, allow DOD to provide direct loans, loan guarantees, and other incentives to encourage private developers to construct and operate military family and unaccompanied housing (barracks) either on or off military installations. According to DOD, the initiative was aimed at solving its inadequate housing problem faster and more economically by taking advantage of the private sector's investment capital and housing construction expertise. With private-sector investment, DOD planned to obtain at least 3 dollars in military housing improvements for each dollar that the government invested, thereby reducing the amount of government funds initially required to revitalize housing and accelerating the elimination of inadequate housing. Although there can be exceptions, DOD's position is that the government's estimated total costs for a privatization project also should be equal to or less than the total costs for the same project financed by military construction funding. Servicemembers who live in privatized housing receive a housing allowance to pay for rent and utilities. In fiscal year 1997, the Congress appropriated $5 million for the services to use to initiate privatized barracks projects. However, the Congress rescinded these funds in fiscal year 1999 because the services had developed no plans for privatized barracks.

In June 1997, DOD and the Office of Management and Budget agreed to a set of guidelines that would be used as a frame of reference for scoring privatization projects. The guidelines state that if a project provides an occupancy guarantee, then funds for the project must be available and obligated "up front" at the time the government makes the commitment of resources. In other words, if a project provides an occupancy guarantee, then the value of the guarantee—the cumulative value of the rents to be paid for the housing over the entire contract term—must be obligated at the beginning of the project. As a result, DOD officials stated that such a project might not be financially attractive because the amount of appropriated funds required would be approximately equivalent to the military construction funding that would be required to build the barracks. According to DOD officials, this issue has not been a problem for family housing privatization projects because DOD does not provide occupancy guarantees and does not mandatorily assign members to family housing. Military families can choose where to live and the project contracts include provisions for civilians to rent privatized housing if military families choose not to live there.

Since 1998, we have issued six reports on DOD's military housing program—three about the military housing privatization initiative, one about the services' barracks design standard, one about DOD's process for determining military housing requirements, and one about the differences among the services concerning who is required to live in barracks. In July 1998, we reported on several concerns related to the new military housing privatization program.
These included (1) whether privatization would result in significant cost savings and whether the long contract terms of many projects might result in building housing that will not be needed in the future; (2) whether controls were adequate to protect the government’s interests in the event developers might not operate and maintain the housing as expected; and (3) whether DOD would face certain problems if privatized housing units were not fully used by military members and were subsequently rented to civilians, as the contracts permit. In March 1999, we reported on the status of the services’ implementation of the 1+1 barracks design standard. The report also discussed DOD’s rationale for adopting the standard, the costs of alternatives to the standard, and service views of the impact of the standard from a team-building, individual isolation, or similar perspective. In March 2000, we reported that initial implementation progress for the privatization program was slow, the services’ life-cycle cost analyses provided inaccurate cost comparisons because DOD had not issued standardized guidance for preparing the analyses, and DOD lacked a plan for evaluating the effectiveness of the program. DOD subsequently quickened the pace of family housing privatization, issued standard guidance for privatization life-cycle cost analyses, and developed a program evaluation plan. In August 2001, we reported that despite earlier recommendations, DOD had not implemented a standard process for determining military housing requirements. In that report, we pointed out that the initiative to increase housing allowances heightened the urgency for a consistent process, because the initiative could lessen the demand for military housing by making housing in local communities more affordable. In January 2003, DOD approved a new standard family housing requirements determination process. In June 2002, we noted that by investing about $185 million of military construction funds in the first 10 family housing privatization projects, DOD should obtain housing improvements that would have required about $1.19 billion in military construction funds had only government funds been used. We also reported that privatization projects were not supported by reliable or consistent needs assessments, and the overall requirement for military housing was not well defined. Further, although DOD had included provisions in project contracts designed to protect the government’s interests, our report identified several areas where DOD could further enhance protections to the government. DOD responded by outlining ongoing and planned management actions to address the concerns noted in the report. In January 2003, we reported on the widely varying standards among the services regarding who should live in barracks and the effect this can have on program costs and quality of life. We noted that requiring more personnel (more pay grades) to live in barracks than is justified results in increased barracks program and construction costs and has negative quality-of-life implications because most junior servicemembers would prefer to live off base. We noted that by allowing junior enlisted personnel already living off base with a housing allowance to continue to live off base, the Air Force could reduce planned barracks construction spending by $420 million. 
Accordingly, we recommended that the rationale behind the services’ barracks occupancy requirements be based, at least in part, on the results of objective, systematic analyses that consider the contemporary needs of junior servicemembers, quality-of-life issues, the services’ mission requirements, and other relevant data that would help provide a basis for the services’ barracks occupancy requirements. While DOD agreed in principle with our recommendation, it reiterated the importance of military judgment in such decisions and left unclear the extent to which it is likely to make changes. While the services have considered barracks privatization over the past several years, they have not yet initiated pilot project proposals to determine the feasibility and cost-effectiveness of private sector financing, ownership, operation, and maintenance of military barracks. According to DOD officials, barracks privatization involves unique challenges compared to family housing privatization. These challenges range from the potentially higher amount of appropriated funds needed to secure a privatization contract (as a result of the services’ requirement that unmarried junior members live in barracks) to the differences in where private developers and the military prefer barracks to be located. Deferring to the individual services, DOD has provided limited centralized direction and focus to help overcome the challenges associated with barracks privatization. Recently, each service has independently given increased attention to developing project proposals, with the Navy hoping to do so by the end of 2003. Still, there are unresolved issues associated with barracks privatization and, without more coordination of activities to address these issues, efforts might be duplicated and the benefits from collaboration might be lost. Compared to family housing privatization, barracks privatization includes unique challenges that, thus far, have prevented the development of pilot project proposals. DOD has actively pursued privatization of military family housing and has awarded contracts to construct or improve about 26,000 family housing units by the end of fiscal year 2002 and has plans to privatize an additional 96,000 units by the beginning of fiscal year 2006. The primary problem with privatizing barracks lies in the services’ mandatory assignment policy for unmarried junior enlisted servicemembers and whether this policy implies that DOD would provide private-sector housing developers with an occupancy guarantee. Mandatory assignment, if viewed as an occupancy guarantee, might make a proposed barracks privatization project financially unattractive because a higher amount of appropriated funds would be needed to secure the contract than would be needed for a similar military construction project. Other challenges are related to barracks locations, unit deployments, and funding for housing allowances. The current policy in each service requires mandatory assignment of unmarried junior members to barracks located on base, provided that space is available. According to DOD officials, most military leaders support this policy because they believe that mandatory assignments provide for military discipline and unit integrity. Mandatory assignments, however, might result in the need for more appropriations—in comparison to military construction financing—to cover the obligations that the Office of Management and Budget determines should be recorded at contract award. 
This could make a proposed barracks privatization project financially unattractive. The amount of appropriations needed hinges on whether the mandatory assignment policy would provide private-sector housing developers with a DOD guarantee of occupancy. Because there have been no barracks privatization project proposals to date, it is unclear whether the services' mandatory barracks assignment policies for junior members might be viewed as an occupancy guarantee. Office of Management and Budget officials stated that having a mandatory assignment policy alone would not necessarily mean that the rent paid to the developer over the life of the project would have to be scored up front. However, if the privatization contract specifically stated that mandatory assignment would occur, the officials stated that the office probably would view this as an occupancy guarantee and the project's projected rent would be scored up front. As with family housing projects, Office of Management and Budget officials stated that the scoring of a barracks project depends on the details and circumstances involved in a proposed project and the associated risk to the government. Key issues that might be considered include whether the project allows the private developer enough autonomy to manage the project without significant military control and whether the contract includes provisions for civilians to rent vacant barracks spaces in the event of reduced government demand. Obviously, such issues present problems for the services—specifically, the willingness of the services to relinquish their control of barracks and allow civilians to occupy vacant barracks spaces. With a specific barracks privatization proposal, Office of Management and Budget officials stated they would work with DOD to address the associated scoring questions.

Although the potentially high amount of appropriated funds needed to secure a contract appears to be the most significant challenge to barracks privatization, there are other challenges as noted below.

Barracks location. According to DOD officials, private developers have indicated that they would prefer that privatized barracks be located off base or along an installation's boundary and be severable from the installation. Developers would then have greater flexibility in renting the units to civilians in the event of reduced government demand. However, the services do not want barracks located off base or near installations' perimeter fences, largely for force protection reasons, and, currently, most existing barracks are not located along installation boundaries.

Deployments. In the event of unit deployments, many servicemembers would not be in the barracks and possibly entire buildings could be empty for months. As a result, the developer's normal rental income could be reduced or eliminated even though the developer would still need to pay for expenses such as mortgage payments and operations and maintenance costs. This is less of a problem in privatized family housing because family members normally continue to occupy the housing and pay rent if the servicemember deploys.

Funding for housing allowances. Service officials stated that identifying and shifting funds to pay housing allowances to servicemembers living in privatized barracks could be an administrative problem. This is less of a problem with privatized family housing because military family housing has a separate operations and maintenance budget account.
When a private developer takes over existing military family housing, funds from the family housing operations and maintenance account can be shifted to help pay for housing allowances used to pay rent for the families living in the housing. However, barracks operations and maintenance is not funded by a similar separate account. Instead, barracks operations and maintenance funds are included in each installation’s overall base operating budget. According to service officials, it is more difficult to identify, break out, and shift barracks funding to the personnel accounts to pay housing allowances for a privatized barracks project. With its attention largely concentrated on initiating and managing privatization of military family housing, DOD has provided limited centralized direction and focus to help the services overcome the challenges associated with barracks privatization and proceed with pilot project proposals. Also, in August 1998, 2 years after the military housing privatization legislation was enacted, DOD shifted primary responsibility for implementing the privatization program to the individual services. Since that time, the services have independently studied the barracks privatization concept but have not developed actual project proposals. More recently, the services have given increased attention to exploring barracks privatization, but their efforts continue to be independent and non-coordinated. The status of barracks privatization in each service follows. While no service has yet initiated a barracks privatization project, the Navy and the Marine Corps currently appear to be the most active among the services in examining its potential use. Navy officials stated that they believe barracks privatization offers an opportunity for the Navy to more quickly meet its barracks improvement goals, including the goal of providing barracks space for all junior sailors currently required to live on their ships even while in homeport. In order for barracks privatization to be feasible, Navy officials believed that the Navy needed additional authorities not contained in the Military Housing Privatization Initiative legislation. Specifically, Navy officials believed that existing housing allowance rates provided more money than would be needed to develop a privatized barracks project. The housing allowance rate for unmarried junior members is targeted to cover the costs of a one-bedroom apartment in the civilian community. Yet, the barracks occupancy standard is based on a lesser standard—the modern 1+1 barracks design standard where two members share a module consisting of two small bedrooms with a kitchenette and bath. As a result, Navy officials believe the current housing allowance could provide more money than would be needed to pay rent for a similar design standard in a privatized barracks, and the rental income received by the private-sector developer would be more than is needed to finance the construction and management of the project. To address this situation, the Bob Stump National Defense Authorization Act for Fiscal Year 2003 provided the Navy with specific legislative authority to undertake three pilot projects to privatize barracks. According to Navy officials, the legislation will allow the Navy to pay occupants’ allowances in the amounts needed to provide the rental income to support the privatized barracks projects and will allow junior sailors on ships to be assigned to privatized barracks. 
With this authority, Navy and Marine Corps officials stated that they plan to develop specific proposals for privatization. Candidate installations for barracks privatization include the Naval Station Norfolk, Virginia; the Naval Station San Diego, California; and the Marine Corps Base Camp Pendleton, California. Although remaining challenges, such as those noted above, must be addressed, Navy officials hope that specific proposals will be developed by the end of calendar year 2003. In May 1997, the Air Force issued the results of a barracks privatization feasibility study. The study concluded that privatization was feasible and recommended that the Air Force pursue development of a barracks privatization project at one base to further define the concept. However, the study, which was performed prior to issuance of the budgetary scoring guidelines for privatization projects, stated that occupancy guarantees would be provided in order to facilitate private financing. According to Air Force officials, the study recommendation was not implemented because of the costs associated with occupancy guarantees and the other challenges associated with barracks privatization. More recently, however, the Air Force has again begun to explore the issue. In August 2002, an Air Force team was formed to establish a baseline for an Air Force barracks privatization program including the development of policy and guidance. Air Force officials also stated that Air Force major commands have been asked to identify potential privatization candidates. One potential candidate identified was Elmendorf Air Force Base, Alaska, where a family housing privatization project is already underway. However, officials stated that they do not expect any privatized barracks proposals in the near future and that they planned to monitor the Navy’s progress under its pilot program. Army officials stated that they have explored the concept of barracks privatization but that they have made relatively little progress toward reaching a consensus that the concept should be pursued. They also stated that they were not optimistic that the many challenges facing barracks privatization could be overcome and did not expect any project proposals in the near future. Nevertheless, the Army is continuing to review the issue. For example, in an April 2002 memorandum, the Army Assistant Secretary for Installations and Environment stated that the time was right to pursue the issue and requested the support of the Army’s Training and Doctrine Command and the Army’s Forces Command in formal studies of barracks privatization. At the time of our review, no studies had been completed. In addition, Fort Lewis, Washington, has a barracks privatization study underway that is expected to be completed in 2003. To the extent the services continue to rely on government built and operated barracks on military installations, opportunities exist to reduce costs of constructing those barracks through adoption of residential construction practices. In the past, DOD policies generally required that traditional barracks construction practices use commercial-type construction including use of steel frame, concrete, and cement block. Similar multi-unit residential housing in the private sector, such as apartments, college dormitories, and extended stay hotels, normally use residential construction practices that include the use of wood frame construction. 
Army analyses show that residential construction practices could reduce typical barracks construction costs by 23 percent or more compared with steel frame, concrete, and cement block construction. DOD policies now generally allow use of residential construction practices. However, some barriers still exist to DOD's adoption of these cost-reducing practices as a normal way of doing business, including concern about durability and unanswered questions about the ability of wood-frame barracks to meet all antiterrorism force protection requirements. Concerned with the high construction costs of barracks built to the 1+1 design standard, the Army began to search for savings opportunities and concluded that using residential construction practices to build barracks would cost less than using traditional construction practices. In June 2000, the Army revised its barracks construction guidance to permit Army construction projects to be of any construction type. Subsequently, the Army began a pilot barracks project using residential construction practices at Fort Meade, Maryland. As the Army began building new barracks in accordance with the 1+1 barracks design standard adopted in 1995, Army officials became concerned with the high construction costs of these barracks. To explore reasons for the high costs and opportunities for savings, the Army Corps of Engineers performed a study in 1996 that compared the construction costs of three typical Army 1+1 barracks with the construction costs of a similar private sector multi-unit project—specifically a national brand, all suites, extended stay hotel. After making adjustments to account for differences in geographic location and dates of construction, the Army Corps of Engineers found significant cost differences between the projects as shown in table 1. The extended stay hotel provided each occupant with more amenities and space than the Army barracks, at a construction cost per occupant that was $14,100, or 29 percent, less than the barracks' average construction cost per occupant. The Army Corps of Engineers determined that although many factors accounted for the cost difference between the projects, the primary reason was the type of construction used to build the projects. The barracks were constructed in accordance with Uniform Building Code type I/II (commercial) standards that call for non-combustible construction built from concrete, masonry, and/or steel. The private extended stay hotel was constructed in accordance with Uniform Building Code type V (residential) standards that permit use of any building material allowed by the code, including wood. The Army Corps of Engineers' data showed that if the barracks had been built using residential construction practices instead of traditional barracks construction practices, the Army's average construction cost per occupant would have been about $37,500, a reduction of about $11,200, or 23 percent, per occupant. This study did not address differences in the barracks' total costs—i.e., construction costs and operations and maintenance costs—over their lifetimes. However, subsequent Army analyses indicate that a barrack's total costs over its lifetime would be less if constructed with residential practices because of its lower initial construction costs and comparable operations and maintenance costs for many building components. (See app. III for additional details on the Army's analyses.)
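The percentages above can be checked with a short calculation. The sketch below is illustrative only; the roughly $48,600 average per-occupant barracks cost it derives is implied by the stated differences rather than reported directly, so small rounding differences are expected.

```python
# Illustrative reconstruction of the Army Corps of Engineers cost comparison.
# Only the $14,100 (29 percent) and $11,200 (23 percent) differences are stated
# in the text; the baseline per-occupant barracks cost is derived from them here.

savings_vs_hotel = 14_100       # hotel cost per occupant was $14,100 less...
savings_pct_vs_hotel = 0.29     # ...or about 29 percent less

# Implied average construction cost per occupant for the three Army barracks
barracks_cost = savings_vs_hotel / savings_pct_vs_hotel      # roughly $48,600
hotel_cost = barracks_cost - savings_vs_hotel                # roughly $34,500

# Estimated cost had the barracks used residential construction practices
residential_cost = 37_500
reduction = barracks_cost - residential_cost                 # roughly $11,100
reduction_pct = reduction / barracks_cost                    # roughly 23 percent

print(f"Implied barracks cost per occupant: ${barracks_cost:,.0f}")
print(f"Extended stay hotel cost per occupant: ${hotel_cost:,.0f}")
print(f"Reduction with residential practices: ${reduction:,.0f} ({reduction_pct:.0%})")
```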
A subsequent Army study also concluded that the materials and methods traditionally used to construct government-owned barracks were more costly than the materials and methods normally used to construct similar multi-unit residential buildings in the private sector. In a joint February 2001 report, the Army’s Assistant Chief of Staff for Installation Management and the Army Corps of Engineers concluded that using residential construction practices, similar to the practices used to build apartment buildings, could achieve considerable cost reductions without adversely impacting barracks’ durability or maintainability. The report included an additional example comparing barracks built using traditional construction practices with a residential condominium built using residential construction practices. Specifically, the report cited an Army 1+1 barracks built in fiscal year 2000 at Fort Leavenworth, Kansas. Each two-bedroom, bath, and kitchenette module had 506 square feet and cost $193,000. During this time frame, the construction cost of a 1,500 square foot residential unit with two bedrooms, two baths, full kitchen, living room, laundry room, and balcony in a new private condominium complex in Maryland was $180,000. Although the condominium unit was almost three times larger than the barracks module, it cost $13,000 less. The Army revised its barracks construction guidance in recent years to permit construction projects to be of any construction type, largely in response to its analyses. When building barracks, the Army had been following guidance in Military Handbook 1008C, which provides direction on the design and construction of DOD facilities. The handbook stated that construction of new buildings should be limited to use of traditional barracks construction practices. However, in June 2000, the Army Corps of Engineers issued guidance that authorized Army construction projects to be of any construction type as long as they complied with the Uniform Building Code requirements for the construction type used. Further, in a July 2002 memorandum, the Army Vice Chief of Staff stated that use of less restrictive residential practices in barracks construction would improve soldier quality of life and provide better value to the Army. An enclosure to this memorandum stated that, although Army barracks traditionally have been designed in many cases to exceed industry codes and standards, such an approach is not in the Army’s best economic interests. A 1+1 barracks design project currently under construction at Fort Meade, Maryland, is the Army’s first barracks to be built using residential construction practices. According to Army officials, the project calls for eight new three-story barracks buildings with a total of 576 private sleeping rooms. The project’s initial design assumed use of traditional construction practices. However, on the basis of this design, the Army Corps of Engineers estimated that the project would cost $48 million— about $11 million more than had been approved for the project. In an effort to reduce construction costs, the Army decided to redesign the project using multi-unit residential 1-hour fire resistive construction practices. After the redesign and solicitation process, the project was awarded for about $31 million. 
With the project 83 percent complete in January 2003, the Army Corps of Engineers estimated that the final project cost—including supervision and overhead costs and costs of changes and enhancements to the contracted design—would be about $39 million. In addition, the project’s estimated completion date was about 8 months ahead of the contracted completion date of January 2004. In January 2003, we visited the Fort Meade barracks construction site. Visually, we noted few differences in the appearance of these barracks compared to traditional barracks. Figure 1 shows photographs of the Fort Meade barracks project contrasted with a traditionally constructed barracks at Langley Air Force Base, Virginia. For a comparison with the Fort Meade project, we asked the Army for cost data on two 1+1 barracks projects under construction at Fort Bragg, North Carolina. One project is building 960 rooms using traditional non- combustible construction practices and the other project is building 608 rooms using traditional 1-hour fire resistive construction practices. Compared to the Fort Bragg projects, it appears that use of the residential construction practices in the Fort Meade project will result in considerable cost reductions—from $12,600 to $31,800 per occupant (see table 2). There are barriers to DOD’s widespread adoption of residential construction practices as a normal way of doing business. Because Army studies and the pilot project at Fort Meade indicate the potential to reduce some costs by using residential construction practices, it would seem that the services would be eager to adopt these practices for all future barracks construction projects. However, this has not been the case due to concerns about barracks durability and concerns related to antiterrorism force protection issues. According to Army officials, the services have been reluctant to change construction practices because of the concern that switching to residential construction practices would result in barracks that are less attractive and less durable. However, the officials noted that the exterior appearance of barracks constructed with residential and traditional practices normally would be the same. Also, Army analyses indicate that there is little difference in durability with each type of construction and a barrack’s total costs over its lifetime would be less if constructed with residential practices because of its lower initial construction costs and comparable operations and maintenance costs for many building components. (See app. III for additional details.) Still, the officials stated that the idea of switching construction practices continues to face resistance. Because of this, even the Army had no definite plans, as of February 2003, for additional barracks construction using residential construction practices. The Air Force also had no plans to use residential construction practices for its barracks projects. The Navy, which has completed two barracks projects using residential construction practices, has no additional barracks projects underway or planned using these practices. Another barrier to widespread adoption of residential construction practices for barracks relates to unresolved questions on whether use of these practices would result in barracks that fully complied with new antiterrorism guidance for force protection. 
In July 2002, DOD finalized guidance requiring military components to adhere to common criteria and minimum construction standards to mitigate antiterrorism vulnerabilities and terrorist threats. The standards seek to minimize the likelihood of mass casualties from terrorist attacks against DOD personnel in the buildings where they work and live. As applied to barracks construction, two standards in the antiterrorism force protection guidance are particularly important—standoff distance and prevention of building collapse. Standoff distance refers to the minimum distance that buildings should be situated from roads, parking lots, trash containers, and an installation’s perimeter. According to the guidance, the easiest and least costly way to achieve appropriate levels of protection against terrorist threats is to incorporate sufficient standoff distance into project designs. In situations where the standoff distance standards cannot be achieved because land is unavailable, the guidance calls for building hardening or other techniques to mitigate possible blast effects. According to Army officials, because most barracks projects in the United States could be situated to meet required standoff distances, use of residential construction practices and compliance with this standard would not be a problem in most instances. Navy officials, however, stated that enough land to meet required standoff distances was not available at many of its installations. The DOD standard for preventing building collapse applies to buildings of three or more stories and requires that they be designed with provisions that permit the structure to sustain local damage without the entire building collapsing. According to Army officials, questions remain as to whether barracks built using residential practices would comply with the collapse standard. They stated that the primary issue is lack of engineering data. Most available building collapse information addresses structural systems typical of taller buildings that were not built using residential construction practices. Army officials also stated that complying with the collapse standard using residential barracks construction practices might not be a problem or might be solved with inexpensive adjustments to construction techniques. Designers do not have sufficient data on exactly what, if anything, needs to be done to ensure compliance with the standard when using residential construction practices. At the same time, some Army officials also questioned whether the collapse standard should apply to low-rise three-story barracks buildings. They noted that industry design standards usually make a distinction in structural requirements at four stories and above—not at three stories and above as required by the collapse standard. They further noted that today’s 1+1 barracks design standard provides relatively low occupancy densities that are more similar to family housing which is exempt from the force protection requirements as long as a family housing building contains no more than 12 family units. The services could minimize housing costs by ensuring full use of existing barracks space. Having unused government-owned barracks spaces and paying housing allowances at the same time wastes available resources. Air Force and Army barracks instructions, however, do not require installations to use all vacant space before authorizing housing allowances for junior members to live off base. 
Our review, as well as previous reviews by military service audit groups, found that the lenient barracks utilization guidance, and in some cases noncompliance with the guidance, resulted in installations paying housing allowances when barracks vacancies existed. The services could also reduce costs by identifying and eliminating excess barracks infrastructure if they were to change their barracks occupancy requirements and permit more junior members to live off base. Army instructions allow Army installations to authorize junior members to live off base with a housing allowance when barracks occupancy reaches 95 percent. Air Force instructions require only that 90 percent of an installation's available barracks spaces be used before authorizing junior members to live off base with a housing allowance. Prior to June 1998, the Air Force required 95-percent occupancy. Air Force officials stated that the change was made to facilitate flexibility and to help maintain unit integrity in barracks assignments. To put these instructions in perspective, such policies, if practiced in the private sector, would be the equivalent of the owner of a private apartment complex turning away prospective tenants even though 5 to 10 percent of the apartments were vacant—an action not likely to happen if the owner is concerned about costs and revenues. Further, allowing 5 to 10 percent of barracks spaces to go unused appears contrary to the services' policies requiring that all unmarried junior members live in the barracks as long as space is available. In contrast to Army and Air Force instructions, Navy and Marine Corps instructions state that maximum practical occupancy should be achieved before junior members are authorized to live off base with a housing allowance. The Navy instruction specifically states that barracks utilization should routinely approach 100 percent. In view of the differences in the services' barracks utilization guidance, we attempted to review barracks utilization and payment of housing allowances for unmarried junior members in each of the services. However, our analysis was limited to the Air Force and the Marine Corps because only those services require their installations to collect and centrally report barracks utilization data and the number of members authorized to live off base. The Navy requires barracks utilization reports, but the reports do not include the number of members authorized to live off base. Army officials stated that although utilization data are maintained by each installation, they had eliminated central reporting requirements years ago in order to reduce paperwork costs. With centralized data available only from the Air Force and the Marine Corps, we focused our review on an analysis of that information. The Air Force reported an inventory of about 43,400 adequate permanent party barracks rooms in the United States as of September 30, 2002. Table 3 shows that on September 30, 2002, about 4,700 of these rooms were diverted from normal use for maintenance and other reasons. Of the remaining rooms, about 35,300, or 91 percent, were occupied and about 3,400 rooms, or 9 percent, were vacant. Among major Air Force installations, the occupancy rates for the available barracks rooms ranged from 100 percent at Minot Air Force Base, North Dakota, to 82 percent at Tinker Air Force Base, Oklahoma.
The Air Force also reported that as of September 30, 2002, it had authorized about 24,100 unmarried junior servicemembers in pay grades E1 through E4 to live off base with a housing allowance. We analyzed these data to estimate the housing allowance costs that the Air Force potentially could have avoided if members living off base had been assigned to the vacant barracks rooms. To do this, we compared—on an installation-by-installation basis—the number of junior servicemembers living off base with a housing allowance to the number of barracks vacancies on September 30, 2002. Our analysis showed that the vacant barracks spaces could have accommodated about 2,900 of the junior members who were living off base—suggesting a practice at variance with the Air Force's stated policy of requiring members in pay grades E1 through E4 to live on base in barracks. Had these members been assigned to the barracks, the Air Force potentially could have reduced its annual housing allowance costs by about $20 million. Because the data used in this analysis reflected barracks use on a single date, September 30, 2002, our results represent a snapshot as of that date. Also, because barracks occupancy can change daily, results would have differed if utilization data on another date had been used or if data had been available to show daily utilization over a period of time. Although Air Force instructions require that 90 percent of an installation's available barracks spaces be used before authorizing junior members to live off base, some Air Force installations apparently were not in compliance with this guidance. For example, data for Kirtland Air Force Base, New Mexico, indicated an 85-percent occupancy rate with 105 vacancies and 392 junior members living off base with a housing allowance. Similarly, data for McChord Air Force Base, Washington, indicated an 86-percent occupancy rate with 101 vacancies and 118 junior members living off base with a housing allowance. Air Force officials noted that installation occupancy rates are reported only twice a year and represent a snapshot in time. Thus, to determine whether installations reporting less than 90-percent occupancy were not complying with policy would require a detailed installation-level review of occupancy rates over a period of time and the reasons why members living off base were allowed to do so. Air Force officials also noted that Air Force commands are reminded on a regular basis of the importance of complying with utilization policy and making full use of their barracks. The Marine Corps data as of September 30, 2002, showed that barracks at most Marine Corps installations were fully used. Of the few major installations that reported less than 100-percent utilization, only one also reported unmarried junior enlisted members living off base with a housing allowance. In this instance, however, the installation reported only three junior members with a housing allowance. Previous reports from service audit groups also have noted that noncompliance with existing guidance has resulted in installations paying housing allowances when barracks vacancies existed. For example, in a February 1999 report on barracks management at Langley Air Force Base, Virginia, the Air Force Audit Agency stated that housing managers did not require individual barracks to meet the occupancy goal before authorizing members to live off base. The report also stated that maintaining barracks occupancy rates above the Air Force goal would provide direct savings to the Air Force budget.
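The installation-by-installation comparison described above reduces to a simple calculation: for each installation, count the members who could fill vacant rooms, sum across installations, and multiply by the national average allowance rate. The sketch below applies this to the two installations cited in this section; the annual allowance figure is an assumed placeholder, not the actual national average basic allowance for housing rate used in our analysis.

```python
# Sketch of the installation-by-installation comparison described above.
# Vacancy and off-base counts are those cited for Kirtland and McChord;
# the annual allowance rate is an assumed placeholder, not the actual rate.

installations = {
    # name: (vacant barracks rooms, junior members off base with an allowance)
    "Kirtland AFB": (105, 392),
    "McChord AFB": (101, 118),
}

ASSUMED_ANNUAL_ALLOWANCE = 7_000  # placeholder for the national average rate

potential_moves = sum(
    min(vacancies, off_base)  # members who could fill the vacant rooms
    for vacancies, off_base in installations.values()
)
potential_reduction = potential_moves * ASSUMED_ANNUAL_ALLOWANCE

print(f"Members who could be assigned to vacant rooms: {potential_moves}")
print(f"Potential annual housing allowance reduction: ${potential_reduction:,}")
```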
The Army Audit Agency reported in January 1997 that Fort Benning, Georgia, had authorized members to live off base even though barracks utilization was below the Army goal of 95 percent. The report stated that the unnecessary authorizations were issued because Fort Benning decentralized barracks management to the unit level and did not make sure that each unit fully used its barracks before authorizing members to live off base with a housing allowance. While it is important to make full use of existing barracks space, it is also important that the services maintain an inventory of barracks spaces only in the numbers actually required. In our January 2003 report, we discussed the widely varying standards among the services regarding who should live in barracks and the effect this can have on program costs and quality of life, and we recommended that the services review the rationale behind their barracks occupancy requirements. DOD has not made clear the extent to which it is likely to change its barracks occupancy requirements. However, if the services were to change their barracks occupancy requirements and permit more junior members to live off base with a housing allowance, then the services could reduce housing costs by identifying and eliminating excess barracks infrastructure. To use the Air Force case as an illustration, instead of bringing junior members back on base to fill barracks vacancies, the Air Force could officially decide that many of these members should be allowed to continue to live off base. This decision would reduce barracks needs, and the Air Force could then consider vacant barracks spaces as excess infrastructure that could be eliminated to reduce costs. DOD and the services have not fully explored barracks privatization to determine whether the concept could provide a better economic value to the government than the use of military construction financing. Although the services have separately studied the issues and unique challenges associated with barracks privatization, DOD has largely concentrated on family housing privatization and not on promoting a coordinated, focused effort to address the challenges and develop pilot project proposals to determine the overall feasibility and merits of barracks privatization. Without more coordination of activities to address the challenges associated with barracks privatization, efforts might be duplicated and potential opportunities to optimize lessons learned might be lost. For several reasons, DOD and the military services have not taken advantage of opportunities to potentially reduce their housing costs for unmarried servicemembers through use of residential construction practices in government-owned barracks construction and better utilization of existing government-owned barracks. First, widespread adoption of residential construction practices in building government-owned barracks has been hampered because of concerns about barracks durability and unanswered questions about the ability of wood-frame barracks to meet all antiterrorism force protection requirements. Without engineering studies to resolve these questions and, if appropriate, adoption of residential construction practices, the services could be spending more than is needed on barracks construction.
Second, lenient barracks utilization guidance—which in some cases does not require full use of existing government-owned barracks before authorizing housing allowances for junior members to live off base—and limited enforcement of existing guidance have led to the routine acceptance of less than maximum use of barracks and the payment of housing allowances when vacancies exist. The establishment of and compliance with guidance that requires maximum use of required existing barracks—specifically, utilization that routinely approaches 100 percent before unmarried junior members are authorized housing allowances—could reduce the services' housing costs for junior members. It is also important that the services maintain an inventory of barracks spaces only in the numbers actually required. If the services were to change their barracks occupancy requirements based on their review of the requirements' rationale and permit more junior members to live off base, then they could also reduce costs by identifying and eliminating barracks space that is no longer needed. To capitalize on opportunities for reducing housing costs for unmarried servicemembers, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to:

Promote a coordinated, focused effort among the military services to determine the feasibility and cost-effectiveness of barracks privatization by addressing the associated challenges and facilitating the development of pilot project proposals. This effort should support the Navy's use of the pilot housing privatization authority provided to the Navy in the Fiscal Year 2003 Bob Stump National Defense Authorization Act, with lessons learned applied to the other services' efforts.

Direct the Army Corps of Engineers and the Naval Facilities Engineering Command to jointly undertake an engineering study to resolve questions about use of residential construction practices for barracks and compliance with antiterrorism force protection requirements.

Direct the military services to adopt residential construction practices for future barracks construction projects to the maximum extent practical, providing that the engineering studies show that barracks built with residential construction practices can economically meet all force protection requirements.

Issue guidance directing that the services maximize use of required existing barracks space—defined as utilization that routinely approaches 100 percent—before authorizing unmarried junior members to live off base with a housing allowance.

Direct the military services to identify and eliminate excess barracks infrastructure if, by reviewing the rationale behind their barracks occupancy requirements, they determine that more unmarried junior members should be permitted to live off base with a housing allowance.

In commenting on a draft of this report, the Director, Competitive Sourcing and Privatization, fully agreed with four and partially agreed with one of our recommendations and indicated that actions were underway or planned to deal with most of them. DOD stated that it was supportive of initiatives to energize barracks privatization and planned to build on lessons learned from the Navy's pilot project to encourage barracks privatization.
DOD also stated that it supports the study and use of commercial and residential construction standards and use of the privatization authorities to improve the living conditions for unaccompanied members as quickly as possible. In addition, it stated that the Army Corps of Engineers has already begun a study of residential construction methods and compliance with antiterrorism force protection requirements using the Fort Meade barracks project as a basis for the study. Further, as the first step to maximizing use of existing barracks, programming for new barracks, and divesting of excess infrastructure, DOD stated that the actual need for barracks space must be determined by establishing a common requirements process consistent with individual service missions. DOD partially agreed with our recommendation to issue guidance directing the services to maximize use of required existing barracks space. DOD stated that barracks requirements must first be determined before issuing such guidance. We agree that the services should maintain an inventory of barracks spaces only in the numbers actually required and that, if the services were to reduce their barracks occupancy requirements and permit more junior members to live off base, they could reduce costs by identifying and eliminating barracks space that is no longer needed, as DOD suggests in its comments. However, on the basis of their current barracks occupancy requirements and construction plans, the services have individually determined that most of their existing barracks spaces are needed. Unless stated barracks occupancy requirements are reduced, we believe that these spaces should be fully used before authorizing housing allowances for junior members to live off base and that additional DOD guidance is needed now to help achieve this. To do otherwise results in having unused government-owned barracks spaces and paying housing allowances at the same time, which wastes available resources. DOD's comments are included in appendix IV of this report. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. We are sending copies of this report to the appropriate congressional committees, and it will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this letter, please contact me at (202) 512-8412, or my Assistant Director, Mark Little, at (202) 512-4673. Gary Phillips, Jim Ellis, Sharon Reid, Harry Knobler, and R.K. Wild were major contributors to this report. Our review of DOD's housing program for unmarried servicemembers focused on enlisted members at their permanent assignment locations in the United States—after the members completed recruit and advanced individual training. We interviewed DOD and service headquarters housing officials; reviewed applicable DOD and military service policies and procedures; reviewed barracks improvement plans and milestones; and visited selected installations to view barracks conditions and discuss local management practices.
Specifically, we visited the Naval Station Norfolk, Virginia; Langley Air Force Base, Virginia; Fort Eustis, Virginia; and Marine Corps Base Quantico, Virginia. To examine opportunities for reducing costs through barracks privatization and the barriers to developing barracks privatization project proposals, we examined the laws authorizing and funding the program, reviewed DOD’s experiences with family housing privatization, interviewed DOD and service officials, and reviewed available documentation to identify past efforts and current plans related to barracks privatization. We also discussed privatization plans and challenges with local officials at the installations visited and discussed budget scoring issues for barracks privatization with officials at the Office of Management and Budget. To examine opportunities for reducing costs through adoption of residential construction practices for barracks construction, we reviewed Army studies and analyses in this area. We also obtained and compared selected cost information for barracks constructed using traditional practices and for barracks constructed using residential practices. We did not attempt to validate this cost information. Further, we interviewed service officials to discuss the services’ use of residential construction practices for barracks and to determine the reasons why the concept has not been widely adopted. We also visited Fort Meade, Maryland, to observe construction progress on the Army’s first barracks project that has incorporated residential construction practices. To examine opportunities for reducing costs through better utilization of barracks, we reviewed the services’ policies and instructions related to barracks use, occupancy goals, and justification for authorizing unmarried junior members to live off base with a housing allowance. To determine whether greater use of barracks could reduce housing allowance costs, we obtained and analyzed readily available data showing the number of barracks vacancies and the number of junior servicemembers living off base with a housing allowance on September 30, 2002. To estimate the potential cost reductions, we multiplied the number of members who could have been assigned to the barracks vacancies by the national average basic allowance for housing rate. We also reviewed prior audit reports related to barracks utilization from military service audit organizations. We conducted our review between May 2002 and April 2003 in accordance with generally accepted government auditing standards. The military services are replacing old barracks, where junior members often share a sleeping room with one or two others and share a gang latrine with occupants from several other rooms, with new barracks, where in most cases junior members have a private sleeping room and share a bath and kitchenette with one other member. The Navy’s “homeport ashore” initiative intends to provide barracks spaces on base for junior members who are currently required to live in cramped quarters aboard their ships even when their ships are in homeport. During our visits to installations, we observed a variety of barracks in conditions ranging from outdated to newly constructed. Figure 2 shows photographs of typical old and new style barracks. At the older barracks, we saw cramped living quarters, peeling paint, damaged walls and ceilings, and poor heating, ventilation, and air conditioning systems. On board ship, the space was cramped. 
Some examples of the living quarters and gang latrines in old style barracks and aboard ship are shown in figure 3. In contrast, we observed several newly constructed barracks that provided living quarters using the 1+1 barracks design standard. Some examples of the bedrooms, shared baths, and shared kitchenettes are shown in figure 4. Army analyses show that use of residential construction can reduce typical barracks construction costs by 23 percent or more compared with traditional steel frame, concrete, and cement block construction. Army analyses also indicate that a barrack's total costs over its lifetime—i.e., initial construction costs and annual operations and maintenance costs—would be less if constructed with residential practices. The lower "life-cycle costs" from use of residential construction practices result not only from the lower initial construction costs but also from comparable operations and maintenance costs for many building components regardless of the type of construction practices used—traditional or residential. Use of residential construction practices to build barracks could also reduce renovation costs and result in additional reductions in construction labor costs. Army officials noted that actual differences in barracks operations and maintenance costs are dependent on the particular building designs. In general, however, the officials stated that there should be no significant operations and maintenance cost differences with use of either traditional or residential construction practices in many architectural features, such as exterior and interior finishes, electrical and plumbing systems, doors and hardware, and windows. For other building components, such as roofs and heating, ventilation, and air conditioning systems, operations and maintenance costs could be lower with traditional construction. But, because of the lower initial construction costs, use of residential construction practices for such components could still result in lower costs over the life of the barracks. For example, the roof system for many traditionally constructed barracks consists of metal and concrete that would normally last for the entire life of the barracks. When using residential construction practices, the barracks roof system would normally consist of heavy-duty shingles that would require replacement during the life of the barracks. Yet, Army analyses show that a shingle roof system would have lower life-cycle costs than a metal and concrete roof system because of its lower initial construction costs. Army officials also noted that use of residential construction practices for barracks would result in buildings that could be renovated at lower costs than traditionally constructed barracks. They stated that many military buildings, including barracks, become functionally obsolete in 25 years or less because of changed missions or design standards, such as the change in the barracks design standard in 1995 from multi-person to private sleeping rooms. The costs to renovate and reconfigure a traditionally constructed barracks with masonry interior walls would normally be greater than the costs to renovate and reconfigure a barracks built with residential construction practices using wood frame and sheetrock walls. According to Army officials, use of residential construction practices to build barracks could result in additional reductions in construction labor costs.
Federal statutes, commonly referred to as the Davis-Bacon Act and related legislation, require that workers on most government construction projects be paid according to the prevailing local wage rates as determined by the Department of Labor. However, there are different prevailing local wage rate scales depending on the type of construction being performed. Traditionally, barracks construction has been considered commercial construction and the commercial wage rate scale has been used for these projects. In contrast, military family housing construction has been considered residential construction and the residential wage rate scale has been used for these projects. According to Army officials, the residential wage rate scale is normally 5 to 30 percent less than the commercial wage rate scale. Thus, using residential construction practices in a low-rise (three stories or less) barracks construction project and application of the residential, instead of commercial, wage rate scale, could result in additional reductions in barracks construction costs.
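A minimal sketch of the life-cycle comparison described in this appendix, for example a shingle roof that is replaced periodically versus a metal and concrete roof that lasts the life of the building, is shown below. All dollar figures, replacement intervals, the planning horizon, and the discount rate are hypothetical assumptions chosen only to illustrate how a lower initial cost can outweigh periodic replacement costs; they are not Army figures.

```python
# Hypothetical life-cycle cost comparison of two roof systems, illustrating
# the trade-off described above. All figures are assumptions for illustration.

def present_value(amount: float, year: int, rate: float = 0.03) -> float:
    """Discount a future cost back to today's dollars."""
    return amount / (1 + rate) ** year

def life_cycle_cost(initial: float, replacement: float,
                    replacement_interval: int, horizon: int = 50) -> float:
    """Initial cost plus discounted replacement costs over the planning horizon."""
    cost = initial
    year = replacement_interval
    while year < horizon:
        cost += present_value(replacement, year)
        year += replacement_interval
    return cost

# Assumed costs: the shingle roof is cheaper up front but replaced every
# 20 years; the metal and concrete roof costs more initially but lasts the
# full 50-year horizon with no replacement.
shingle = life_cycle_cost(initial=150_000, replacement=120_000, replacement_interval=20)
metal_concrete = life_cycle_cost(initial=400_000, replacement=0, replacement_interval=50)

print(f"Shingle roof life-cycle cost: ${shingle:,.0f}")
print(f"Metal and concrete roof life-cycle cost: ${metal_concrete:,.0f}")
```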
Each year, the Department of Defense (DOD) spends billions of dollars to house unmarried junior enlisted servicemembers, primarily in military barracks. Over the next several years, the Army, Navy, and Air Force plan to spend about $6 billion to eliminate barracks with multi-person bathroom facilities and provide private sleeping rooms for all permanent party members. Given the cost of the program, GAO looked at (1) the status of efforts to examine the potential for private sector financing, ownership, operation, and maintenance of military barracks; (2) the opportunity to reduce the construction costs of barracks through widespread use of residential construction practices; and (3) whether opportunities exist to make better use of existing barracks. GAO found three areas where DOD could potentially reduce costs in its unmarried servicemember housing program. DOD and the services have not determined whether "privatization," or private sector financing, ownership, operation, and maintenance of military barracks is feasible and cost-effective. Barracks privatization involves a number of unique challenges ranging from the funding of privatization contracts to the location of privatized barracks. Recently, each service has independently given increased attention to developing privatization proposals. A collaborative, rather than independent, approach could minimize duplication and optimize lessons learned. DOD could reduce the construction costs of government-owned barracks through the widespread use of residential construction practices rather than traditional steel frame, concrete, and cement block. The Army estimated that residential type construction could reduce barracks construction costs by 23 percent or more. However, concerns about barracks durability and unanswered engineering questions have prevented widespread use of these practices. DOD's full use of required existing barracks space could reduce the cost of housing allowances paid to unmarried junior members to live off base in local communities. GAO found that the services have authorized housing allowances for unmarried members to live off base even when existing barracks space was available. This occurred because of lenient barracks utilization guidance, which in some cases does not require full use of existing barracks, and possible noncompliance with guidance. The Air Force could have potentially reduced annual housing allowances by about $20 million in fiscal year 2002 by fully using available barracks space.
Since the early 1990s, the explosion in computer interconnectivity, most notably growth in the use of the Internet, has revolutionized the way organizations conduct business, making communications faster and access to data easier. However, this widespread interconnectivity has increased the risks to computer systems and, more importantly, to the critical operations and infrastructures that these systems support, such as telecommunications, power distribution, national defense, and essential government services. Malicious attacks, in particular, are a growing concern. The National Security Agency has determined that foreign governments already have or are developing computer attack capabilities, and that potential adversaries are developing a body of knowledge about U.S. systems and methods to attack them. In addition, reported incidents have increased dramatically in recent years. Accordingly, there is a growing risk that terrorists or hostile foreign states could severely damage or disrupt national defense or vital public operations through computer-based attacks on the nation’s critical infrastructures. Since 1997, in reports to the Congress, we have designated information security as a governmentwide high-risk area. Our most recent report in this regard, issued in January, noted that, while efforts to address the problem have gained momentum, federal assets and operations continued to be highly vulnerable to computer-based attacks. To develop a strategy to reduce such risks, in 1996, the President established a Commission on Critical Infrastructure Protection. In October 1997, the commission issued its report, stating that a comprehensive effort was needed, including “a system of surveillance, assessment, early warning, and response mechanisms to mitigate the potential for cyber threats.” The report said that the Federal Bureau of Investigation (FBI) had already begun to develop warning and threat analysis capabilities and urged it to continue these efforts. In addition, the report noted that the FBI could serve as the preliminary national warning center for infrastructure attacks and provide law enforcement, intelligence, and other information needed to ensure the highest quality analysis possible. In May 1998, Presidential Decision Directive (PDD) 63 was issued in response to the commission’s report. The directive called for a range of actions intended to improve federal agency security programs, establish a partnership between the government and the private sector, and improve the nation’s ability to detect and respond to serious computer-based attacks. The directive established a National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism under the Assistant to the President for National Security Affairs. Further, the directive designated lead agencies to work with private-sector entities in each of eight industry sectors and five special functions. For example, the Department of the Treasury is responsible for working with the banking and finance sector, and the Department of Energy is responsible for working with the electric power industry. PDD 63 also authorized the FBI to expand its National Infrastructure Protection Center (NIPC), which had originally been established in February 1998.
The directive specifically assigned the NIPC, within the FBI, responsibility for providing comprehensive analyses on threats, vulnerabilities, and attacks; issuing timely warnings on threats and attacks; facilitating and coordinating the government’s response to cyber incidents; providing law enforcement investigation and response; monitoring reconstitution of minimum required capabilities after an infrastructure attack; and promoting outreach and information sharing. PDD 63 assigns the NIPC responsibility for developing analytical capabilities to provide comprehensive information on changes in threat conditions and newly identified system vulnerabilities as well as timely warnings of potential and actual attacks. This responsibility requires obtaining and analyzing intelligence, law enforcement, and other information to identify patterns that may signal that an attack is underway or imminent. Since its establishment in 1998, the NIPC has issued a variety of analytical products, most of which have been tactical analyses pertaining to individual incidents. These analyses have included (1) situation reports related to law enforcement investigations, including denial-of-service attacks that affected numerous Internet-based entities, such as eBay and Yahoo and (2) analytical support of a counterintelligence investigation. In addition, the NIPC has issued a variety of publications, most of which were compilations of information previously reported by others with some NIPC analysis. Strategic analysis to determine the potential broader implications of individual incidents has been limited. Such analysis looks beyond one specific incident to consider a broader set of incidents or implications that may indicate a potential threat of national importance. Identifying such threats assists in proactively managing risk, including evaluating the risks associated with possible future incidents and effectively mitigating the impact of such incidents. Three factors have hindered the NIPC’s ability to develop strategic analytical capabilities. First, there is no generally accepted methodology for analyzing strategic cyber-based threats. For example, there is no standard terminology, no standard set of factors to consider, and no established thresholds for determining the sophistication of attack techniques. According to officials in the intelligence and national security community, developing such a methodology would require an intense interagency effort and dedication of resources. Second, the NIPC has sustained prolonged leadership vacancies and does not have adequate staff expertise, in part because other federal agencies had not provided the originally anticipated number of detailees. For example, as of the close of our review in February, the position of Chief of the Analysis and Warning Section, which was to be filled by the Central Intelligence Agency, had been vacant for about half of the NIPC’s 3-year existence. In addition, the NIPC had been operating with only 13 of the 24 analysts that NIPC officials estimate are needed to develop analytical capabilities. Third, the NIPC did not have industry-specific data on factors such as critical system components, known vulnerabilities, and interdependencies. Under PDD 63, such information is to be developed for each of eight industry segments by industry representatives and the designated federal lead agencies. 
However, at the close of our work in February, only three industry assessments had been partially completed, and none had been provided to the NIPC. To provide a warning capability, the NIPC established a Watch and Warning Unit that monitors the Internet and other media 24 hours a day to identify reports of computer-based attacks. As of February, the unit had issued 81 warnings and related products since 1998, many of which were posted on the NIPC’s Internet web site. While some warnings were issued in time to avert damage, most of the warnings, especially those related to viruses, pertained to attacks underway. The NIPC’s ability to issue warnings promptly is impeded because of (1) a lack of a comprehensive governmentwide or nationwide framework for promptly obtaining and analyzing information on imminent attacks, (2) a shortage of skilled staff, (3) the need to ensure that the NIPC does not raise undue alarm for insignificant incidents, and (4) the need to ensure that sensitive information is protected, especially when such information pertains to law enforcement investigations underway. However, I want to emphasize a more fundamental impediment. Specifically, evaluating the NIPC’s progress in developing analysis and warning capabilities is difficult because the federal government’s strategy and related plans for protecting the nation’s critical infrastructures from computer-based attacks, including the NIPC’s role, are still evolving. The entities involved in the government’s critical infrastructure protection efforts do not share a common interpretation of the NIPC’s roles and responsibilities. Further, the relationships between the NIPC, the FBI, and the National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism at the National Security Council are unclear regarding who has direct authority for setting NIPC priorities and procedures and providing NIPC oversight. In addition, the NIPC’s own plans for further developing its analytical and warning capabilities are fragmented and incomplete. As a result, there are no specific priorities, milestones, or program performance measures to guide NIPC actions or provide a basis for evaluating its progress. The administration is currently reviewing the federal strategy for critical infrastructure protection that was originally outlined in PDD 63, including provisions related to developing analytical and warning capabilities that are currently assigned to the NIPC. Most recently, on May 9, the White House issued a statement saying that it was working with federal agencies and private industry to prepare a new version of a “national plan for cyberspace security and critical infrastructure protection” and reviewing how the government is organized to deal with information security issues. Our report recommends that, as the administration proceeds, the Assistant to the President for National Security Affairs, in coordination with pertinent executive agencies, establish a capability for strategic analysis of computer-based threats, including developing related methodology, acquiring staff expertise, and obtaining infrastructure data; require development of a comprehensive data collection and analysis framework and ensure that national watch and warning operations for computer-based attacks are supported by sufficient staff and resources; and clearly define the role of the NIPC in relation to other government and private-sector entities. 
PDD 63 directed the NIPC to provide the principal means of facilitating and coordinating the federal government’s response to computer-based incidents. In response, the NIPC has undertaken efforts in two major areas: providing coordination and technical support to FBI investigations and establishing crisis management capabilities. First, the NIPC has provided valuable coordination and technical support to FBI field offices, which have established special squads and teams, as well as one regional task force, to address the growing number of computer crime cases. The NIPC has supported these investigative efforts by (1) coordinating investigations among FBI field offices, thereby bringing a national perspective to individual cases, (2) providing technical support in the form of analyses, expert assistance for interviews, and tools for analyzing and mitigating computer-based attacks, and (3) providing administrative support to NIPC field agents. For example, the NIPC produced over 250 written technical reports during 1999 and 2000, developed analytical tools to assist in investigating and mitigating computer-based attacks, and managed the procurement and installation of hardware and software tools for the NIPC field squads and teams. While this support has benefited investigative efforts, FBI and NIPC officials told us that increased computer capacity and data transmission capabilities would improve their ability to promptly analyze the extremely large amounts of data that are associated with some cases. In addition, FBI field offices are not yet providing the NIPC with the comprehensive information that NIPC officials say is needed to facilitate prompt identification and response to cyber incidents. According to field office officials, some information on unusual or suspicious computer-based activity has not been reported because it did not merit opening a case and was deemed to be insignificant. The NIPC has established new performance measures related to reporting to address this problem. Second, the NIPC has developed crisis management capabilities to support a multiagency response to the most serious incidents from the FBI’s Washington, D.C., Strategic Information Operations Center. Since 1998, seven crisis action teams have been activated to address potentially serious incidents and events, such as the Melissa virus in 1999 and the days surrounding the transition to the year 2000, and related procedures have been formalized. In addition, the NIPC has coordinated development of an emergency law enforcement plan to guide the response of federal, state, and local entities. To help ensure an adequate response to the growing number of computer crimes, we are recommending that the Attorney General, the FBI Director, and the NIPC Director take steps to (1) ensure that the NIPC has access to needed computer and communications resources and (2) monitor implementation of new performance measures to ensure that field offices fully report information on potential computer crimes to the NIPC. Information sharing and coordination among private-sector and government organizations are essential to thoroughly understanding cyber threats and quickly identifying and mitigating attacks. However, as we testified in July 2000, establishing the trusted relationships and information-sharing protocols necessary to support such coordination can be difficult. NIPC efforts in this area have met with mixed success.
For example, the InfraGard Program, which provides the FBI and the NIPC with a means of securely sharing information with individual companies, has gained participants. In January 2001, NIPC officials announced that 518 organizations had enrolled in the program, which NIPC officials view as an important element in building trust relationships with the private sector. However, of the four information sharing and analysis centers that had been established as focal points for infrastructure sectors, a two-way, information-sharing partnership with the NIPC had developed with only one—the electric power industry. The NIPC’s dealings with two of the other three centers primarily consisted of providing information to the centers without receiving any in return, and no procedures had been developed for more interactive information sharing. The NIPC’s information-sharing relationship with the fourth center was not covered by our review because the center was not established until mid-January 2001, shortly before the close of our work. Similarly, the NIPC and the FBI had made only limited progress in developing a database of the most important components of the nation’s critical infrastructures—an effort referred to as the Key Asset Initiative. While FBI field offices had identified over 5,000 key assets, the entities that own or control the assets generally had not been involved in identifying them. As a result, the key assets recorded may not be the ones that infrastructure owners consider to be the most important. Further, the Key Asset Initiative was not being coordinated with other similar federal efforts at the Departments of Defense and Commerce. In addition, the NIPC and other government entities had not developed fully productive information-sharing and cooperative relationships. For example, federal agencies have not routinely reported incident information to the NIPC, at least in part because guidance provided by the federal Chief Information Officers Council, which is chaired by the Office of Management and Budget, directs agencies to report such information to the General Services Administration’s Federal Computer Incident Response Capability. Further, NIPC and Defense officials agreed that their information-sharing procedures need improvement, noting that protocols for reciprocal exchanges of information had not been established. In addition, the expertise of the U.S. Secret Service regarding computer crime had not been integrated into NIPC efforts. The NIPC has been more successful in providing training on investigating computer crime to government entities, which is an effort that it considers an important component of its outreach efforts. From 1998 through 2000, the NIPC trained about 300 individuals from federal, state, local, and international entities other than the FBI. In addition, the NIPC has advised five foreign governments that are establishing centers similar to the NIPC.
To better protect the nation's critical computer-dependent infrastructures from computer-based attacks and disruption, the President issued a directive in 1998 that established the National Infrastructure Protection Center as a national focal point for gathering information on threats and facilitating the federal government's response to computer-based incidents. This testimony discusses the center's progress in (1) developing national capabilities for analyzing cyber threat and vulnerability data and issuing warnings, (2) enhancing its capabilities for responding to cyber attacks, and (3) developing outreach and information-sharing initiatives with government and private-sector entities. GAO found that although the center has taken some steps to develop analysis and warning capabilities, the strategic capabilities described in the presidential directive have not been achieved. By coordinating investigations and providing technical assistance, the center has provided important support that has improved the Federal Bureau of Investigation's ability to investigate computer crimes. The center has also developed crisis management procedures and drafted an emergency law enforcement sector plan, which is now being reviewed by sector members. The center's information-sharing relationships are still evolving and will probably have limited effectiveness until reporting procedures and thresholds are defined and trust relationships are established. This testimony summarized an April 2001 report (GAO-01-323).
Since its inception, TSA has been focused on meeting an urgent mandate to deploy more than 55,000 airport passenger and baggage screening personnel and equipment to secure the nation’s airways. To do so, it created basic organizational and acquisition infrastructures. To date, however, TSA has not developed an acquisition infrastructure that facilitates successful management and execution of acquisition activities, helps ensure that the agency acquires quality goods and services at reasonable prices, and supports informed decisions about acquisition strategy. Specifically, our review of TSA’s acquisition function and inspector general reports identified a number of challenges in each of the four areas we assessed. Organizational alignment and leadership: TSA’s Office of Acquisition is at an organizational level too low to oversee the acquisition process, coordinate acquisition activities, and enforce acquisition policies effectively. The position of the office hinders its ability to help ensure that TSA follows the acquisition processes that enable the agency to get the best value on goods and services. Senior acquisition officials told us that the Office of Acquisition is not appropriately placed within TSA; however, TSA has not elevated the office to a position that would enable it to coordinate agencywide acquisition activities or enforce acquisition policies. Policies and processes: Because TSA’s acquisition policies and processes emphasize personal accountability, good judgment, justifiable business decisions, and integrated acquisition teams, effective implementation of TSA’s policies and processes depends on clear communication, measures to evaluate performance, and incentives to reward good acquisition practices. Effective implementation of TSA’s policies and processes has been hindered, however, by several factors: (1) TSA has not effectively communicated its acquisition policies throughout the agency; (2) TSA lacks internal controls to identify and address implementation issues and performance measures to determine whether TSA’s acquisition policies achieve desired outcomes; and (3) TSA’s deadline-driven culture fails to reinforce the importance of complying with policies. Human capital: TSA risks an imbalance in the size and capabilities of its acquisition workforce that could diminish the performance of the acquisition function throughout the agency. TSA’s Office of Acquisition worked closely with the Department of Homeland Security to develop and begin implementing an acquisition workforce plan. However, TSA’s Human Resource Office, which is responsible for recruiting and hiring the acquisition workforce agencywide, did not participate in developing the acquisition workforce plan. Without input from the Human Resources Office, it is not clear that the workforce plan can be effectively implemented throughout the agency. In addition, the Office of Acquisition reports that it is having difficulty attracting, developing, and retaining a workforce with the acquisition knowledge and skills required to accomplish TSA’s mission. Knowledge and information management: While TSA is participating in the Department of Homeland Security’s efforts to develop functional requirements for an enterprisewide solution that supports the department’s resource management functions—including procurement and finance—TSA does not currently have the strategic information needed to support effective acquisition management decisions. 
To manage on a day-to-day basis, program and acquisition managers are relying on data derived from informal, ad hoc systems—which are often out-of-date, incomplete, inaccurate, or otherwise unreliable. TSA is in the process of adopting the Coast Guard’s procurement and financial systems as interim solutions until the Department of Homeland Security implements departmentwide systems. However, near-term improvements to TSA’s acquisition outcomes will be difficult until TSA has critical financial and procurement information systems that allow decision makers to track spending, manage budgets, and collect detailed data on goods and services, suppliers, and spending patterns. We are making four recommendations to the Secretary of Homeland Security to help improve TSA’s acquisition capabilities by elevating the placement of TSA’s Office of Acquisition, developing adequate internal controls and performance measures, addressing the needs of the acquisition workforce, and assessing proposed knowledge management systems. In written comments on a draft of this report, the Department of Homeland Security generally concurred with our report and recommendations. They also provided additional information about various initiatives related to our recommendations. Two months after the September 11, 2001, terrorist attacks, the President signed the Aviation and Transportation Security Act, establishing TSA as a new administration within the Department of Transportation responsible for securing the nation’s transportation systems. In February 2002, TSA assumed responsibility for aviation screening, and by November 2002, the agency had deployed a federal security screener workforce in the nation’s 429 commercial airports. In March 2003, TSA, along with 21 other agencies, began transferring to the Department of Homeland Security. Figure 1 shows the timeline of TSA’s brief history. When TSA was established in November 2001, the agency had no personnel, no organizational structure, no policies and processes, and no legacy systems. To begin operating, TSA adopted some of the Department of Transportation’s infrastructure components, such as the financial management system, and developed others in-house, such as a procurement tracking system. In addition, the Aviation and Transportation Security Act directed TSA to adopt the Federal Aviation Administration’s (FAA) Acquisition Management System, which establishes policy, processes, and guidance for all aspects of the acquisition life cycle. The act gives TSA’s administrator the latitude to make modifications to FAA’s system as appropriate. Because FAA is, by law, generally exempt from federal acquisition laws as well as the Federal Acquisition Regulation (FAR), and TSA was directed to adopt FAA’s system, TSA is also exempt from these requirements. TSA has relied on contractors to accomplish much of its mission. In fiscal year 2002, TSA obligated more than $3.7 billion for goods and services procured under contracts awarded by TSA, FAA, and the Department of Transportation. TSA currently has contracts to manage human resource needs, including recruiting, hiring, training, and outfitting passenger and baggage screeners; develop and manufacture screening equipment; and provide the information technology systems the agency uses to manage its day-to-day operations. These contracts represent about 48 percent of TSA’s fiscal year 2003 budget. 
TSA’s large expenditures on goods and services have prompted reviews by the Inspectors General of the Departments of Transportation and Homeland Security, who found a lack of contractor oversight and significant cost overruns. For example, in January 2004, the Department of Homeland Security Inspector General reported that inadequate contractor oversight contributed to TSA’s lack of timely background checks on screeners at airports. Specifically, inadequate oversight of contractors contributed to more than 500 boxes of background check documentation remaining unprocessed for months. The Department of Transportation’s Inspector General also found that inadequate oversight and contracts without clearly defined deliverables caused the cost of TSA’s initial contracts to balloon. For example, TSA’s initial human resource contract to recruit, screen, hire, and train screeners grew from $100 million to $700 million within 1 year. In late 2003, the Department of Homeland Security’s Inspector General cited integrating the procurement functions of the department’s component organizations as a major management challenge, adding that some of the procurement functions lacked important management controls. Despite TSA’s efforts to resolve initial problems, the Inspector General cited TSA as an example of an agency lacking procurement management controls. In a 2004 update to the previous year’s report on management challenges, the Inspector General’s Office reported that TSA was taking steps to address weaknesses in contract oversight, such as increasing the size of its contract management staff, devising policies and procedures that require adequate procurement planning, and arranging for the Defense Contract Audit Agency to perform over 130 contract audits and support contract administration. The appropriate placement of the acquisition function within an agency can facilitate efficient and effective management of acquisition activities. In our work on best practices, we learned that leading companies elevated or expanded the role of the company’s acquisition organization; designated commodity managers to oversee key services; and made extensive use of cross-functional teams to help identify the company’s service needs, conduct market research, evaluate and select providers, and manage performance. To cut across traditional organizational boundaries that contributed to a fragmented approach to acquiring services, these companies generally restructured their procurement organizations, typically assigning them greater responsibility and authority for strategic planning, management, and oversight of the companies’ service spending. In making such changes, the companies acknowledged that acquisition is an important strategic function and that success in this area contributes to the accomplishment of company missions. These changes transformed the role of the purchasing unit from one focused on mission support to one that was strategically important to the company’s bottom line. Recent legislation recognizes the importance of placing the acquisition function at an appropriate level and mandates that most executive departments appoint a chief acquisition officer. 
This official will have the responsibility to monitor the performance of acquisition activities and programs; evaluate the performance of those programs; increase the use of full and open competition; increase the use of performance-based contracting; establish clear lines of authority, accountability, and responsibility for acquisition decision making; manage the direction of the agency’s acquisition policies; advise the head of the executive agency regarding the appropriate business strategy to achieve the mission of the executive agency; and develop and maintain an acquisition career management program to ensure that there is an adequate acquisition workforce. TSA’s Office of Acquisition is at an organizational level too low to oversee the acquisition process, coordinate acquisition activities, and enforce acquisition policies effectively. As shown in figure 2, the Office of Acquisition is at a lower level than other key offices involved in the acquisition process. Its current position within the organizational structure essentially relegates acquisition to the status of one of many administrative functions. The placement of the Office of Acquisition hinders the ability of the office to oversee the acquisition process and to coordinate with other offices involved in that process—responsibilities that are particularly critical given that almost half of TSA’s budget is spent on acquisitions. This issue was also noted by senior acquisition officials, who told us that the Office of Acquisition is not appropriately placed within TSA. Further, the Chief Support Systems Officer said that adjustments to the placement of the office might be worthy of consideration. From its current position, the office has not been able to coordinate the agency’s acquisition activities or enforce acquisition policies throughout the agency. This has resulted in certain inefficiencies, as the following examples demonstrate: Senior acquisition officials told us that some program offices within TSA bypass the Office of Acquisition at key points in the acquisition process or fail to consult the contracting officer early enough in the acquisition process—typically without consequence. For example, program offices have submitted purchase requests without allowing adequate time for planning, requiring the contracting officers to spend additional time on tasks such as rewriting the requirements or performing additional market research to ensure that the goods or services purchased satisfy the program offices’ needs. Had the program offices consulted with the Office of Acquisition earlier in the process, much of the additional work could have been avoided. Different support offices involved in managing contracts for the airport screening function did not coordinate when contracting for airport screening equipment because, according to a senior operations official, TSA failed to ensure effective communication between the support offices. As a result, sufficient personnel were not available to operate the additional equipment once it was installed. Reports from the Inspectors General of the Department of Homeland Security and the Department of Transportation also noted that TSA’s program offices failed to plan for and coordinate contractor oversight. While TSA’s Office of Acquisition now requires program offices to have oversight plans in place before major contracts are awarded, the office does not have the means to ensure that oversight plans are actually implemented. 
The lack of recognition of the importance of the acquisition function was also evident at a senior level of the organization. For example, an acquisition official told us that a representative from the Office of Acquisition was not initially consulted when senior TSA officials were developing strategies for responding to questions following congressional testimony on TSA’s acquisition problems. Implementing strategic acquisition decisions to achieve agencywide outcomes requires clear, transparent, and consistent policies and processes that govern the planning, award, administration, and oversight of acquisitions. Agency policies and processes, along with their rationale, need to be clearly communicated to all involved in the acquisition function. In addition, appropriate actions are needed to ensure acquisition personnel understand the organization’s acquisition policies and processes, as well as their roles and responsibilities in adhering to them. Appropriate internal controls to ensure that acquisition personnel follow policies and processes, and performance measures to assess the effectiveness of policies and processes in achieving desired outcomes, also are needed. In our work on best practices, we learned that maintaining clear lines of communication among all organizations involved in the acquisition function, and using performance measures to evaluate acquisition processes, were critical to successfully implementing strategic approaches to acquisition. Leading companies also found that the use of metrics increased the likelihood that acquisition processes would be successfully implemented. Metrics can be used to assess an organization’s current performance level, identify the critical processes that require focused management attention, obtain the knowledge needed to set realistic goals for improvement, and document results over time. Since TSA was established, it has issued management directives, policy letters, and guidance on its acquisition policies and processes. These have been based on FAA policy and guidance for acquiring goods and services and on Department of Homeland Security directives. TSA’s policies and processes emphasize personal accountability, good judgment, justifiable business decisions, and integrated acquisition teams. However, the following examples indicate that several of these policies have not been effectively implemented. Despite acquisition policies regarding contractor oversight, the Department of Homeland Security’s Inspector General reported that a lack of adequate oversight on TSA’s early contracts resulted in airline passenger screeners being allowed to begin work without completing a criminal history records check or to continue to work with adverse background checks. Some screeners who had been hired failed background checks, were determined to be ineligible, and were subsequently fired. TSA also failed to implement policies and processes intended to ensure coordination of acquisition activities. TSA guidelines call for integrated product teams—which may include representatives from program, technical, finance, contracting, and legal offices—to coordinate key acquisition activities and to work together to make decisions throughout the acquisition process. According to TSA acquisition officials, however, such teams are often not formed, and there is currently no formal process for doing so. 
Without such teams, TSA risks having acquisition activities that are not well coordinated and key decisions that fail to take into account all essential considerations. TSA’s acquisition policies and processes have not been effectively communicated throughout the agency. The Office of Acquisition has implemented training initiatives in an attempt to educate key staff about their responsibilities, but officials across the agency told us that they or their staffs are unclear about their roles and responsibilities in the acquisition process. Because TSA’s personnel were hired from other agencies and from the private sector, which may have defined their roles and responsibilities differently, it is critical that personnel throughout TSA have a clear understanding of the agency’s acquisition policies and processes. Personnel must be properly trained in applying the flexibilities inherent in TSA’s acquisition policies to ensure fair and open competition and effective procurement practices. TSA lacks performance measures to determine whether its acquisition policies are achieving desired outcomes. TSA also lacks internal controls to identify and address implementation issues. Each office within TSA is responsible for developing its own performance measures. TSA’s Office of Acquisition has measures to track the number of contracts awarded and the amount of the awards, but does not have measures to assess how well personnel carry out acquisition activities, such as oversight. For example, TSA does not track the number of contracts awarded that include incentives for performance, such as performance-based contracts or contracts with fees based on different levels of performance. The Office of Acquisition is currently considering the use of customer satisfaction measures used by the U.S. Navy and the Department of Transportation to determine whether they would be suitable for measuring the office’s performance. TSA’s template for individual performance agreements attempts to tie individual goals to organizational goals. However, the template has no acquisition-specific goals. Supervisors may add acquisition-specific goals for individuals. Without such goals it may be difficult to hold individuals accountable for performance on acquisition activities. TSA’s deadline-driven culture fails to reinforce the importance of compliance with policies. TSA officials acknowledged, and Inspector General reports and testimony confirmed, that TSA initially sacrificed cost concerns and disciplined acquisition practices in order to meet schedules. As a result, TSA created a culture that prioritized meeting deadlines at the expense of other acquisition goals. TSA’s initial deadlines for deploying its screener workforce were met. However, senior officials from multiple TSA offices told us that the agency has maintained its sense of urgency, and that program offices still expect acquisition functions to be accomplished quickly, even if the appropriate acquisition practices are not always followed. Our review of 21 contract files showed that TSA did not always use practices that help to ensure quality and cost efficiency. While some of the problems we identified in early contracts stemmed from shortcuts taken to meet urgent deadlines, the persistence of such problems suggests that TSA did not consistently follow disciplined acquisition practices. 
TSA’s policies require that the agency perform quality assurance, but several contract files we reviewed contained little evidence that contract oversight or quality assurance was performed. For example, the contract files for background investigations, baggage screening, and engineering and technical services contained no evidence of oversight or quality assurance plans. TSA’s policies also require that performance metrics be identified in the requirements of complex contracts. Our review of the contract files found that TSA failed to develop performance metrics for the contractor prior to award. Instead, TSA often asked the contractor to develop these plans or metrics after award. Because of problems with inadequate contractor oversight, however, TSA now requires that an oversight plan be in place prior to award. Several of the contract files we reviewed were cost-reimbursement or time and materials contracts and did not contain evidence of government surveillance to ensure cost efficiency. Of the 21 contracts we reviewed, 6 were awarded on a fixed-price basis, 9 were awarded on a cost-reimbursable or time and materials basis, and 6 were a combination. Cost-reimbursement and time and materials contracts are generally only suitable when appropriate government surveillance during performance will provide reasonable assurance that efficient methods and effective cost controls are used. Frequent use of cost-reimbursement and time and materials contracts, coupled with the Inspectors General’s findings that TSA failed to monitor its contracts, diminishes the assurance that efficient methods and cost controls are being used. TSA generally used single-source contracts judiciously for the contract files we reviewed. TSA’s policies, which encourage competition as the preferred method of contracting, state that use of single-source contracts is permitted when necessary to accomplish TSA’s mission and merely require that a rational basis for the decision be documented. Of the 21 contract files we reviewed, 3 were for single-source contracts. The remaining contracts were awarded using a variety of procurement methods, including use of existing government contracts, the Federal Supply Schedule, required government supply sources, and competitive procedures. In the 3 single-source contract awards we reviewed, the contract files contained justifications for using such noncompetitive procedures as required by TSA’s policies. The justifications for awarding contracts on a single-source basis varied. In one case, for example, the agency identified a need for additional office furniture to be integrated with furniture systems already installed at the work site. The justification stated that the furniture components were not interchangeable between manufacturers and that it would be more costly to hire a different contractor to perform follow-on work. (See appendix IV for a description of the other contracts we reviewed.) A strategic human capital management approach enables an agency to recruit, develop, and retain the right number of personnel with the right skills to accomplish its mission effectively. Through our work on human capital management, we have found that high-performing organizations identify their current and future human capital needs and then develop acquisition workforce plans containing strategies—such as targeted investments in employees or recruiting and retention bonuses—to meet these needs. 
These plans enable the organization to address the critical skills and competencies needed to achieve results. Strategic human capital approaches need sufficient resources. Senior managers should devote adequate resources to recruiting, hiring, developing, rewarding, and retaining talented personnel. Succession planning is also needed to ensure that the workforce is composed of the right number of personnel with the necessary skills and qualifications to perform the acquisition function into the future. Changes in the required skill sets of the acquisition workforce, coupled with the prospect of a decline in experienced acquisition personnel throughout the government, make the need for acquisition workforce planning more significant. Industry and government experts alike recognize that having the right people with the right skills is key to making a successful transformation toward a more effective acquisition environment. Over the last decade, the emergence of several procurement trends, including a rise in services contracting, has created a need for acquisition workers with a much greater knowledge of market conditions, industry trends, and the technical details of the commodities and services they procure. TSA risks an imbalance in its acquisition workforce that could diminish the performance of the acquisition function throughout the agency. With almost half of TSA’s fiscal year 2003 budget devoted to acquisition, a qualified and trained workforce is critical to ensuring the efficiency of TSA’s acquisition activities. The Department of Homeland Security has developed a departmentwide acquisition workforce plan, and TSA began implementing it in February 2004. The plan focuses on formalizing competencies and skill sets; establishing certification standards; and identifying training requirements, first for contracting specialists and then for acquisition professionals in other key career fields—including program management, financial management, engineering, and information technology. Subsequent phases of the plan include establishing career paths, targeting positions for recruitment, establishing mentoring programs, and creating a strategy for succession planning. TSA’s Office of Acquisition contributed significantly to the Department of Homeland Security’s acquisition workforce plan. However, TSA’s Office of Human Resources, which is responsible for recruiting and hiring and for succession planning throughout TSA—including the acquisition workforce—did not participate in developing the plan. Since there are many acquisition-related positions in offices throughout TSA, the involvement of key TSA personnel offices—particularly the Human Resources Office—is important to the success of the acquisition workforce plan. It is not yet clear how effective the acquisition workforce plan will be, given that an office responsible for key aspects of implementation did not participate in the plan’s development. In responding to a draft of this report, TSA officials commented that the Human Resources Office was unable to participate in departmentwide acquisition workforce planning because it has been facing critical day-to-day problems associated with supporting growth in the workforce throughout TSA. After functioning for just over 1 year, the office has hired a manager who will be working on its human capital strategic planning in conjunction with the overall departmental human capital planning effort. 
They also said that TSA is drafting a Human Capital Officer Strategy that will focus on identifying career paths for all occupations, including acquisitions and contracting. Effective implementation of the acquisition workforce plan is all the more important because acquisition officials face challenges in attaining sufficient staffing levels. Office of Acquisition officials are concerned that their staff of 61 is not adequate to support the mission. In January 2003, the Deputy Assistant Administrator of the Office of Acquisition conducted a study to determine appropriate staffing levels for TSA’s Office of Acquisition. To estimate staffing requirements, the study assumed contract awards in excess of $4 billion per year. Using three different benchmarks that attempted to estimate staffing needs based on total awarded value of contracts, the study concluded that the Office of Acquisition would require a staff of between 179 and 628 employees. We did not independently assess the TSA study to verify the validity of its results. Further, the Office of Human Resources has not conducted similar studies to determine appropriate staffing levels for other acquisition professionals not assigned to the Office of Acquisition. TSA’s Office of Acquisition has been challenged in trying to maintain its existing acquisition workforce. According to TSA acquisition officials, attrition among its contracting workforce has been a problem. From March 2002 to December 2003, TSA’s Office of Acquisition experienced attrition of approximately 22 percent of its contracting workforce. To identify the causes for attrition, the Office of Acquisition began conducting exit interviews. According to acquisition officials, attrition is a result of the heavy workload, as well as a lack of incentives, such as tuition reimbursement and performance awards. TSA’s human resources officials have not monitored the acquisition workforce throughout the agency to determine whether there are similar difficulties retaining acquisition professionals outside the Office of Acquisition. TSA’s human resources officials said they conducted a job satisfaction survey and plan to begin conducting exit interviews, but acknowledged that the survey and exit interviews would be concerned primarily with screener satisfaction. Efforts to hire acquisition professionals to work in the Office of Acquisition have been undercut by the limited number of qualified applicants and possible negative perceptions about TSA. According to TSA acquisition officials, there is a lack of applicants with adequate acquisition experience, and TSA is competing with other agencies that offer more generous benefits, such as tuition reimbursement and clear career tracks. TSA is authorized to use recruiting and retention incentives. However, according to officials, the agency has not provided funding for these types of incentives. Further, qualified applicants are difficult to recruit because TSA’s role within the Department of Homeland Security is not clearly understood and TSA has a reputation for long work hours. Human resources officials admitted that their focus is primarily on screeners, and they do not know whether other offices are experiencing similar difficulties hiring acquisition professionals. In addition, acquisition and training officials told us that training funds for the acquisition workforce are very limited. 
Training officials said that funds are sufficient for meeting federal training mandates; however, there are no additional training funds for further professional development. Acquisition officials told us that funding for training Office of Acquisition personnel is limited to $1,000 per year per employee—an amount that acquisition officials say is insufficient to train staff who came to TSA without prior contracting experience. The Office of Acquisition’s training funds do not cover training of other acquisition professionals outside this office. Without sufficient training funds, TSA is able to provide few professional development opportunities for the acquisition workforce—limiting career growth. To address the most critical training needs for the acquisition workforce outside the Office of Acquisition—such as program managers, contracting officers’ representatives, and technical monitors—TSA’s Office of Acquisition has proactively developed workshops in-house. However, these workshops are not mandatory for the acquisition workforce. To make strategic, mission-focused acquisition decisions, organizations need knowledge and information management processes and systems that produce credible, reliable, and timely data about the goods and services acquired and the methods used to acquire them. Leading companies use procurement and financial management systems to gather and analyze data to identify opportunities to reduce costs, improve service levels, measure compliance and performance, and manage service providers. For example, organizations need integrated financial management systems that provide reliable, accurate, relevant, and timely financial data to help ensure dollars are well spent. Such data are needed to estimate and control program costs, support funding decisions, and oversee contract spending. Many leading organizations have already implemented an enterprisewide system to integrate financial and operating data to support both management decision-making and external reporting requirements. In a 1994 study of fundamental practices that led to performance improvements in leading private and public organizations, we reported that electronic business system initiatives must be focused on process improvements. Information systems that simply use technology to do the same work the same way, although faster, typically fail or reach only a fraction of their potential. In May 2000, we reported that when developing new electronic business processes, it is important to ensure that current business processes are working well before applying new technology. In fact, agency heads are required by statute to analyze an agency’s mission and revise mission-related and administrative processes, as appropriate, before making significant investments in information technology that are to be used in support of the performance of those missions. Not improving business processes prior to investing in new technology creates the risk of merely automating inefficient ways of doing business. While TSA is participating in the Department of Homeland Security’s efforts to develop functional requirements for an enterprisewide solution that supports the department’s resource management functions—including finance and procurement—TSA does not currently have the strategic information needed to support effective acquisition management decisions. 
Near-term improvements to TSA’s acquisition outcomes will be difficult until TSA has critical knowledge management systems, such as financial and procurement information systems, that allow decision makers to track spending, manage budgets, and collect detailed data on goods and services, suppliers, and spending patterns. Despite the fact that TSA lacks detailed information on the goods and services it purchases, some aggregate data is available. TSA is an active participant in the Department of Homeland Security’s strategic sourcing program, which is using the aggregate data to develop a strategy that will allow the department to leverage its buying for particular commodities. To manage on a day-to-day basis, inform acquisition decisions, and oversee contracts, program and acquisition managers are relying on data derived from informal, ad hoc systems—which are often out of date, incomplete, inaccurate, or otherwise unreliable. TSA’s Office of Acquisition is temporarily relying on an Access database developed in-house to track manually entered procurement information and make acquisition decisions. However, the temporary database does not contain enough information to analyze purchases or measure the acquisition function’s performance. For example, a TSA official told us that when a congressional committee asked for a list of sole-source contracts, TSA officials had to compile the list manually by asking contracting officers which contracts had been awarded on a sole-source basis, because this information was not in TSA’s database. Further, the database does not automatically track the status of a procurement request. Currently, program officials must contact the Office of Acquisition to determine the progress being made on a procurement request—relying on manually compiled paper files, which are frequently incomplete or inaccurate, to track the status of a purchase. TSA is now voluntarily reporting its contract actions to the Homeland Security Contract Information System, which feeds into the Federal Procurement Data System. This system can produce some aggregate data, but lacks detailed information on goods and services purchased. Until a departmentwide solution is developed, TSA’s Office of Acquisition is planning to adopt the Coast Guard’s procurement information system as a faster and more cost-efficient way of obtaining the basic capability to track purchase requests and write contracts. But TSA officials told us that, in its current configuration, the system does not have all the components necessary to enhance strategic acquisition decisions or enable effective evaluation and assessment of acquisition outcomes. An additional challenge to data collection and analysis is TSA’s financial management system. According to TSA officials, the agency’s current financial management system, run by the Department of Transportation, does not provide the information needed to track financial events, summarize financial information, or otherwise provide critical acquisition-related information. For example, because program offices do not have access to reliable financial information, program budget officials cannot certify funds availability to approve a procurement request. As a result, the Office of Finance must certify funds availability centrally. According to finance officials, the inability to track spending has also resulted in difficulties in processing invoices and procurement requests. 
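To make the tracking gaps described above more concrete, the sketch below shows the kind of structured contract-action record that would let an acquisition office answer routine questions, such as which awards were made on a single-source basis or what the current status of a purchase request is, without compiling paper files by hand. It is a hypothetical, minimal illustration rather than TSA’s actual database: the table design, field names, and sample records are assumptions introduced for this example, using Python’s built-in sqlite3 module.

# Hypothetical sketch only: a minimal contract-action record kept in a
# structured store (here, Python's built-in sqlite3 module). All names and
# figures below are illustrative assumptions, not TSA's actual data or design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contract_actions (
        contract_id      TEXT,
        vendor           TEXT,
        description      TEXT,
        award_method     TEXT,   -- e.g., 'competitive', 'single-source'
        contract_type    TEXT,   -- e.g., 'fixed-price', 'cost-reimbursement', 'T&M'
        obligated_amount REAL,
        request_status   TEXT    -- e.g., 'requested', 'awarded', 'closed'
    )
""")

# Illustrative records (invented for the example).
conn.executemany(
    "INSERT INTO contract_actions VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        ("TSA-0001", "Vendor A", "Screener recruiting support", "competitive",
         "cost-reimbursement", 105000000.0, "awarded"),
        ("TSA-0002", "Vendor B", "Office furniture add-on", "single-source",
         "fixed-price", 450000.0, "awarded"),
        ("TSA-0003", "Vendor C", "Baggage screening equipment", "competitive",
         "fixed-price", 38000000.0, "requested"),
    ],
)

# With structured data, a single query answers the kind of question that TSA
# officials described compiling manually, such as a list of single-source awards.
for row in conn.execute(
    "SELECT contract_id, vendor, obligated_amount FROM contract_actions "
    "WHERE award_method = 'single-source'"
):
    print(row)

Even a structure this simple would also support the request-status tracking and spending-pattern analysis that the temporary Access database and manually compiled files cannot reliably provide.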
Here too, while the department-level enterprise architecture effort is proceeding, TSA is in the process of adopting the Coast Guard’s financial management system, which TSA finance officials say is more user-friendly and provides better reporting capabilities and access than the system TSA currently uses. It is unclear, however, whether the Coast Guard’s financial management software will facilitate TSA’s financial accountability activities. Independent auditors gave the Department of Homeland Security’s financial statement a qualified balance sheet opinion based, for the most part, on problems with the Coast Guard’s financial statements. The Coast Guard was unable to provide sufficient documentation to support certain financial conditions prior to the completion of the audit. As a new agency, TSA was tasked to build an organization from the ground up to meet a critical and demanding mandate. TSA worked quickly to put a transportation security workforce in place, creating basic organizational and acquisition infrastructures and subordinating cost concerns and disciplined acquisition practices to meet deadlines. With the challenging initial mandate fulfilled, TSA has begun to build a permanent infrastructure. TSA now has the opportunity to build a model acquisition function based on best practices. The opportunity may be lost, however, if TSA fails to think strategically about the practices it uses to carry out its acquisition function. By assessing its existing organizational alignment, policies and processes, human capital approaches, and knowledge and information systems against a framework of best practices and in coordination with the Department of Homeland Security, TSA can identify weaknesses and risk areas to target for improvement. Attention from TSA’s leadership is needed to help TSA’s Office of Acquisition improve acquisition practices agencywide—focusing on all elements key to a successful acquisition program. Ensuring a strong workforce and developing well-built procurement and financial management systems, coupled with a strong message of compliance with policies and processes and supported by performance measures, would demonstrate the agency’s commitment to effective acquisition practices. To help ensure that TSA receives the goods and services it needs at the best value to the government, we recommend that the Secretary of Homeland Security direct the Administrator of the Transportation Security Administration to take the following three actions: Elevate the Office of Acquisition to an appropriate level within TSA to enable it to identify, analyze, prioritize, and coordinate agencywide acquisition needs. Develop an adequate system of internal controls, performance measures, and incentives to ensure that policies and processes for ensuring efficient and effective acquisitions are implemented appropriately. Direct the TSA Human Capital Office to do the following in coordination with key offices in the Department of Homeland Security: assess TSA’s current acquisition workforce (as defined by the Department of Homeland Security) to determine the number, skills, and competencies of the workforce; identify any gaps in the number, skills, and competencies of the current acquisition workforce; and develop strategies to address any gaps identified, including plans to attract, retain, and train the workforce. 
We also recommend that the Secretary of Homeland Security ensure that its planned departmentwide knowledge management system provides TSA sufficient data and analytic capability to measure and analyze spending activities and performance—and thereby highlight opportunities to reduce costs and improve service levels; support effective oversight of acquisitions; and facilitate the timely reporting of the agency’s acquisition activities and its compliance with acquisition policies and processes. In written comments on a draft of this report, the Department of Homeland Security generally concurred with our report and recommendations and stated that our identification of areas for improvement will help to develop the efficiency and effectiveness of TSA’s Office of Acquisition. In response to our recommendation to elevate the position of the Office of Acquisition, the department stated that the office has been elevated once before. We have acknowledged this in our report and note that the office was elevated before we began our review of TSA’s acquisition function. Our review found that even after the office was elevated, it remained at an organizational level too low to oversee the acquisition process, coordinate acquisition activities, and enforce acquisition policies effectively. The department further noted that the Department of Homeland Security’s Chief Procurement Officer is on par with the Chief Financial Officer and Chief Information Officer, stating that TSA will consider this option along with others as it works toward improving the efficiency and effectiveness of its acquisition program. Whichever option is chosen, we maintain that the Office of Acquisition should be elevated to an appropriate level within TSA to enable it to identify, analyze, prioritize, and coordinate agencywide acquisition needs. The department also commented that its Office of Human Resources has only been functioning as a distinct office for a year and that after focusing on establishing policies, processes, and effective contract management services, it has hired a manager for planning. TSA has committed to providing a more proactive approach to all human capital planning. The department also noted that it is moving towards the enterprisewide implementation of Oracle Financials and Prism starting in October 2004, stating that the knowledge management tools recommended in the draft report will be available to provide TSA sufficient data and analytic capability to evaluate its processes, performance, and spending. Our report acknowledges that TSA will be using these Coast Guard procurement and financial systems; however, we maintain that these systems do not have all the components necessary to enhance strategic acquisition decisions or enable effective evaluation and assessment of acquisition outcomes. As requested by your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to interested congressional committees, the Secretary of Homeland Security, and the Administrator of the Transportation Security Administration. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-4841 or Blake Ainsworth, Assistant Director, at (202) 512-4609. 
Other major contributors to this report were Lara Laufer, Gordon Lusby, William Petrick, Shannon Simpson, Karen Sloan, Adam Vodraska, and Kelli Ann Walther. To review how well TSA is positioned to carry out its acquisition function, we used GAO’s previous best practices work as our criteria. Our studies of best business practices show four interrelated elements—organizational alignment, policies and processes, human capital, and information management—that help to promote good acquisition outcomes. We used each of the elements to assess TSA’s acquisition function. To assess TSA’s acquisition function across the four elements, we interviewed senior agency officials, including the Chief Support Systems Officer and a representative for the Acting Chief Operating Officer. We also interviewed management and staff within the Office of Acquisition regarding acquisition policy and processes, contracting training, program support, and quality assurance. To assess how well the organization is aligned to facilitate the integration of the acquisition function throughout the agency, we reviewed TSA organizational charts, process flowcharts, and presentations by agency officials on key roles and responsibilities to understand how the acquisition process is integrated into TSA’s organization. To assess leadership commitment to good acquisition, we also reviewed TSA’s strategic plan and investment review board meeting agendas, minutes, and investment criteria. For an understanding of organizational alignment and coordination, we interviewed the Chief Support Systems Officer and a representative for the Acting Chief Operating Officer. To assess how well acquisition activities are coordinated and carried out throughout the agency, we interviewed Assistant Administrators of all major Operations offices—Aviation Operations, Maritime and Land Security, Security Intelligence, and Operations Policy—and Mission Support offices—Finance and Administration, Human Resources, Information Technology, and Workforce Performance and Training. To determine TSA’s current policies and processes, we reviewed applicable laws and policies that granted TSA the flexibility to use and modify FAA’s system, and we also interviewed a member of TSA’s Legal Counsel. We analyzed FAA’s Acquisition Management System and reviewed TSA’s modified version of this guidance. To assess TSA’s progress towards developing and implementing policies and processes, we reviewed TSA and Department of Homeland Security memoranda, directives, internal newsletters, handbooks, quality assurance checklists, and policy documents. To analyze TSA’s effectiveness in implementing policies and processes, we interviewed Assistant Administrators for each of the Operations offices, as well as the Division Directors within the Office of Acquisition for each of the contracting support offices. Additionally, we interviewed management and staff within the Office of Acquisition regarding acquisition policy and processes, program support, and quality assurance. To assess TSA’s effectiveness in hiring, developing, and retaining its acquisition workforce, we interviewed Office of Acquisition management and staff, the Director and staff of the Workforce Performance and Training Office, and two Assistant Administrators for Human Resources. We also interviewed the Department of Homeland Security’s Acquisition Workforce Manager regarding the department’s Acquisition Workforce Plan and TSA’s role in its development and implementation. 
We reviewed documents and spoke with agency officials about acquisition workforce training requirements, available courses, and means of tracking acquisition training and other workforce data. In addition, we reviewed studies on TSA’s acquisition workforce size, one of which was conducted by a contractor on behalf of TSA. To help us assess how effectively the existing TSA information management system enables the agency to track and manage its acquisition process and facilitate strategic decision-making, officials from the Office of Acquisition, Assistant Administrators for each of the Operations offices, and Division Directors within the Office of Acquisition explained TSA’s existing and planned information systems and outlined their information needs. The same officials explained the capabilities of the information management systems to perform acquisition transactions in support of TSA’s mission. Office of Acquisition staff discussed their data entry and internal control processes and shared supporting documentation to help us understand their current systems. To assess the capabilities and limitations of the financial management system and financial management processes, we interviewed the Chief Financial Officer and Chief Technology Officer, as well as additional Finance and Administration staff. To determine how much knowledge and information is available and accessible to TSA management, we reviewed the Office of Acquisition’s procurement database and Strategic Sourcing operations documents. To assess TSA’s plans for future information systems, we reviewed documentation describing the operations of TSA’s systems, proposals for planned initiatives, and summaries of existing challenges. A representative from the Office of Strategic Management & Analysis provided insight about TSA’s strategic direction. To gain further insight into how TSA’s infrastructure affects its acquisitions, we judgmentally selected 40 contract files for review. Using TSA’s database of contracts, we identified four types of contract actions from which to sample—new contracts, task and delivery orders, blanket purchase agreements, and purchase orders. In each category, we selected contracts based on award value. Nineteen of the 40 files were not available or were removed from the sample for the following reasons: the file was being closed out at a payment center, the contract had been awarded by FAA rather than TSA, the file was a duplicate, the contract had already been reviewed by the Inspector General, the file was actually an interagency agreement, or the contract was being managed by the Defense Contract Management Agency. We reviewed the remaining 21 pre- and post-award contract files to assess key aspects of the acquisition process at TSA—such as requirements development, market research and analysis, acquisition planning, procurement method (including use of competitive procedures and single-source contracting), and contract administration. We conducted our review from July 2003 through March 2004 in accordance with generally accepted government auditing standards. Figure 3 shows key activities performed during TSA’s acquisition process. Table 1 identifies some of the major acquisition responsibilities associated with the roles of key personnel. Table 2 shows the Department of Transportation Inspector General testimony and reports reviewed to identify issues associated with the contracting process at TSA. 
Table 3 shows the Department of Homeland Security Inspector General reports reviewed to identify problems associated with the contracting process at the Transportation Security Administration.
The Transportation Security Administration (TSA), within the Department of Homeland Security, was established to secure the nation's transportation systems, beginning with commercial airports. To meet its mission, TSA has awarded over $8.5 billion in contracts since its creation in 2001. Spending on contracts accounted for 48 percent of TSA's fiscal year 2003 budget. Because of TSA's reliance on contracts to carry out its mission, its acquisition infrastructure—including oversight, policies and processes, acquisition workforce, and information about its acquisitions—is critical. GAO was asked to review TSA's acquisition infrastructure to assess how well TSA is positioned to carry out its acquisition function. Since its inception, TSA has been focused on meeting an urgent mandate to deploy more than 55,000 airport passenger and baggage screening personnel and equipment to secure the nation's airways. To do so, it created basic organizational and acquisition infrastructures. However, our review of TSA's acquisition function and inspector general reports identified a number of challenges in each of the four areas we assessed. Organizational alignment and leadership: TSA's Office of Acquisition is at an organizational level too low to oversee the acquisition process, coordinate acquisition activities, and enforce acquisition policies effectively. The position of the office hinders its ability to help ensure that TSA follows acquisition processes that enable the agency to get the best value on goods and services. Policies and processes: TSA's acquisition policies and processes emphasize personal accountability, good judgment, justifiable business decisions, and integrated acquisition teams. However, effective implementation of TSA's policies and processes has been hindered by several factors. For example, TSA has not effectively communicated its acquisition policies throughout the agency. TSA also lacks internal controls to identify and address implementation issues and performance measures to determine whether acquisition policies are achieving desired results. Human capital: TSA risks an imbalance in the size and capabilities of its acquisition workforce that could diminish the performance of the acquisition function throughout the agency. TSA's Office of Acquisition worked closely with the Department of Homeland Security to develop and begin implementing an acquisition workforce plan. However, TSA's Human Resource Office, which is responsible for recruiting and hiring the acquisition workforce agencywide, did not participate in developing the acquisition workforce plan. Without input from the Human Resources Office, it is not clear that the workforce plan can be effectively implemented throughout the agency. In addition, the Office of Acquisition reports that it is having difficulty attracting, developing, and retaining a workforce with the acquisition knowledge and skills required to accomplish TSA's mission. Knowledge and information management: While TSA is participating in the Department of Homeland Security's efforts to develop requirements for an enterprisewide solution, TSA does not currently have the strategic information needed to support effective acquisition management decisions. To manage on a day-to-day basis, program and acquisition managers are relying on data derived from informal, ad hoc systems. 
TSA is in the process of adopting the Coast Guard's procurement and financial systems as interim solutions until the Department of Homeland Security implements a departmentwide system. However, near-term improvement in acquisition outcomes will be difficult because TSA does not have the data needed to analyze and improve its acquisition processes.
In its role as the nation’s tax collector, IRS is responsible for collecting taxes, processing tax returns, and enforcing the nation’s tax laws. Treasury’s Financial Management Service (FMS) is the central disbursing authority for civilian agencies. With limited exceptions, FMS processes most disbursements for civilian agencies in the executive branch. FMS is also the federal government’s central debt collection agency. Since fiscal year 2000, FMS has operated the Federal Payment Levy Program (FPLP) in conjunction with IRS to collect unpaid federal taxes, including tax debt owed by federal contractors. Since 1990, we have designated IRS’s enforcement of tax laws as a governmentwide high-risk area. In attempting to ensure that taxpayers fulfill their obligations, IRS is challenged on virtually every front. While IRS’s enforcement workload—measured by the number of taxpayer returns filed—has continually increased, the resources IRS was able to dedicate to enforcing the tax laws declined until fiscal year 2005. Enforcement efforts are designed to increase compliance and reduce the tax gap. However, IRS recently reported that the gross tax gap, that is, the difference between what taxpayers should pay on a timely basis and what they actually pay, exceeds $300 billion annually. IRS estimated the gross tax gap to be between $312 billion and $353 billion. IRS further reported that its enforcement activities, coupled with late payments, recover just $55 billion of that amount, leaving a net tax gap of $257 billion to $298 billion. Preliminary IRS estimates indicate that noncompliance is from 15 percent to 16.6 percent of taxpayers’ true tax liability, which further fuels congressional and public concern that declines in IRS compliance and collections programs are eroding taxpayer confidence in the fairness of our federal tax system. In fiscal year 2004, FMS made over 940 million disbursements totaling over $1.5 trillion. FMS’s major disbursing activities include paying Social Security benefits, veterans’ compensation, federal tax refunds, federal salaries and pensions, and contractor and miscellaneous payments. For statutory and logistical reasons, a limited number of other governmental agencies, such as DOD and the U.S. Postal Service, have their own authority to disburse funds. Those agencies that have the authority to disburse federal funds are referred to as Non-Treasury Disbursing Offices. Although FMS is the disbursing agent for most of the federal government, that is, it physically writes the checks or sends the electronic payments, it does so on behalf of, and at the direction of, the various federal agencies. Federal agencies may have multiple offices or locations that perform accounting for and preparation of payment information, referred to by FMS as agency locations or paying stations. To generate a payment, an agency payment location sends FMS a payment file, along with an accompanying payment certification requesting that FMS disburse funds. Agencies typically send the certification and detailed payment information in an automated form, and FMS loads the payment data into its payment system. Once loaded, FMS verifies that all payment requests were properly authorized and certified and that the amount on the payment file agrees with the certification amount before processing the payments for disbursement. FMS disburses federal funds via three main mechanisms: electronic funds transfer (EFT) via Automated Clearing House (ACH), Fedwire, and checks.
Fedwire is also an EFT that provides for immediate transfers of funds from the government’s account in the Federal Reserve to the contractors’ bank accounts. According to FMS records, of the approximately $1.5 trillion disbursed by FMS in fiscal year 2004, about 66 percent was disbursed using ACH, 17 percent via Fedwire, and the remaining 17 percent as checks. Once payments are disbursed, payment information related to ACH and checks is sent to FMS’s Payments, Claims, and Enhanced Reconciliation (PACER) system, which maintains payment data and provides federal payment agencies online access to these data. Among other payments, PACER contained about 12.9 million contractor payments valued at $247 billion for fiscal year 2004. Unlike checks and ACH payments, detailed information regarding Fedwire payments is not sent to the PACER payment database. In 1996, Congress passed the Debt Collection Improvement Act of 1996 (DCIA) to maximize the collection of delinquent nontax debts owed to federal agencies. As part of implementing its responsibilities under DCIA, Treasury established the Treasury Offset Program (TOP), to be administered by FMS, to centralize the process by which certain federal payments are withheld or reduced (offset) to collect delinquent nontax debts owed to federal agencies. Under the regulations implementing DCIA, FMS and other disbursing agencies are required to compare their payment records with debt recorded in the TOP database. If a match occurs, the disbursing agency must offset the payment, thereby reducing or eliminating the nontax debt. To improve collection of unpaid taxes, the Taxpayer Relief Act of 1997 authorized IRS to continuously levy up to 15 percent of specified federal payments made to businesses and individuals with unpaid federal taxes. The continuous levy program, now referred to as FPLP, was implemented in July 2000. The FPLP provides for the levy of various federal payments, including federal employee retirement payments, certain Social Security payments, selected federal salaries, and contractor payments. For payments disbursed by FMS on behalf of most federal agencies, the amount to be levied and credited to IRS is deducted before FMS disburses the payment. In fiscal year 2004, IRS received $114 million through the FPLP for delinquent taxes, $16 million of which was from payments to civilian contractors. IRS coordinated with FMS to use the TOP database as the means of collecting taxes under the FPLP. Each week IRS sends FMS an extract of its tax debt files containing updated account balances of tax debts that are already in TOP, the new tax debts that need to be added to TOP, and the tax debts in TOP that need to be rescinded. These data are uploaded into TOP. For a payment to be levied through the FPLP, a debt has to exist in TOP and a payment has to be available. Figure 1 provides an overview of this process. FMS sends payment data to TOP to be matched against unpaid federal taxes. TOP electronically compares the names and taxpayer identification numbers (TINs) on the payment files to the control names (first four characters of the names) and TINs of the debtors listed in TOP. If there is a match and IRS has updated TOP to reflect that it has completed all legal notifications, the federal payment is reduced (levied) to help satisfy the unpaid federal taxes. To address issues raised by our February 12, 2004, report and testimony, a multi-agency task force was established to help improve the FPLP.
The task force includes representatives from the Department of Defense, Defense Finance and Accounting Service, IRS, FMS, General Services Administration (GSA), Office of Management and Budget, and Department of Justice. The objectives of the task force were to (1) identify and implement short-term and long-term operational changes to improve federal tax compliance of DOD contractors, including increasing the number of tax debts and the number of DOD contractor payments available for matching through TOP, and (2) identify potential changes that would enhance efforts to address federal contractor tax delinquencies and prevent future occurrences of tax abuse by federal contractors. The task force issued its report in October 2004. In its report, the task force identified actions and made recommendations to improve tax compliance of federal contractors, including maximizing the number of delinquent tax debts that IRS makes available for matching, maximizing DOD payment information available for matching, increasing the effectiveness of the matching and levy processes, and preventing federal contract awards to those who abuse the tax system. A number of the improvements identified by the task force have already been implemented. Our analysis indicates that the failure to pay taxes found among DOD contractors also exists among civilian agency contractors and totals billions of dollars. Our analysis of FMS and IRS records indicates that during fiscal year 2004, FMS made payments on behalf of civilian agencies to about 33,000 federal contractors with over $3.3 billion in unpaid federal taxes as of September 30, 2004. We estimate that if there were no legal or administrative impediments to the levy program—if all unpaid federal taxes were considered and all payments to these 33,000 contractors with unpaid federal taxes were subjected to the 15 percent levy—FMS could have collected as much as $350 million in unpaid federal taxes from civilian contractors during fiscal year 2004. Because some unpaid federal taxes are excluded due to statutory requirements, IRS and FMS would never be able to collect the entire amount. Over half of the $3.3 billion in tax debt was coded by IRS as being excluded from the levy program for statutory reasons, including contractors being in bankruptcy, having installment payment agreements, or awaiting the completion of the required legal notifications regarding the tax debt. However, many improvements can be made to lessen the tax levy collection gap. As will be discussed later in the report, the American Jobs Creation Act of 2004 increased the maximum levy to 100 percent of any specified payments to contractors for goods and services provided to the federal government. Once that provision is implemented, the maximum amount that could be collected through the levy will be even greater. The amount of unpaid taxes owed by these contractors paid through Treasury’s FMS ranged from a small amount owed by an individual for a single tax period to about $13 million owed by a group of related businesses for over 300 tax periods. Unpaid taxes owed by these contractors included payroll, corporate income, excise, unemployment, individual income, and other types of taxes. In the case of unpaid payroll taxes, employers withheld federal taxes from employees’ wages, but did not send the withheld payroll taxes or the employers’ matching amounts to IRS as required by law, instead diverting the money for personal gain or to fund their businesses.
One IRS official acknowledged that small businesses frequently are undercapitalized and use the tax money as operating capital. However, employers are subject to civil and criminal penalties if they do not remit payroll taxes to the federal government. When an employer withholds taxes from an employee’s wages, the employer is deemed to have a responsibility to deposit in a separate bank account these amounts held “in trust” for the federal government until making a federal tax deposit in that amount. To the extent these withheld amounts are not forwarded to the federal government, the employer is liable for these amounts, as well as the employer’s matching Social Security contributions. Individuals within the business (e.g., corporate officers) may be held personally liable for the withheld amounts not forwarded, and they can be assessed a civil monetary penalty known as a trust fund recovery penalty (TFRP). Willful failure to remit payroll taxes is a criminal felony offense punishable by imprisonment of not more than 5 years, while the failure to properly segregate payroll taxes can be a criminal misdemeanor offense punishable by imprisonment of up to a year. The employee is not responsible for the employer’s failure to remit payroll taxes since the employer is responsible for submitting the amounts withheld. The Social Security and Medicare trust funds are subsidized or made whole for unpaid payroll taxes by the general fund, as we discussed in previous reports. Over time, the amount of this subsidy is significant. As shown in figure 2, over a third of the total tax amount owed by civilian contractors was for unpaid payroll taxes and over 40 percent was for corporate income taxes. The remainder consisted of individual income taxes and other taxes. As discussed later in our case studies, some of these contractors also owe state tax debts. A substantial amount of the unpaid federal taxes shown in IRS records as owed by civilian contractors had been outstanding for several years. As reflected in figure 3, over half of the unpaid taxes owed by civilian contractors were for tax periods prior to calendar year 2000. Prompt collection of unpaid taxes is vital because, as our previous work has shown, as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. This is due, in part, to the continued accrual of interest and penalties on the outstanding federal taxes, which, over time, can dwarf the original tax obligation. The amount of unpaid federal taxes reported above does not include all tax debts owed by the civilian agency contractors due to statutory provisions that give IRS a finite period under which it can seek to collect on unpaid taxes. Generally, there is a 10-year statutory collection period beyond which IRS is prohibited from attempting to collect tax debt. Consequently, if the contractors owe federal taxes beyond the 10-year statutory collection period, the older tax debt may have been removed from IRS’s records. We were unable to determine the amount of tax debt that had been removed.
The amount of unpaid federal taxes we identified among civilian agency contractors—$3.3 billion—is likely understated for three main reasons: (1) we intentionally limited our scope to contractors with agreed-to federal tax debt for tax periods prior to 2004 that had substantial amounts of both unpaid taxes and payments from civilian agencies; (2) FMS disbursement files did not always contain the information we needed to determine whether the contractors owed federal taxes; and (3) the IRS taxpayer account database contains errors, and the database reflects only the amount of unpaid taxes either reported by the taxpayer on a tax return or assessed by IRS through its various enforcement programs. The IRS database does not reflect amounts owed by businesses and individuals that have not filed tax returns and for which IRS has not assessed tax amounts due. To avoid overestimating the amount owed by government contractors, we took a number of steps to exclude unpaid federal taxes that federal contractors recently incurred or that are not individually significant. For example, some recently assessed tax debts that appear as unpaid taxes through a matching of PACER and IRS records may involve matters that are routinely resolved between the taxpayer and IRS, with the taxes paid, abated, or both within a short period. We attempted to eliminate these types of debt by focusing on unpaid federal taxes for tax periods prior to calendar year 2004 and eliminating tax debt of $100 or less. We also eliminated all tax debt identified by IRS as not being agreed to by the contractor. Additionally, we eliminated contractors with tax debt that received payments of $100 or less during fiscal year 2004. Regarding the completeness of FMS disbursement information, we found that some contractors paid through FMS could not be identified due to blank or obviously erroneous TINs, such as TINs made up of all zeros or all nines. The lack of TINs prevented us from determining whether contractors had unpaid federal taxes and, if so, the amount of unpaid taxes owed by the contractors. Additionally, as will be discussed in more detail in a later section of this report, FMS does not maintain detailed electronic payment information for a large disbursement system—Fedwire—that also makes disbursements to contractors, and thus the value of unpaid taxes associated with contractors paid through that system could not be determined. As we have previously reported, IRS records contain errors that affect the accuracy of taxpayer account information. Consequently, some of the $3.3 billion may not reflect true unpaid taxes, although we cannot quantify this amount. Nonetheless, we believe the $3.3 billion represents a conservative estimate of unpaid federal taxes owed by civilian contractors paid through FMS. Also limiting the completeness of our estimate of the unpaid federal taxes of civilian contractors is the fact that the IRS tax database reflects only the amount of unpaid taxes either reported by the contractor on a tax return or assessed by IRS through its various enforcement programs. The IRS database does not reflect amounts owed by businesses and individuals that have not filed tax returns and for which IRS has not assessed tax amounts due. During our review, we identified instances in which civilian contractors failed to file tax returns for a particular tax period and, therefore, were listed in IRS records as having no unpaid taxes for that period. 
Further, our analysis did not attempt to account for businesses or individuals that purposely underreported income and were not specifically identified by IRS. According to IRS, underreporting of income is the largest component of the roughly $300 billion tax gap. Preliminary IRS estimates show underreporting accounts for more than 80 percent of the total tax gap. Consequently, the true extent of unpaid taxes for these businesses and individuals is not known. There is a large tax levy collection gap between the maximum potential levy we calculated and the amount FMS actually collected under the FPLP. We estimate that if there were no legal or administrative provisions that remove some tax debt from the levy program and if all PACER contractor payments were subjected to a 15 percent levy to satisfy all the unpaid taxes of those civilian contractors, FMS could have collected as much as $350 million in fiscal year 2004. However, during fiscal year 2004, FMS collected about $16 million from civilian contractors—or about 5 percent of the approximately $350 million maximum levy collection estimate we calculated. As discussed earlier in this report, because some unpaid federal taxes are excluded due to statutory requirements, IRS and FMS will never be able to close the levy collection gap completely. For example, over half of the $3.3 billion in tax debt was coded by IRS as being excluded from the levy program for statutory reasons, including contractors being in bankruptcy, having installment agreements, or awaiting the completion of the required initial legal notifications. However, many improvements can be made to narrow the tax levy collection gap. We found that a vast majority of the collection gap is attributable to debts that are excluded from TOP because of current law and IRS policies. While we will provide an overview of the exclusions later in this report, we will examine in detail in a later report the accuracy and reasonableness of the exclusions and IRS’s applications of those exclusions. The remaining gap—to be covered in detail in this report—between what could be collected and what was actually collected is attributable to the fact that not all FMS payments could be matched against unpaid federal taxes for levy. We estimate that the federal government could have collected at least $50 million more in unpaid federal taxes in fiscal year 2004 using the FPLP if all PACER contractor payments could be matched against tax debts in TOP. The actual collection of unpaid federal taxes from the levy program does not approach our maximum estimate largely because IRS excludes—either for statutory or policy reasons—almost two-thirds of unpaid federal taxes from potential levy collection. Since we last reported on DOD contractors that abused the federal tax system, IRS has added about $28 billion in unpaid federal taxes to the levy program from categories it formerly excluded (from its total population of all tax debts). Despite these efforts, the amount that is excluded from the levy program is significant. Our analysis of all tax debt recorded by IRS—$269 billion in unpaid taxes, including amounts owed by civilian contractors—indicates that $171 billion was excluded from potential levy collection as of April 2005. For the civilian contractors in fiscal year 2004, these exclusions accounted for over 80 percent of the levy collection gap.
As shown in figure 4, $71 billion (26 percent) of all unpaid federal taxes are excluded from the levy program as a result of statutory requirements, while another $100 billion (37 percent) of unpaid federal taxes are excluded due to IRS policy decisions, leaving approximately $98 billion (37 percent) potentially subject to collection through the levy program. While the exclusion of unpaid federal taxes from the levy program may be justified depending on the circumstances, it nevertheless results in the loss of potentially hundreds of millions of dollars in tax collections from the levy program. In addition to not sending the majority of unpaid federal taxes to the levy program, FMS records indicate that as of September 30, 2004, about 70 percent of the unpaid taxes that IRS submitted to TOP had not yet completed the collection due process requirements necessary to allow the levying of payments to begin. As a result, only a small portion of unpaid federal taxes is available for immediate levy. We will examine in detail in a later report the accuracy and reasonableness of the IRS exclusions and IRS’s applications of those exclusions. What follows is a more detailed description of the amounts and types of unpaid taxes excluded from the FPLP for statutory and policy reasons, as well as a detailed discussion of the limitations associated with much of the unpaid federal taxes that are referred to the FPLP. According to IRS records, as of April 2005, IRS had coded about $71 billion of unpaid federal taxes as being legally excluded from the levy program. As shown in figure 5, IRS records indicate that bankruptcy and taxpayer agreements—including installment or offer in compromise (OIC) agreements—each account for about a quarter of all statutory exclusions. Another $27 billion (38 percent) of the $71 billion in statutory exclusions is due to IRS not having completed all initial taxpayer notifications required by law before a tax debt could be referred to TOP—these are cases that IRS refers to as being in notice status. For tax debt in notice status—the first phase of IRS’s collection process—IRS sends a series of up to four separate notices to tax debtors asking them to pay their tax debt. Upon receipt of each of the notices, the debtors have a minimum of 30 days to respond in various ways: disagree with IRS’s assessment and collection of the tax liability and appeal; negotiate with IRS to set up an alternative payment arrangement, such as an installment agreement or an offer in compromise; apply to IRS for a hardship determination, whereby tax debtors demonstrate to IRS that making any payments at all would result in a significant financial hardship; or elect to pay off the debt in full. Each time the debtor responds to a notice, the matter must be resolved before IRS can proceed with further notices or other collection actions. For example, IRS must determine whether to accept or reject an installment agreement or determine that the tax debtor is in financial hardship before proceeding with the collection process. During this entire notice phase, IRS is required to exclude the tax debt from the levy program. IRS does not begin further collection action until the series of initial notifications is complete; for example, the unpaid federal taxes remain excluded from levy during this period. IRS also sends out an annual notification letter requesting payment of the unpaid federal taxes.
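To make the notice-status exclusion concrete, the following is a minimal sketch, in Python, of how a system might decide whether a tax debt is still in the notice phase and therefore excluded from levy. The data structure, field names, and dates are hypothetical illustrations, not IRS's actual systems or business rules.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical model of the notice-status exclusion described above: a tax debt
# stays out of the levy program until the series of up to four initial notices is
# complete, any taxpayer response has been resolved, and the final response window
# has passed. All names and values here are illustrative.
NOTICE_SERIES_LENGTH = 4                    # up to four separate notices
MIN_RESPONSE_WINDOW = timedelta(days=30)    # minimum response time per notice

@dataclass
class TaxDebt:
    notices_sent: int                       # initial notices mailed so far
    last_notice_date: Optional[date]        # date the most recent notice was mailed
    open_response: bool                     # an appeal, agreement request, or hardship claim is unresolved

def in_notice_status(debt: TaxDebt, today: date) -> bool:
    """Return True if the debt is still in the notice phase and therefore excluded from levy."""
    if debt.notices_sent < NOTICE_SERIES_LENGTH:
        return True                         # the notice series is not yet complete
    if debt.open_response:
        return True                         # the taxpayer's response must be resolved first
    if debt.last_notice_date and today < debt.last_notice_date + MIN_RESPONSE_WINDOW:
        return True                         # still inside the final response window
    return False                            # initial notifications complete; debt can be referred for levy

# The fourth notice went out two weeks ago, so this debt is still excluded.
debt = TaxDebt(notices_sent=4, last_notice_date=date(2004, 6, 1), open_response=False)
print(in_notice_status(debt, today=date(2004, 6, 15)))    # True
```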
In addition to legal restrictions, IRS makes policy and operational decisions that exclude about $100 billion in unpaid tax debts from the levy program. Categories of unpaid tax debts IRS has coded as being excluded due to policy decisions include those of tax debtors with financial hardship, tax debtors working with IRS to voluntarily comply, and tax debtors under active criminal investigation. Figure 6 shows that slightly over half ($51 billion) of all policy exclusions are due to IRS’s determination that the tax debtor is in financial hardship. Another 40 percent ($40 billion) of the policy exclusions include tax debtors who are deceased and those tax debtors that have filed appeals, claims, or amended returns. About 7 percent ($7 billion), referred to as tax administration exclusions, is excluded from the levy program because an IRS official is working to encourage the affected tax debtor to voluntarily pay the federal taxes owed. About 2 percent ($2 billion) is excluded due to active criminal investigations. Since our 2004 report on DOD contractors who abuse the tax system, in which we recommended that IRS change or eliminate policies that prevent businesses and individuals with federal contracts from entering the levy program, IRS has taken specific actions to include more tax debt in the levy program. Specifically, IRS submitted an additional $28 billion to the levy program by removing many of the systemic exclusions for cases being actively pursued by IRS officials for collection (i.e., those excluded for tax administration purposes). As a result of these and other improvements (including DOD submitting more of its payments to the levy program), collections from contractor payments under the levy program increased by over 200 percent from fiscal year 2003 to fiscal year 2004. Collections continued to increase in the first half of fiscal year 2005. Our past audits have indicated that IRS records contain coding errors that affect the accuracy of taxpayer account information—including exclusion categories. While we did not evaluate the appropriateness of the exclusion categories in this report, the categories used by IRS are only as good as the codes IRS has input into its systems. In our previous work on DOD contractors with tax debt, we found that inaccurate coding at times prevented both IRS collection action and cases from entering the levy program. Therefore, the effective management of these codes is critical because if these exclusion codes (such as codes identifying a contractor as being in bankruptcy or having an installment agreement) remain in the system for long periods, either because IRS delays processing taxpayer agreements or because IRS fails to input or reverse codes after processing is complete, cases may be needlessly excluded from the levy program. FMS records indicate that as of September 30, 2004, about 70 percent of the tax debt IRS sent to the levy program was not available for immediate levy because IRS had not completed all the legal notifications necessary before the levying of payments can begin. In addition to the initial series of notice letters that are sent out at the beginning of IRS’s collection efforts, IRS is required to send the debtor an additional notice of intent to levy that includes information regarding the tax debtor’s right to a hearing prior to levy action—also referred to as a collection due process notice.
Although the tax debtor has up to 30 days to respond to this notice under the law, IRS has chosen to wait 10 weeks before proceeding with collection actions, such as levying. Until the due process notification and waiting period have been completed, a tax debt may be submitted to TOP but is not subject to immediate levy. For civilian contractors, IRS generally does not initiate the collection due process notifications until FMS identifies that the contractor is to receive a federal payment. Once the debtor receives the notice of impending levy, IRS gives the debtor up to 10 weeks to respond to the notice. As in the initial notice process, the debtor can respond to IRS by disagreeing with IRS’s assessment (in this case, filing for a due process hearing), negotiating with IRS to set up an alternative payment arrangement, applying for a hardship determination, or making payment in full. If a tax debtor does not respond to IRS and take advantage of those options within the notification period, IRS will instruct FMS to start levying future payments. The tax debt in the levy program is then coded for immediate levy. For future payments, FMS will proceed with the continuous levy by reducing each scheduled payment to the tax debtor by 15 percent—or the exact amount of tax owed if it is less than 15 percent of the payment—until the tax debt is satisfied. Not having tax debt ready for levy results in the loss of millions of dollars of tax revenue for the federal government. For example, for our 50 case studies we identified payments totaling $1.6 million in which the TIN of the contractor matched an IRS tax debt, but no levy was taken because IRS had not yet completed the collection due process activities. This situation contributes to these contractors facing little or no consequences for their abuse of the federal tax system. IRS has an automated process in place by which, once a match is made against a tax debt in the levy program, a due process notice is automatically sent to the contractor. However, the payments made between the time of the initial match and when IRS completes its due process notification process—usually 10 weeks—cannot be levied and the potential collections are lost to the federal government. Additionally, if the tax debtor files for a due process hearing once it receives the notice, the tax debt will continue to be excluded from levy until the process—which could take months—is complete. Prior to 1998, IRS was authorized to levy a payment immediately upon matching a tax debt with a federal payment so long as the collection due process notice had been sent. At that time, IRS did not have to wait before proceeding with the levy. This allowed the levy program to capture the payment before it was made to preserve the government’s right to the payment while providing the contractor a postlevy due process. However, the IRS Restructuring and Reform Act of 1998 requires that debtors be afforded an opportunity for a collection due process hearing before a levy action can take place. To comply with this provision, IRS has chosen to wait a minimum of 10 weeks for the tax debtor to respond to the collection due process notice. However, IRS’s 10-week waiting period causes the federal government to miss levying some contractor payments. IRS has acknowledged that the delay in initiating the due process notice can result in lost collections and is investigating ways to begin the process earlier. 
The joint task force established after our previous audit has supported making the due process for the FPLP a postlevy process. This would allow IRS to levy payments when first identified and provide contractors with procedural due process remedies afterwards. To expedite the notification, IRS officials stated that they had begun matching new DOD contracts valued at over $25,000 against tax debt and sending out collection due process notifications at that time rather than waiting until payments are made. To address this same issue, the task force is also exploring avenues to combine the collection due process notice with the last of its initial notification letters sent to tax debtors. This would allow IRS to have all tax debt legally ready for levy prior to it being sent to TOP to be matched against federal payments. We fully support the task force’s and IRS’s efforts to increase the amount of tax debt that is ready for immediate levy. FMS has contributed to the tax levy collection gap by not taking a proactive stance in overseeing and managing the levy program. GAO’s Standards for Internal Control in the Federal Government considers a positive control environment—which includes the establishment of mechanisms to monitor or oversee program operations—to be the foundation for all other standards. For FMS, such management control and oversight is critical in its role as the federal government’s debt collector and chief money disburser. However, because of a lack of oversight, FMS did not detect and have agencies correct obviously inaccurate information for tens of billions of dollars in payments to contractors, and therefore was not able to match these payments against tax debts for potential levy. Further, because of a lack of proactive management, FMS did not send tens of billions of dollars more in payments to the levy program. We estimate that these deficiencies resulted in at least $50 million in lost levy collections from civilian agency contractors during fiscal year 2004. Table 1 provides a breakdown of the deficiencies that result in payments not being subject to levy. Further discussion of these deficiencies will be provided in detail later in this report. In addition to these deficiencies, FMS also faces design challenges in the levy program that limit its effectiveness at collecting unpaid taxes. These challenges include the difficulty in matching the name of the contractor recorded in the payment files to the name recorded in IRS’s tax records and the difficulty in levying vendors paid with government purchase cards. FMS also has not implemented a provision of the American Jobs Creation Act of 2004, which allows the federal government to levy up to 100 percent of payments to contractors with unpaid federal taxes. FMS has not updated its TOP database to capture payments from about 150 agency paying stations, resulting in $40 billion of fiscal year 2004 civilian agency contractor payments being excluded from potential levy. Although effective internal control would generally include oversight of key agency functions, FMS did not perform the management oversight necessary to identify that a significant portion of all its disbursements was not included in the levy program. Of the $40 billion not sent to TOP, we determined that approximately $9 billion in payments were made to civilian contractors with tax debts, none of which could be levied.
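The following is a minimal sketch, with hypothetical paying-station identifiers and dollar amounts, of the kind of routine check that could flag payments originating from paying stations that have not been programmed into TOP. It is an illustration of the monitoring gap described above, not FMS's actual systems.

```python
# Hypothetical example: payments from paying stations that TOP does not know about
# can never be matched against tax debts, so a periodic comparison of each payment
# file against the list of registered stations would surface the gap.
registered_stations = {"STATION-001", "STATION-002", "STATION-003"}   # stations known to TOP

payments = [                                        # (paying station, payee TIN, amount), illustrative only
    ("STATION-001", "12-3456789", 250_000.00),
    ("STATION-117", "98-7654321", 1_400_000.00),    # station never added to TOP
    ("STATION-204", "55-1234567", 730_000.00),      # station never added to TOP
]

unmatched = [p for p in payments if p[0] not in registered_stations]
excluded_dollars = sum(amount for _, _, amount in unmatched)

print(f"{len(unmatched)} payments totaling ${excluded_dollars:,.2f} "
      "bypass the levy program because their paying stations are not in TOP")
```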
Federal agencies may have multiple offices or locations that perform accounting for and preparation of payment information, referred to as agency payment locations or paying stations. For a payment to be matched against tax debts for the purpose of levy, the paying station from which the payment originates needs to be programmed into the TOP database. If a paying station is not in the TOP database, TOP considers that location to be excluded from the levy program, and thus payments from that location will not be matched against unpaid federal taxes for potential levy. The approximately 150 paying stations not included in TOP are paying stations for portions of a majority of federal departments, including the Departments of Homeland Security, the Interior, Justice, State, the Treasury, and Health and Human Services. An FMS official stated that at the time FMS implemented TOP in the 1990s, it had a centralized monitoring system to verify that payments from all payment locations were included in TOP. According to the official, after the initial group of paying stations was incorporated into TOP, FMS did not take steps to ensure that the TOP database was up to date and that payments from new payment locations were incorporated into TOP. FMS was unaware that a large percentage of its disbursements were being excluded from potential levy. Since we brought the problem to their attention, FMS officials stated that efforts are under way to update TOP for the paying stations we identified as being excluded from the levy program. The officials also stated that they plan to reinstate the centralized monitoring to ensure that paying stations are updated in TOP so that payments from these stations would be available for potential levy. During fiscal year 2004, FMS disbursed over $17 billion in payments to civilian agency contractors without TINs or with obviously inaccurate TINs. Valid TIN information is critical to the levy program because payments lacking this information cannot be matched against tax debts. The DCIA requires executive agencies to obtain TINs from contractors and to include TINs on certified payment vouchers, which are submitted to Treasury. Without a proper TIN, payments cannot be levied. We found that payment records with blank or obviously inaccurate TINs in the TIN fields are prevalent in the payment files submitted to FMS by some agencies. For example, over half of payments at one agency and over three-quarters of payments at another agency were made to contractors that had blank or obviously erroneous TINs, such as TINs made up of all zeros or all nines. While certain vendors are exempt from the requirements to have a TIN, the exemptions are rare and are generally limited to foreign companies providing goods and services to federal agencies in a foreign country or companies performing classified work. However, FMS does not gather information to determine whether the payments to contractors without TINs or with obviously inaccurate TINs are exempt from the TIN requirement or whether all nonexempt payments include TINs. FMS officials stated that the responsibility for gathering and submitting TIN information was solely that of the paying agency. In subsequent audit efforts, we will evaluate selected agencies’ controls over obtaining and submitting TIN information for all nonexempt payments. FMS officials stated that FMS tabulates payment records with obviously inaccurate TINs by agency to compile a monthly TIN Compliance Report.
This report is used to monitor agencies that send in payment requests with obviously inaccurate TINs. According to FMS officials, in cases of significant noncompliance, FMS encourages agencies to send payment files with valid TINs. However, FMS does not enforce the TIN requirement by rejecting agency payments without TINs or requiring the agencies to certify that the payments not containing TINs meet one of the TIN exclusion criteria. As a result, agencies continue to submit payment requests without TINs, which cannot be levied to collect unpaid federal taxes. We found that some civilian agency contractors without TINs or with obviously inaccurate TINs in the agency payment files received payments during fiscal year 2004 and had unpaid federal taxes. For example, FMS paid about $700,000 to one contractor with an invalid TIN. Based on investigative work, we were able to determine that this contractor had failed to pay all its payroll taxes and owed more than $50,000 in unpaid taxes. Had the payment file contained a TIN and had the tax debt been subject to immediate levy, the government could have collected the full amount of unpaid taxes from this contractor during fiscal year 2004. FMS made disbursements of nearly $3.8 billion in fiscal year 2004 to contractors whose payment files did not contain the name of the contractor receiving the payment. We found that instead of the contractor’s name, the disbursement file name field was either blank or contained only numeric characters. The lack of a name on the payment file does not prevent the payment from occurring because FMS made these disbursements electronically via direct deposit into the contractors’ bank accounts. However, valid name information is critical because the levy program requires a match between both the name and TIN for a levy to occur. The lack of a proper name could have been detected if FMS had conducted a cursory review of the payment files submitted by the agencies. For example, our review readily identified that most of the payment files submitted by the State Department did not contain valid contractor names. In addition, about $3.2 billion of the nearly $3.8 billion we identified as payments made to contractors without names in the payment files were made on behalf of the State Department. Until we brought the matter to their attention, senior officials at both the State Department and FMS were not aware that the State Department’s contractor payments did not contain valid names. At our request, a State Department official investigated and found that the department’s payment systems did contain valid names but that a programming error resulted in the wrong field being sent to FMS as the name field. The official told us that the error in the payment file was not new because the structure of the payment file sent to FMS had remained the same since the 1980s. Once we brought this to the attention of State Department officials, they were quickly able to identify corrective actions, and according to the State Department, they have since corrected the deficiency. Our analysis of FMS payment data found that FMS made disbursements without contractor names, totaling approximately $400 million, to about 2,000 companies that had about $370 million in unpaid federal taxes. FMS’s failure to detect and correct missing names had a direct impact on the levy program. For example, one contractor with unpaid taxes received payments from the State Department totaling over $400,000, which could not be levied because of the missing name.
The same contractor also received payments from other civilian agencies. However, because the contractor’s name was included in the payment files from the other agencies, the levy program collected over $50,000 from those payments. If FMS or the State Department had identified and corrected the name problem, an additional $60,000 in unpaid federal taxes from this contractor could have been collected through the levy program. FMS disbursed $5 billion in payments using checks based on agency-submitted payment files that did not contain data in the payment type field during fiscal year 2004. FMS uses the payment type field in the agency-submitted payment files to determine if the payment is required to be included in the levy program. If a payment file record has a blank payment type field, it is not matched in the levy program to collect unpaid federal taxes. As a result, none of the $5 billion in payments we identified as having a blank payment type field could have been levied to collect the contractors’ unpaid federal taxes. FMS lacked the oversight to detect that the payment files submitted by agencies were not adequately coded. After we brought this to management’s attention, an FMS official stated that FMS planned to establish a new centralized program to monitor the completeness of agency information. FMS has not been proactive in including many categories of payments in the levy program, and has therefore kept tens of billions of dollars in contractor payments from being subject to potential levy collection. FMS uses several payment mechanisms to make its disbursements. FMS payment mechanisms (payment categories) include what it refers to as type A payments; type B payments, which include Automated Clearing House-Corporate Trade Exchange (ACH-CTX) payments; and Fedwire payments. However, FMS has only taken action to include a portion of type B payments in the levy program. FMS has not taken action to include the other categories of payments due to what it considers to be programming limitations. Therefore, none of those payments can be levied to collect unpaid federal taxes. Although it is responsible for the levy program, FMS also could not quantify the magnitude of federal contractor payments that it was not sending to the levy program, nor could FMS estimate the amount of levy collections it was missing because it had not included all payment categories in the program. FMS officials estimated that FMS paid about $11 billion in contractor payments via ACH-CTX in fiscal year 2004, and our analysis identified at least $15 billion in type A contractor payments. The combined amount of those two categories—$26 billion, though likely understated—represents almost 11 percent of all contractor disbursements recorded in FMS’s PACER database. In addition, FMS disbursed approximately $191 billion in Fedwire payments, but FMS could not identify the value of Fedwire contractor payments that were not sent to the levy program. FMS officials stated FMS had not included type A payments in the levy program because it is waiting for a new disbursement system to be deployed. Type A payments are payments whereby the agency certifies the payment in the same file that contains detailed payment information. Although FMS had performed some preliminary studies in 2001 regarding how to send type A payments to TOP, officials were unable to provide information regarding the cost of making system corrections.
At that time, FMS was developing a new payment system that it estimated would be completed as early as 2003 and therefore decided not to make the system changes. However, at the time of our audit, the new system was still not fully deployed. Consequently, over the last 4 years the federal government has lost the collections that could have been levied from those payments. FMS officials stated that FMS is continuing to focus on completing the deployment of a new disbursement system, which it now estimates will be fully operational in 2006, rather than including type A payments in its current system. FMS tentatively plans to incorporate type A payments into TOP in calendar year 2006 when its new system is scheduled to be operational. FMS officials stated that FMS does not send ACH-CTX payments to TOP for levy. According to FMS officials, ACH-CTX can be used to pay multiple invoices to a single contractor. However, the structure of the ACH-CTX payments requires that the total payment amount disbursed to the contractor match exactly the total of the invoices that the payment is to cover. If a levy were to take place, the total payment amount would differ from the total amount of the invoices that support the payment. Consequently, FMS officials stated that they cannot levy a portion of the payment. Officials stated that although they could not separately identify them in the PACER database, FMS made about $11 billion in ACH-CTX payments to contractors during fiscal year 2004. FMS officials stated they had not developed an implementation plan or timeline to incorporate ACH-CTX contractor payments into the levy program. As with type A payments, FMS officials stated that FMS is currently focused on completing a new disbursement system prior to incorporating Fedwire payments—payments requiring same-day settlement—into TOP. FMS officials recognized that Fedwire payments, as a whole, are not specifically exempt from levy, though individual Fedwire payments may be exempt. FMS officials stated that the decision to exclude Fedwire payments from the levy program was also based on the limited time window FMS has to send Fedwire payments to the Federal Reserve and the operational and system changes necessary to send those payments to TOP. FMS’s TOP implementation plan, dated January 2005, called for incorporating Fedwire payments into TOP in calendar year 2007, over 10 years after DCIA first required the establishment of a centralized offset program. However, FMS officials recently informed us that they are going to study the costs of submitting Fedwire payments to TOP and may not attempt to include them in the levy program. As a result, FMS officials stated that they no longer have a timeline to incorporate Fedwire payments into TOP. We recognize that submitting Fedwire to the levy program could result in a delay in disbursement, but until FMS fully explores and identifies options for submitting Fedwire payments through TOP, potentially billions of dollars may be disbursed to contractors with unpaid federal taxes without the possibility of being levied. Because payment systems do not identify whether the payment is being made to a business or individual, FMS does not offset contractor payments to collect the unpaid federal taxes owed by individuals. Our analysis determined that civilian agency contractors with unpaid federal taxes who are individuals received payments totaling nearly $2 billion while owing over $290 million in unpaid federal taxes. 
Agency payment records do not distinguish payments made to individuals, such as those who are self-employed or sole proprietors, from payments made to businesses. IRS decided that due to the lack of distinction between these two types of payments in FMS’s system and the possibility of improperly levying payments, contractor payments should not be levied to satisfy the unpaid federal taxes of individuals. According to IRS, an improper levy could occur because a business’s TIN could be the same as an individual’s Social Security number (the individual’s TIN). According to FMS officials, IRS instructed FMS not to match any contractor payments against unpaid federal taxes owed by individuals for potential levy following discussions between FMS and IRS. However, both FMS and IRS officials have indicated that the potential risk of an improper levy is small. For a levy to occur, a match must exist between the TIN and name in the payment files and the TIN and name control in the tax debt file. FMS indicated it has performed a study and found that only a small number of cases potentially have a business TIN and name that would match with an individual’s TIN and name. After we met with IRS and FMS officials regarding this issue, IRS directed FMS to begin levying contractor payments against tax debts owed by individuals. FMS officials stated that they will need to make system changes to implement this action. FMS faces management challenges in addressing certain limitations in the levy program that result in reduced collections. Specifically, almost $2 billion of contractor payments could not be levied due to difficulties in matching both the name and TIN in the payment records to the tax debt in the TOP database. Additionally, nearly $10 billion in federal payments made via purchase cards to contractors are not subject to levy because the government payment is made to the bank, not the contractor doing business with the government. Finally, FMS faces challenges in implementing a provision contained in the American Jobs Creation Act of 2004, which provides for increasing the amount of levy to a maximum of 100 percent of payments to contractors with unpaid tax debts. Potentially thousands of payments are not levied every week because the TINs and names from the payment records do not match against the names and TINs in TOP for potential levy. Data from FMS’s PACER and TOP databases indicate that about $1.7 billion of payments made to contractors with unpaid federal taxes in TOP could not be levied because the control name supplied by IRS did not match the payee name in PACER. As a result, none of these payments could be levied to collect delinquent tax debt. IRS provides TOP with both a TIN and a “control name” of both companies and individuals with unpaid federal taxes. In general, the control name is the first four characters of an individual’s last name or the first four characters of the business name. TOP analyzes the name in the payment files to determine if it contains the IRS control name. If it identifies the control name (first four characters of the IRS name) anywhere within the name field of the payment file, TOP levies the payment to collect the unpaid taxes. If the control name is not found in the payment record’s name field, TOP records the mismatch on a report that it sends to IRS to identify the mismatches. We reviewed an example of the report containing approximately 2,400 different payments that could not be levied to identify some of the causes for the mismatches. 
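Before turning to the specific causes of the mismatches, the matching rule described above can be summarized in a short sketch. The payee names, TINs, and control names below are hypothetical, and the code is an illustration of the rule as described in this report, not TOP's actual implementation.

```python
# A payment can be levied only if the TIN on the payment record matches a TIN in TOP
# and the four-character IRS control name appears somewhere in the payee name field.
# The TIN screen also reflects the obviously erroneous values (all zeros or all nines)
# discussed earlier in this report.
def tin_is_plausible(tin: str) -> bool:
    """Screen out blank or obviously erroneous TINs."""
    digits = tin.replace("-", "")
    return digits.isdigit() and len(digits) == 9 and digits not in ("000000000", "999999999")

def names_match(payee_name: str, control_name: str) -> bool:
    """The control name must appear anywhere within the payment record's name field."""
    return control_name.upper() in payee_name.upper()

def can_levy(payment_tin: str, payee_name: str, debt_tin: str, control_name: str) -> bool:
    return (tin_is_plausible(payment_tin)
            and payment_tin == debt_tin
            and names_match(payee_name, control_name))

# The TIN matches and the control name "ACME" appears in the payee name, so a levy can occur.
print(can_levy("12-3456789", "ACME FACILITIES GROUP LLC", "12-3456789", "ACME"))   # True
# The same TIN matches, but the payee name does not contain the control name, so no levy occurs.
print(can_levy("12-3456789", "A F G LLC", "12-3456789", "ACME"))                   # False
```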
We found that a number of payments were not levied because the payments were made using an individual’s name and the business’s TIN. The following hypothetical example based on an actual case illustrates the difficulty in matching names under the levy program. In one case, the payment was made to an individual doctor, J. Doctor, MD. However, the TIN provided was to the doctor’s practice, Jenny Doctor, MD PA. For IRS, the control name of the business TIN was “JENN.” As a result, although the TIN of the payment matched the TIN of the tax debt, the control name “JENN” did not appear within the payment name “J Doctor.” Because the names did not match, the payments to this contractor were not levied. After we brought this to FMS’s and IRS’s attention, IRS began working with FMS to increase the number of control names it sends to TOP. According to IRS officials, IRS is taking action to begin sending up to 10 additional business control names to FMS to be matched against payment data. IRS officials believed that this should increase the number of matches available under the levy program. IRS is also evaluating additional changes to increase the number of name controls that it sends to FMS for matching with payments to individuals. Due to the structure of the credit card program, whereby payments are made to the government purchase card bank and not directly to contractors with unpaid tax debts, none of the $10 billion in purchase card payments made during fiscal year 2004 were able to be offset or levied. FMS officials have acknowledged the need to address those challenges and stated that FMS has met with certain bank officials and another federal agency regarding how to approach the issues. However, they have not yet determined how to collect federal debts from contractors paid with the government purchase card. The Governmentwide Commercial Purchase Card Program was established to streamline federal agency acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from contractors. Governmentwide efforts to promote increased use of purchase cards for small and routine purchases have dramatically increased purchase card spending. As shown in figure 7, purchase card expenditures by civilian agencies increased from nearly $3 billion in fiscal year 1997 to nearly $10 billion in fiscal year 2004. The use of purchase cards has accrued significant benefits to the federal government; however, contractors receiving payments through purchase cards are not currently subject to the levy program. All purchase card payments are made to one of the five banks that issue government purchase cards—Bank One, Bank of America, CitiBank, Mellon Bank, or US Bank. In accordance with standard credit card payment procedures, those banks are responsible for interfacing with Visa or MasterCard and the contractor’s bank to pay for the goods or services provided. This payment process shields the identity of the contractor that is ultimately paid by the civilian agency receiving the goods or services from the levy program. Consequently, the disbursement file contains only the name of the purchase card issuing bank and its TIN and not the contractor that was actually doing business with the government. Without identifying the contractor doing business with the government, the federal government is unable to collect federal debts from payments to these contractors. 
To demonstrate the effect of payments to contractors using the purchase card, we obtained the National Aeronautics and Space Administration’s (NASA) fiscal year 2004 purchase card transactions and compared the contractors from which NASA purchased goods and services to the IRS unpaid taxes database. During fiscal year 2004, NASA used purchase cards to pay about 12,000 contractors nearly $80 million. According to IRS’s data on unpaid tax debts, over 750 of those contractors had about $440 million in unpaid federal taxes. However, none of the purchase card payments made to these contractors could be levied to collect the unpaid federal taxes. In contrast, in analyzing the TOP database, we found that non-purchase card payments made during fiscal year 2004 to 49 of these same contractors were levied. FMS recognizes purchase card payments as a significant problem for the government’s debt collection and lists the government purchase card program among the payment streams that need to be incorporated into TOP. FMS officials have stated they face both operational and legal issues to incorporate such payments into TOP and that the process of paying the purchase card issuing bank may prevent FMS from using TOP to collect from contractors paid with purchase cards. Until the challenge is thoroughly examined by FMS and IRS and solutions are identified, the federal government will continue to be unable to levy or otherwise collect from tens, if not hundreds, of billions of dollars in payments to civilian contractors. FMS has not fully implemented a new provision, authorized by Congress in 2004, that increased the maximum levy percentage on contractor payments. In October 2004, Congress passed the American Jobs Creation Act of 2004 to increase the maximum continuous levy from 15 percent to up to 100 percent of payments to contractors with unpaid taxes. The act specifically increased the continuous levy on payments to vendors for “goods and services” sold or leased to the government. According to IRS, the legal language, which specified that goods and services be subject to the 100 percent levy provision, excludes real estate, such as rent payments, from the new levy requirement. This exclusion presents significant implementation challenges for FMS because the civilian agencies’ payment systems cannot separately identify real estate transactions from other contractor payments. Without the ability to distinguish between these payments, FMS could not implement the new law for civilian payments in such a way as to exempt real estate transactions from the 100 percent levy. FMS officials stated they had recently been able to implement the 100 percent levy provision for certain DOD payments, but were unable to do so for their own disbursements. According to FMS and IRS officials, a specific legislative change is being sought to make real estate payments subject to the new 100 percent levy requirement. We estimate that increasing the levy percentage from 15 to 100 could cause a dramatic increase in collections. We performed a separate analysis of our maximum levy potential estimate as if there were no legal or administrative impediments—estimated at $350 million for a 15 percent levy—and found that if a 100 percent levy rate had been applied in fiscal year 2004, FMS could have collected as much as $800 million from civilian contractors if all payments had been matched against all tax debt.
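The arithmetic behind these estimates can be illustrated with a simple sketch of the per-payment levy rule: each levy takes the stated percentage of a payment, capped by the remaining tax debt, until the debt is satisfied. The payment amounts and tax debt below are hypothetical.

```python
# Worked sketch, with hypothetical amounts, of the continuous levy: each scheduled
# payment is reduced by the levy rate, or by the exact amount of tax still owed if
# that is smaller, until the debt is satisfied.
def total_levied(payments, tax_debt, rate):
    remaining = tax_debt
    collected = 0.0
    for payment in payments:
        if remaining <= 0:
            break
        levy = min(rate * payment, remaining)   # capped by what is still owed
        collected += levy
        remaining -= levy
    return collected

quarterly_payments = [40_000, 40_000, 40_000, 40_000]   # four contract payments to one contractor
unpaid_taxes = 50_000                                    # the contractor's tax debt

print(total_levied(quarterly_payments, unpaid_taxes, 0.15))   # 24000.0 collected at the 15 percent rate
print(total_levied(quarterly_payments, unpaid_taxes, 1.00))   # 50000.0, the full debt, at 100 percent
```

In this illustration, the 15 percent rate yields $24,000 against the $50,000 debt across all four payments, while a 100 percent rate recovers the full debt from the first two payments.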
We found abusive and potentially criminal activity related to the federal tax system for all 50 cases that we audited and investigated. The case studies were selected from the population of about 33,000 contractors that were receiving federal payments during fiscal year 2004 and owed over $3.3 billion in unpaid federal taxes as of September 30, 2004, using a nonprobability selection approach. The basis for selecting each of the case study contractors was that they all had unpaid taxes totaling more than $100,000 and federal payments totaling more than $10,000. When our audit and investigative work indicated that the 50 contractors we originally selected were related to other entities—defined as entities sharing the same owner or officer or common addresses—we performed additional work to determine whether the related entities and the owners owed tax debts as of September 30, 2004, and received other federal payments during fiscal year 2004. While we were able to identify some related entities, in some cases other related entities might exist that we were not able to identify. In addition, we found that 3 of the 50 case studies involve owners or officers who had been either convicted or indicted for non-tax-related criminal activities, or were under IRS investigation. We are referring the 50 cases detailed in this report to IRS so that a determination can be made as to whether additional collection action or criminal investigations are warranted. For more information on our criteria for the selection of the 50 case studies, see appendix I. The federal government is a large and complex organization, consisting of 15 cabinet-level agencies—one defense and 14 civilian agencies—and numerous independent agencies, administrations, and other entities that collectively spent more than $2.5 trillion in fiscal year 2004. Civilian agencies operate throughout the country and in more than 250 foreign countries, carrying out a multitude of missions and programs. Because civilian agencies contract for a large variety of goods and services to carry out functions as diverse as guarding the nation's borders, providing medical benefits to veterans, administering justice, and exploring space, it is not surprising that civilian agency contractors with unpaid taxes operate in a large number of industries. The industries are typically wage-based, and the businesses in the 50 case studies are mostly small, many of them closely held by their owners and officers. Table 2 shows a breakdown for the 50 contractor case studies by the type of goods and services provided. Our audits and investigations of the 50 case study business contractors showed substantial abuse and potential criminal activity related to the tax system. All 48 of the contractors in our case studies that file business tax returns had tax periods in which the contractors withheld taxes from their employees' paychecks but did not remit them to IRS. Rather, these companies diverted the money to fund business operations, for personal gain, or for other purposes. As discussed earlier in this report, businesses with employees are required by law to remit employment taxes to IRS or face potential civil or criminal penalties. Specifically, willfully failing to collect or pay any tax is a felony, while failing to comply with certain requirements for the separate accounting and deposit of withheld income and employment taxes is a misdemeanor.
Six of the case study businesses involved owners or officers who were "multiple abusers," that is, individuals involved with a group of related companies that owed taxes. The owners or operators of some of these businesses not only failed to have their businesses pay taxes, but several also failed to pay their own individual income taxes, with three individuals having more than $100,000 in unpaid individual income taxes. The related businesses involving these multiple abusers repeatedly failed to pay taxes. For example, several groups of related businesses owed taxes for more than 50 tax periods—one group of about 20 businesses owed taxes for over 300 tax periods. One case study business owner (whose businesses received more than $1 million in federal payments in fiscal year 2004) has a pattern of opening a business, failing to remit at least some payroll taxes, closing the business, and then opening a new business to repeat the same pattern. The owner repeated this pattern for at least three businesses over nearly 20 years. Table 3 highlights 10 case studies with unpaid payroll tax debts. Nine of the 10 cases have unpaid payroll taxes of 10 tax periods or more. The amount of unpaid taxes associated with these 10 cases ranged from nearly $400,000 to $18 million—6 businesses owed more than $1 million in unpaid federal taxes. Our investigations revealed that some owners have substantial personal assets—including commercial real estate, a sports team, or multiple luxury vehicles—yet their businesses fail to remit the payroll taxes withheld from employees' salaries. Several owners owned homes worth over $1 million—one owner had over $3 million and another had over $30 million in real estate holdings. Some owners informed our agents that they diverted payroll taxes they did not remit to IRS for personal gain or to fund their businesses, while others engaged in activities that indicated potential diversion of payroll taxes for personal gain. For example, one owner transferred the payroll taxes he withheld from employees to a foreign bank account and was using the money to build a home in that country, while another contractor doubled the salary of an officer in a 5-year period to over $750,000 at the same time that the business failed to remit payroll taxes and declared losses of more than $2 million. Another purchased a number of multimillion-dollar properties and an unrelated business at the same time that his many businesses owed taxes, while yet another owner purchased, within a 2-year period, four vehicles totaling nearly $200,000 after the owner's business started accumulating unpaid tax debts. IRS has taken some collection actions against the contractors in our case studies, but has not been successful at collecting the unpaid taxes. For example, we found that in all 10 cases shown in table 3, IRS has assessed trust fund penalties on the owners or officers for willful failure to remit to the government amounts they withheld from their employees' salaries. However, as we have previously reported, IRS seldom collects on trust fund penalties. As of September 30, 2004, the balance on the trust fund penalties owed by the owners or officers of the 10 case studies was over $19 million. IRS has also taken some collection actions against all 10 contractors, such as placing liens on the assets of the companies or owners.
Although some of the owners or officers had substantial assets, including expensive homes and luxury automobiles, the information we reviewed did not indicate that IRS had seized any of these assets. However, we identified that 3 of the 10 owners or officers had been convicted or indicted for non-tax-related offenses or were under active IRS investigation for tax-related offenses. The following provides illustrative detailed information on several of these cases. Case 1: This case includes many related companies that provide health care services to the Department of Veterans Affairs, for which they received over $300,000 in payments during fiscal year 2004. The related companies have different names, operate in a number of different locations, and use several other TINs. However, they share a common owner and contact address. The businesses collectively owed more than $18 million in tax debts—of which nearly $17 million is unpaid federal payroll taxes dating back to the mid-1990s. IRS has assessed a multimillion-dollar trust fund penalty on each of two officers for willful failure to remit payroll taxes. During the early 2000s, at the time when the owner's business and related companies were still incurring payroll tax debts, the owner purchased a number of multimillion-dollar properties, an unrelated business, and a number of luxury vehicles. Our investigation also determined that real estate holdings registered to the owner totaled more than $30 million. Case 2: This case comprises a number of related entities, all of which provide waste collection and recycling services. These entities received fiscal year 2004 payments from the Department of Justice totaling over $700,000, about half of which is from purchase card payments, while owing in aggregate over $2 million in tax debt. These taxes date to the late 1990s and consist primarily of payroll taxes. Although the company reportedly used legally available means to repeatedly block federal efforts to file liens against it, liens totaling more than $1 million exist against the company. IRS has also assessed trust fund penalties against the two officers. At the same time that the entities were incurring the tax debt, cash withdrawals totaling millions of dollars were made against the business's bank account. Further, since the company started owing taxes, the owner had sold real estate valued at over $1 million. The executives of these entities also drive late-model luxury or antique automobiles. Recently, the company started to make payments on its taxes. Case 3: This case includes several nursing care facilities, three of which owed taxes—primarily payroll taxes—totaling nearly $9 million. In addition, an owner's individual income tax debt totaled more than $400,000. One business provides nursing care services to the Department of Veterans Affairs, for which it was paid over $200,000 during fiscal year 2004. An officer of the company has been assessed a multimillion-dollar trust fund penalty for willful failure to remit payroll taxes and was recently arrested on fraud charges. Our investigative work indicates that an owner of the company made multiple cash withdrawals, each valued at tens of thousands of dollars, in the early 2000s while owing payroll taxes, and that those cash withdrawals were used for gambling. We further determined that cash transfers totaling over $7 million were made in a 7-month period in the early 2000s.
Case 7: This contractor provided guard and armed security services to the Department of Homeland Security and the Department of Veterans Affairs, for which it was paid over $200,000 during fiscal year 2004. This business has a history of noncompliance with federal tax laws. Specifically, the business has been consistently delinquent in paying its taxes since the late 1990s and has not filed all of its income and payroll tax returns for a number of years since the late 1990s. The owner of this business also has not filed individual income tax returns for a number of years since the late 1990s. In the last 1-year period in which the business made payroll tax deposits, the business reported that it owed nearly $80,000 in payroll taxes but made payments totaling less than $4,000—about one-twentieth of the taxes owed. At the same time that the owner withheld but failed to remit payroll taxes, the owner diverted the money into a foreign bank account to build a house overseas. Case 8: During fiscal year 2004, this company provided consulting services to the Smithsonian Institution, for which it received over $200,000. Starting in the late 1990s, the company did not remit to the government all the money it withheld from its employees' salaries. However, at about the time the company was failing to remit the taxes, it nearly doubled one officer's salary to over $750,000. IRS assessed a trust fund penalty on the officers of this company for willfully failing to remit payroll taxes withheld from their employees' salaries. Those officers own homes valued at millions of dollars in exclusive neighborhoods in a large metropolitan area and several late-model luxury vehicles. In addition to problems with paying federal taxes, contractors in at least 9 of the 10 case studies had unpaid state and/or local tax debt. We determined that the amount and severity of the unpaid state and/or local taxes were significant enough for state and local tax authorities to file liens against those contractors. As we will be reporting in a related product, neither the states nor FMS has pursued potentially beneficial agreements to authorize the levying of federal payments, including contractor payments, to satisfy delinquent state tax debts. The 50 case studies we selected illustrate FMS's inability to collect the maximum levy amount. Although we found that payments to a number of contractors were not levied because IRS excluded their tax debts from TOP for at least a part of fiscal year 2004 for statutory or policy reasons, many others were not levied because of FMS's lack of effective oversight or proactive management of the levy program. One case study contractor in particular illustrated the problems associated with the levy program that we discussed earlier in this report. This contractor received $4 million during fiscal year 2004, but only about $600,000 of those payments were levied. Of the remaining $3.4 million that was not levied, about two-thirds was not levied because the tax debt either was not referred to TOP or was referred but was still in the notice process during the first 7 months of fiscal year 2004. The remaining one-third was not levied because the name provided in the payment files did not match the IRS control name in TOP or because payments were made using one of FMS's specialized payment mechanisms. We estimate that if all the tax debt and all of the payments of the 50 case studies were subjected to a levy of 15 percent, FMS could have collected about $3.8 million in unpaid federal taxes in fiscal year 2004.
In contrast, FMS actually collected $240,000 from these case study contractors. In the current environment of federal deficits and rising obligations, the federal government cannot afford to leave hundreds of millions of dollars in taxes uncollected each year. However, this is precisely what has been occurring with respect to the FPLP, which our work shows has largely failed to achieve its potential. The levy program has thus far been inhibited from achieving its potential primarily because substantial tax debt is not subject to levy and because FMS, the nation's debt collector, has exercised ineffective oversight and management of the program. Further, by failing to pay taxes on their income or diverting the payroll taxes withheld from their employees' salaries to fund business operations or their own personal lifestyles, contractors with unpaid tax debts effectively decrease their operating costs. The lower operating costs provide these individuals and their companies with an unfair competitive advantage over the vast majority of companies that pay their fair share of taxes. Federal contractors should be held to a higher degree of responsibility to pay their fair share of taxes owed because they are being paid by the government, and the failure to effectively enforce the tax laws against them encourages noncompliance among other contractors as well. The federal government will continue to lose hundreds of millions of dollars in tax collections annually until actions are taken to send all payments to the levy program, ensure that all payments have the information necessary to allow them to be levied, and establish a proactive approach toward managing the levy program. To comply with DCIA, further implement the Taxpayer Relief Act, and support the federal government's efforts to collect unpaid federal taxes, we recommend that the Commissioner of the Financial Management Service take the following 18 actions: To obtain reasonable assurance that payments from all paying locations are subjected to potential levy in TOP, update the TOP database to include payments from all agency paying locations in TOP for potential levy and develop and implement a monitoring process to ensure TOP's list of agency paying locations is consistently updated. To obtain reasonable assurance that payment files contain a TIN for each payment requiring a TIN, enforce requirements that federal agencies must include TINs on all payment vouchers submitted to FMS for disbursement or expressly indicate that the contractor meets one of the criteria that exempts the contractor from providing a TIN and develop and implement procedures to review payments submitted by paying agencies to verify that each payment has either a TIN or a certification that the contractor is exempt from providing a TIN. To obtain reasonable assurance that all payment files submitted by agencies contain a contractor's name, develop procedures to evaluate payment files to identify payments with blank or obviously inaccurate names in the name field; notify agencies of deficiencies in payment files regarding blank or obviously inaccurate name fields; collaborate with agencies submitting payment files with blank or obviously inaccurate names in the name field, including the State Department, to develop and implement procedures to capture the contractors' names in the payment files; and reject agency requests for payments with blank or obviously inaccurate names.
To obtain reasonable assurance that payment files contain a payment type and thus, if appropriate, are subject to a levy, instruct all agencies that they must indicate a payment type on all payment vouchers and implement monitoring procedures to verify that all payments indicate a payment type. To obtain reasonable assurance that all categories of eligible payments to contractors with unpaid federal taxes are subjected to the TOP levy process, develop and implement procedures to submit type A payments to TOP for potential levy, develop and implement procedures to submit ACH-CTX payments to TOP for potential levy, and develop and implement procedures to submit Fedwire payments to TOP for potential levy. To collect unpaid taxes of individuals, make changes to TOP to levy contractor payments to collect the unpaid federal taxes owed by individuals. To ensure that more payments are matched against tax debt in TOP, take actions necessary to incorporate IRS's expanded list of control names into TOP. To address challenges of collecting unpaid taxes of contractors paid using purchase cards, in conjunction with IRS, monitor payments to assess the extent to which contractors paid with purchase cards owe federal taxes and assess alternatives available to levy or otherwise collect unpaid taxes from those contractors. To address challenges associated with implementing the authorized increase of the levy to 100 percent, work with IRS to determine steps necessary to implement the increased levy percentage. Finally, we recommend that the Commissioner of Internal Revenue evaluate the 50 referred cases detailed in this report and consider whether additional collection action or criminal investigations are warranted. We received written comments on a draft of this report from the Commissioner of Internal Revenue (see app. III) and the Commissioner of the Financial Management Service (see app. IV). In responding to a draft of our report, IRS agreed that continued efforts are needed to improve and enhance the use of the levy program as a tool to deal with contractors that abuse the tax system. IRS noted that it had taken or was taking a number of actions toward this goal. For example, IRS stated that it had begun, with DOD's assistance, to issue collection due process notices to DOD contractors at the time of contract award rather than after a contract payment is made, thereby allowing IRS to levy more DOD contractor payments without delay. IRS stated that it planned to expand this process to contractors at other agencies later in 2005. IRS also stated that it is working to change its notice process so that more debts can be ready for levy at the time of inclusion in TOP. IRS reiterated the progress it has made to remove systematic exclusions, resulting in an additional $28 billion in tax debts being included in the FPLP, which we noted in our report. These actions have resulted in the federal government collecting, in the first 7 months of fiscal year 2005, $12.2 million in unpaid tax debts from civilian contractors—a nearly threefold increase from the same period in fiscal year 2004. IRS further stated that it would continuously evaluate its policies so that it does not unnecessarily exclude tax debts from the levy program.
IRS concurred with our finding that the matching of the TIN and name of contractor payments against records of unpaid federal taxes could be improved, and stated that it will begin sending a greater number of control names—up to 10 variations of the contractor’s name as recorded in IRS’s files—to FMS to match against FMS’s payment data. IRS also stated that it was working to develop a consent-based TIN verification system for contractors doing business with the federal government and that it anticipated implementation of this system later this year. We believe that the completion of these actions can significantly improve collections of outstanding federal tax debt through the levy program. With respect to the report’s recommendations, IRS agreed to work with FMS and other agencies through the Federal Contractor Tax Compliance Task Force (FCTC) to conduct further analysis of the significant challenge presented by contractors paid with purchase cards. IRS also stated that as of April 2005, the 100 percent levy provision had been implemented with respect to DOD contractors paid through DOD’s largest payment system, and that IRS was working with Treasury on a technical correction to allow the 100 percent levy on all federal contractors. Finally, IRS agreed with our recommendation to review the 50 contractors discussed in our report to determine what additional actions are warranted. In its response to a draft of our report, FMS generally agreed with many of our findings and recommendations. However, FMS stated that we mischaracterized its role in the levy process, and that primary responsibility rests with IRS. FMS also did not concur with our conclusions that its oversight and management of the program were ineffective. Additionally, FMS disagreed that it had not fully implemented the legislatively authorized increase in the maximum amount of contractor payments subject to levy. FMS also stated that it disagreed with our recommendation that it withhold payments that do not include a valid name and stated that it was not in a position to implement our recommendations with respect to working with IRS regarding issues associated with collecting outstanding federal tax debt from purchase card payments. Finally, FMS stated that the numbers and potential levy collection amounts presented in the report were confusing and could be misleading. We do not believe we mischaracterized FMS’s role in the levy process. On its Web site, FMS states that it “serves as the government’s central debt collection agency, managing the government’s delinquent debt portfolio.” In our opinion, the agency that is responsible for managing the government’s delinquent debt portfolio needs to do so in a proactive manner, which we did not always find to be the case. While we agree that IRS has a key role in the levy process, many of the issues in our report touch at the heart of FMS’s debt collection responsibilities and most of the weaknesses and challenges discussed in this report can only be addressed by FMS. For example, it was FMS that did not send billions of dollars of payments to the levy program because it had no monitoring mechanism in place to determine that over 100 agency paying locations created since the late 1990s were not included in the levy program. 
Further, it was FMS that did not identify and inform agencies to correct payment information for tens of billions of dollars in payments that did not have the basic information necessary for the payments to be matched against outstanding federal tax debt for potential levy. These findings form the basis of our conclusion that FMS has not exercised effective oversight and management of the levy program. Despite the issues raised in our report, which FMS did not dispute, it disagreed that its management of the program was ineffective. FMS pointed to increased collections from the levy program in fiscal years 2003, 2004, and 2005, to date, as evidence of excellent leadership and program management. However, the recent increase in collections in the levy program is primarily the result of actions stemming from the formation of the FCTC, which was created in response to issues we raised in our February 2004 report on DOD contractors that abused the tax system. Further, the actions that have led to the increased collections were taken by DOD and IRS. Finally, while collections have increased in the last 3 years, the annual totals to date have not been significant given the potential of the program and, in the context of the program's 8-year life, the annual increases have come about only very recently. In its response, FMS stated that it is not normally in a position to mandate changes to agencies. We disagree. FMS is in a unique position to identify and help correct many of the issues we identified in the program, some of which are relatively simple and could be quickly addressed. For example, it took the Department of State (State) about a month to correct the problem we identified with respect to missing names in the payment file it had been submitting to FMS for payment once we brought the matter to the department's attention. A programming error appears to have resulted in the names not being in the disbursement files sent to FMS. According to a State official, the department has likely had names in its payment files since the 1980s, and it did not know that the names were not getting to FMS. Because of State's responsiveness to our finding, FMS is now levying payments to State's contractors with unpaid taxes. Had FMS provided effective oversight and management of the debt collection program, it could have detected the problem years ago and worked with the State Department to correct it long before our audit. While we agree that agencies should be responsible for the completeness and accuracy of the payment files they send to FMS, we believe FMS should take a more proactive role in identifying issues that impede the program's ability to maximize collections and work with agencies to resolve such issues. In responding to our report, FMS disagreed with our conclusion that it had not implemented the provision of the American Jobs Creation Act of 2004 authorizing an increase in the maximum amount of contractor payments subject to levy of up to 100 percent. FMS noted that it had made the changes necessary in the levy program to allow for levying at 100 percent, but that it was unable to implement the provision because civilian agencies' payment records do not separately identify real estate transactions—which are not subject to the 100 percent levy—from other contractor payments.
Our report clearly indicates that the 100 percent levy provision had not yet been fully implemented because of a number of challenges, including the determination by IRS that real estate transactions are not subject to the 100 percent levy provision and the fact that agency pay systems are presently unable to distinguish real estate transactions from other contractor payments. We also acknowledged in our report that a legislative change is being sought to subject real estate payments to the 100 percent levy provision. Our report describes this issue not as a weakness in the program but, rather, as another challenge that FMS faces in maximizing collections under the levy program. Our report also acknowledges that certain DOD payments are already being levied at the 100 percent maximum. FMS also did not concur with our recommendation to withhold payments that do not include a valid name in the payment record. However, FMS said it would improve monitoring and ensure agencies' compliance with the requirement to include names, TINs, and payment types on certified vouchers. This is in line with our recommendation, and we commend FMS for its willingness to increase efforts to enforce the requirements. As the State Department's prompt response to our findings indicates, when weaknesses are identified, such as records without payee names, agencies can take corrective actions, thereby making it unnecessary to withhold payments. However, FMS has had many years to require agencies to improve the data in their payment records but has, until now, not done so. As we point out in the report, in 1997 FMS proposed a rule that would require disbursing officials to reject agency payment requests that do not contain TINs (that is, withhold the payment), yet later rescinded the proposed rule and instead required agencies to submit to FMS implementation plans to achieve compliance with the TIN requirement. Although FMS requested the implementation plans in 1997, it has not been successful in gaining agency compliance. We believe that if FMS had been more proactive, the intervening years since 1997 would have provided FMS and the agencies ample opportunities to take corrective action. As such, we continue to believe FMS needs to take stronger leadership in enforcing the requirements with respect to the completeness and accuracy of information in agency-submitted payment files. In its response, FMS accurately summarizes some of the challenges that we described in the draft report regarding levying government purchase card payments. These challenges are precisely why we recommended that FMS work with IRS to arrive at a solution for subjecting the roughly $10 billion in annual purchase card payments made to civilian agency contractors to potential levy or another form of collection. However, FMS suggested that we instead redirect the recommendation to have GSA work with IRS. In addition, FMS pointed out that the FCTC could also provide valuable assistance in determining the most efficient and effective means of addressing contractors that have unpaid taxes and are being paid via government purchase cards. While we agree that GSA could assist FMS and IRS with this challenge, we believe that as the government's central debt collector, FMS should assume a leadership role in emerging issues such as the rise in purchase card payments, which has significant implications for FMS's debt collection responsibilities.
In our opinion, FMS is the only federal entity with the ability to identify which contractors receiving federal payments have leviable tax debt. This is a role FMS plays when it compares the TIN and the name on FMS payments to the list of contractors with unpaid taxes to determine whether the payment should be levied. If FMS worked with the five banks that currently issue government purchase cards to routinely obtain electronic files listing the contractors being paid with purchase cards, FMS could determine which contractors paid with a government purchase card have unpaid taxes. Consequently, we continue to believe that FMS, in conjunction with IRS, would be in the best position to monitor purchase card payments and assess the extent to which contractors paid with purchase cards have unpaid federal taxes, and then to identify solutions to the challenges presented by purchase card payments. Finally, in its response, FMS stated that the numbers and potential levy collection amounts presented in the report are confusing and potentially misleading. Specifically, FMS stated that our reporting of the levy collection gap of $350 million was misleading because it suggested that FMS would be able to collect that amount through the levy program. In our report, we have taken care to clearly note that the levy collection gap is an indicator of the amount of tax debt civilian contractors owe that could be levied from the payments they receive from the federal government if all payments for which we have information could be levied against all outstanding federal tax debt. We further note throughout the report that because some tax debts are excluded due to specific statutory requirements, IRS and FMS are presently restricted by law from collecting a significant portion of this estimated amount. We do, however, clearly identify a portion of the levy collection gap—at least $50 million—that is directly attributable to weaknesses in internal controls and lack of proactive management at FMS. This amount is understated due to the unavailability of Fedwire information at FMS and because we were unable to estimate collections against many payments that did not contain valid TINs and payment types. FMS's response does not recognize that although IRS has a key responsibility to refer tax debts, FMS has an equally key responsibility—to make all payments available for levy. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of the Financial Management Service, the Commissioner of Internal Revenue, and interested congressional committees and members. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov or Steven J. Sebastian at (202) 512-3406 or sebastians@gao.gov if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.
To identify the magnitude of unpaid taxes owed by contractors receiving payments from federal agencies disbursed by the Financial Management Service (FMS), we obtained information from both the Internal Revenue Service (IRS) and FMS. To identify taxes owed, we obtained IRS’s unpaid assessment database as of September 30, 2004. To identify disbursements FMS made to contractors, we obtained from FMS extracts of the Payments, Claims, and Enhanced Reconciliation (PACER) database containing data on payments FMS made to contractors via Automated Clearing House (ACH) and by check during fiscal year 2004. PACER contains information such as payee and payment amount for disbursements FMS makes on behalf of federal agencies. To determine the amount of levies that have been collected and the amount of tax debt that has been referred to the Treasury Offset Program (TOP), we obtained from FMS the TOP database as of September 30, 2004. As discussed later in this appendix, we first performed work to assess the reliability of the data provided. To determine the value of unpaid taxes owed by contractors, we matched PACER disbursements coded as “vendor” to the IRS unpaid assessment database using the tax identification number (TIN) field in both databases. This match resulted in the identification of about 63,000 contractors with more than $5.4 billion in unpaid federal taxes. To avoid overestimating the amount owed by contractors with unpaid tax debts and to capture only significant tax debts, we excluded from our analysis tax debts and payments meeting specific criteria to establish a minimum threshold in the amount of tax debt and in the amount of payments to be considered when determining whether a tax debt is significant. The criteria we used to exclude tax debts and payments are as follows: tax debts that IRS classified as compliance assessments or memo accounts for financial reporting, tax debts from calendar year 2004 tax periods, contractors with total unpaid taxes of $100 or less, and contractors with cumulative fiscal year 2004 payments of $100 or less. The criteria above were used to exclude tax debts that might be under dispute or generally duplicative or invalid, tax debts that are recently incurred, and tax debts and payments that are insignificant for the Federal Payment Levy Program (FPLP). Specifically, compliance assessments or memo accounts were excluded because these taxes have neither been agreed to by the taxpayers nor affirmed by the court, or these taxes could be invalid or duplicative of other taxes already reported. We excluded calendar year 2004 tax debts to eliminate tax debt that may involve matters that are routinely resolved between the taxpayer and IRS, with the taxes paid or abated within a short period. We further excluded tax debts and cumulative fiscal year 2004 payments of $100 or less because they are insignificant for the purpose of calculating potential levy collection. Using the above criteria, we identified about 33,000 contractors with over $3.3 billion in unpaid taxes as of September 30, 2004. To determine the potential fiscal year 2004 levy collections, we used 15 percent of the payment or total tax debt amount, whichever is less. Our analysis was performed as if (1) all unpaid federal taxes were referred to FMS for inclusion in the TOP database and (2) all fiscal year 2004 disbursements for which FMS maintained detailed information were included in TOP for potential levy. 
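The following sketch outlines, in simplified form, the matching and screening steps just described: payments and tax debts are joined by TIN, the exclusion criteria are applied, and the potential levy is computed as the lesser of 15 percent of payments or the total tax debt. It is written in Python; the record layout and figures are illustrative only, and the actual extracts contain many more fields and millions of records.

from dataclasses import dataclass

@dataclass
class MatchedContractor:
    tin: str
    fy2004_payments: float    # total PACER disbursements coded as "vendor"
    unpaid_taxes: float       # IRS unpaid assessments as of September 30, 2004
    compliance_or_memo: bool  # debt classified as a compliance assessment or memo account
    cy2004_only: bool         # all debt arose from calendar year 2004 tax periods

def is_significant(c):
    # Apply the exclusion criteria described above.
    if c.compliance_or_memo or c.cy2004_only:
        return False
    return c.unpaid_taxes > 100 and c.fy2004_payments > 100

def potential_levy(contractors, rate=0.15):
    # Sum, over contractors that pass the screening, the lesser of the levy
    # share of their payments or their total tax debt.
    return sum(min(rate * c.fy2004_payments, c.unpaid_taxes)
               for c in contractors if is_significant(c))

sample = [
    MatchedContractor("12-3456789", 250_000.0, 1_200_000.0, False, False),
    MatchedContractor("98-7654321", 80.0, 45_000.0, False, False),  # excluded: payments of $100 or less
]
print(potential_levy(sample))  # 37500.0

Raising the rate argument to 1.00 reproduces, in miniature, the 100 percent calculation discussed in the next paragraph.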
Because some tax debts are excluded from the FPLP due to statutory exclusions, a gap will continue to exist between what could be collected and the maximum levy amount calculated. However, as discussed in the body of the report, the potential levy collection amount of $350 million may be understated because we excluded, by design, specific tax debts and payment amounts from the calculation of levy, and missing data in FMS’s disbursement information prevented us from providing the full magnitude of tax debts and potential levy collection. The American Jobs Creation Act of 2004 provided for a 100 percent levy on vendor payments for goods or services sold or leased to the federal government, effective October 2004. If unpaid tax debts and payments to contractors in future years remain consistent with fiscal year 2004 patterns, we determined a potential future levy amount based on a levy ratio of 100 percent of payments or total tax debt amount, whichever is less. To determine the effect of IRS and FMS policies and procedures on the amounts actually collected through the FPLP, we conducted work at both agencies related to their respective roles in the implementation of the FPLP. At IRS, we interviewed agency officials and obtained documentation that detailed the statutory requirements and policy and administrative decisions that exclude certain tax debts from the FPLP. We did not evaluate the accuracy and reasonableness of these exclusions, which will be examined in detail in a later report. At FMS, we reviewed documentation and interviewed agency officials to obtain an understanding of FMS’s FPLP policies, implementing guidance, operating procedures, and internal controls related to the TOP and disbursement operations. We also visited the San Francisco Regional Finance Center where we observed work flow processes. We obtained a copy of the TOP database as of September 30, 2004. The TOP database contains all debt, including tax debt, referred to it by federal agencies, including IRS. FMS uses the TOP database for levying contractor payments. As discussed later, we performed work to assess the reliability of data in TOP. To identify payments to contractors disbursed through the government purchase card, we obtained from the Bank of America the database of purchase card payments made by the National Aeronautics and Space Administration (NASA). We reconciled control totals for this data with Bank of America and the General Services Administration. We restricted purchase card data to one agency to demonstrate the magnitude and effect of issues surrounding levying purchase card payments. To identify indications of abuse or potential criminal activity, we selected 50 civilian contractors for a detailed audit and investigation. The 50 contractors were chosen using a nonprobability selection approach based on our judgment, data mining, and a number of other criteria. Specifically, we narrowed the 33,000 contractors with unpaid taxes based on the amount of unpaid taxes, number of unpaid tax periods, amount of FMS payments, indications that owner(s) might be involved in multiple companies with tax debts, and representation of these contractors across government. We specifically included contractors from NASA and the Departments of Homeland Security (Transportation Security Administration), Justice, State, and Veterans Affairs. 
These agencies were selected based on a number of criteria: national security concerns; amount of payments to contractors, especially those with tax debts; amount of payments made without TINs, names, or both; amount of levy collected; and amount of payments made with blank pay types. The reliability of TINs and contractor names, and whether the agencies' payment systems are sufficiently integrated to maximize levy collection, will also be covered in later work. We obtained copies of automated tax transcripts and other tax records (e.g., revenue officer's notes) from IRS as of December 2004 and reviewed these records to exclude contractors that had recently paid off their unpaid tax balances and considered other factors before reducing the number of businesses to 50 case studies. We performed additional searches of criminal, financial, and public records. In cases where record searches and IRS tax transcripts indicated that the owners or officers of a business are involved in other related entities that have unpaid federal taxes, we performed detailed audit and investigative work on the related entities and the owner(s) or officer(s), and not just the original business we identified. In instances where related entities exist, we defined a case study to include all the related entities, and reported on the combined unpaid taxes and combined fiscal year 2004 payments for the original business and all the related entities. We identified civilian agency contract awards using the Federal Procurement Data System. Our investigators contacted some contractors and performed interviews. In addition, while assessing the reliability of the data provided by FMS, we identified nearly $17 billion in payments that contain either no TIN or an obviously inaccurate TIN. To determine whether contractors with no TINs or obviously inaccurate TINs had tax debts, we used investigative techniques to identify some of those contractors' TINs and, through comparison with the IRS records of unpaid taxes, we determined whether those contractors owed tax debts. On May 9, 2005, we requested comments on a draft of this report from the Commissioner of Internal Revenue or his designee and from the Commissioner of the Financial Management Service or his designee. We received written comments from the Commissioner of Internal Revenue dated May 27, 2005, and from the Commissioner of the Financial Management Service dated May 25, 2005, and reprinted those comments in appendixes III and IV of this report. We conducted our audit work from May 2004 through May 2005 in accordance with generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President's Council on Integrity and Efficiency. For the IRS database we used, we relied on the work we perform during our annual audits of IRS's financial statements. While our financial statement audits have identified some data reliability problems associated with the coding of some of the fields in IRS's tax records, including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address the report's objectives. Our financial audit procedures, including the reconciliation of the value of unpaid taxes recorded in IRS's masterfile to IRS's general ledger, identified no material differences.
For PACER and TOP, we interviewed FMS officials responsible for the databases and reviewed documentation provided by FMS supporting quality reviews performed by FMS on its databases. In addition, we performed electronic testing of specific data elements in the databases that we used to perform our work. Based on our review of FMS's documents and our own testing, we concluded that the data elements used for this report are sufficiently reliable for the purpose of this report. In instances where we found problems with the data, such as data with missing TINs and names, we include those in this report. We also compared the PACER data to the President's budget and the TOP data to the IRS unpaid assessment file. Table 3 provides data on 10 detailed case studies. Table 4 provides details of the remaining 40 businesses we selected as case studies. As with the 10 cases discussed in the body of this report, we also found substantial abuse or potentially criminal activity related to the federal tax system during our review of these 40 case studies. The case studies primarily involve businesses with unpaid payroll taxes, some for as many as 35 tax periods. IRS has imposed trust fund penalties for willful failure to remit payroll taxes on the officers of 17 of the 40 case studies. In addition to owing federal taxes, 28 of these 40 case study contractors owed sufficient state tax debts to warrant state tax authorities to file liens against them. As we have done in the body of the report, in instances where the business we selected also had related entities, we considered the business and all related entities as one case study and reported the civilian agency payments and unpaid federal tax amount for all related entities in the table. The following individuals made major contributions to this report: Beverly Burke, Ray Bush, Richard Cambosos, William Cordrey, Francine Delvecchio, F. Abe Dymond, Paul Foderaro, Alison Heafitz, Kenneth Hill, Aaron Holling, Jason Kelly, John Kelly, Rich Larsen, Tram Le, Mai Nguyen, Kristen Plungas, Rick Riskie, John Ryan, David Shoemaker, Sid Schwartz, Esther Tepper, Tuyet-Quan Thai, Wayne Turowski, Matt Valenta, Scott Wrightson, and Mark Yoder.
Tax abuses by contractors working for the Department of Defense, on which GAO previously reported, have led to concerns about similar abuses by those hired by civilian agencies. GAO was asked to determine if similar problems exist at civilian agencies and, if so, to (1) quantify the amount of unpaid federal taxes owed by civilian agency contractors paid through the Financial Management Service (FMS), (2) identify any statutory or policy impediments and control weaknesses that impede tax collections under the Federal Payment Levy Program (FPLP), and (3) determine whether there are indications of abusive or potential criminal activity by contractors with unpaid tax debts. FMS and IRS records showed that about 33,000 civilian agency contractors owed over $3 billion in unpaid federal taxes as of September 30, 2004. All 50 civilian agency contractors we investigated engaged in abusive and potentially criminal activity. For example, businesses with employees did not forward payroll taxes withheld from their employees to IRS. Willful failure to remit payroll taxes is a felony under U.S. law. Further, several individuals own multiple businesses with unpaid federal taxes--one individual owns about 20 businesses that did not fully pay taxes related to over 300 returns. Some contractors purchased or owned millions of dollars of property while failing to remit payroll taxes. These activities were identified for contractors at the Departments of Justice, Homeland Security, and Veterans Affairs; the National Aeronautics and Space Administration; and other agencies. GAO's analysis indicates that if all tax debts owed by, and all payments made to, the 33,000 contractors were included in the FPLP, FMS could have collected hundreds of millions of dollars in fiscal year 2004. However, because only a fraction of all unpaid taxes and a portion of FMS payments are subjected to the levy program, FMS actually collected only $16 million from civilian contractors. For example, about $171 billion of unpaid federal taxes were not sent to the levy program to be offset against payments because of specific statutory requirements or IRS policy exclusions, such as debtors' claims of financial hardship or bankruptcy. Tens of billions of dollars in federal payments were not compared against tax debts for potential levy because FMS did not proactively manage and oversee the levy program. Until we brought it to FMS's attention, FMS was unaware that it had not submitted $40 billion of contractor payments from some civilian agencies for potential levy. FMS also did not identify payment files that did not contain contractor tax identification numbers, names, or both, resulting in $21 billion in payments to contractors that could not be levied. FMS also excluded billions of dollars from levy because of what it considered programming limitations without taking proactive steps to overcome those limitations. Further, civilian agency purchase card payments to contractors totaling $10 billion could not be levied. Improvements at FMS could result in tens of millions of dollars of additional levies annually.
The PARIS interstate match program was initiated to help state public assistance agencies share information with one another. Its primary objective is to identify individuals or families who may be improperly receiving benefits, or having duplicate payments made on their behalf, in more than one state. In this voluntary project, the participating states agree to share eligibility data on individuals who are receiving TANF, Food Stamps, Medicaid, or benefits from other state assistance programs. The participating states are primarily responsible for the day-to-day administration of PARIS, and each state designates a coordinator for the project. PARIS uses computer matching to identify improper benefit payments involving more than one state. This process entails comparing participating states' benefit recipient lists with one another, using individuals' SSNs. Other items of information are included in the files that the states share, such as the individual's name, date of birth, address, case number, public assistance benefits being received, and dates that benefits were received. Matches are conducted by the Defense Manpower Data Center (DMDC) in February, May, August, and November of each year. DMDC produces a file of all the SSNs on the list submitted by the participating state that are the same as the SSNs appearing on the list of some other state and provides the matched records, known as match hits, to ACF, which forwards them to the appropriate states. To be considered a working member of PARIS, states agree to participate in at least the August match each year. Once the participating states receive the file of matched SSNs from DMDC, they are expected to send the match hits to the appropriate staff for follow-up or investigation. The staff may take a number of steps to verify information that affects individuals' eligibility for benefits. These steps include requiring an individual to come into the office to show proof of residency and contacting other states to verify whether the individual is still receiving benefits from those states. Improper benefit payments may be made because of client error, agency administrative error, or fraud and abuse. A client error might occur when an individual receiving program benefits in one state moves to another state but fails to report the move to program authorities. An administrative error could occur when a local benefit worker is informed that the recipient is moving out of the state but fails to update the record. Without PARIS matching, such errors might not be detected until the individual is asked to reverify program eligibility, which could occur as much as a year later. Additionally, the reverification of eligibility might not detect fraud or abuse when a person deliberately obtains benefits in more than one state by providing false information to program authorities. If, after investigating the match hit, state or local officials determine that an individual is improperly receiving public assistance benefits in their state, they may initiate action to cut off benefits. In general, to protect the rights of the recipients, administrative due process requirements must be followed before benefits can be cut off. For example, an individual may be given up to 30 days to respond to a formal notice that benefits will be stopped. Moreover, if the recipient can demonstrate that he or she is residing in the state and is eligible for assistance, then benefits may be continued or reinstated.
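A simplified sketch of the quarterly matching step described above follows, written in Python. The state abbreviations, SSNs, and record layout are hypothetical; actual PARIS files carry additional fields such as name, date of birth, address, and case number.

from collections import defaultdict

# Each participating state submits a file of current benefit recipients.
state_files = {
    "MD": [{"ssn": "111-22-3333", "benefits": ["TANF", "Medicaid"]}],
    "VA": [{"ssn": "111-22-3333", "benefits": ["Food Stamps"]}],
    "NY": [{"ssn": "444-55-6666", "benefits": ["Medicaid"]}],
}

# Index every SSN to the set of states reporting it.
states_by_ssn = defaultdict(set)
for state, records in state_files.items():
    for record in records:
        states_by_ssn[record["ssn"]].add(state)

# SSNs reported by more than one state become "match hits" that are returned
# to the states involved for follow-up and verification.
match_hits = {ssn: states for ssn, states in states_by_ssn.items() if len(states) > 1}
print(match_hits)  # {'111-22-3333': {'MD', 'VA'}}

Each state then routes its share of the match hits to local staff, who verify residency and current benefit status before any action is taken to cut off benefits.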
Under the PARIS data-sharing agreement, each participating state agrees that it “…will ensure that confidential recipient information received pursuant to this Agreement shall, as required by law, remain confidential and be used only for the purpose of the above described match and for verifying eligibility and detecting and preventing fraud, error and abuse in respective Programs.” Our review focuses on three benefit programs covered by PARIS interstate matches: TANF, Medicaid, and Food Stamps. Benefits for TANF and Food Stamps are provided directly to recipients; however, Medicaid payments are made directly to those who provide health care services, such as managed care organizations (MCOs) and other health care providers. All three programs are generally administered at the state and local level, but are funded with federal money or a combination of federal and state money. Depending on the state, the same staff in local offices may determine eligibility and benefit levels for all three programs. However, some states administer the TANF and Food Stamp programs separately from the Medicaid program. Table 1 provides a brief description of the three programs. A fourth program—the SSI Program administered at the federal level by SSA—is indirectly related to the PARIS interstate matches. In many states, SSI recipients are automatically qualified to receive Medicaid and are therefore included in PARIS matches. PARIS coordinators in most of the 16 participating states and the District of Columbia told us they believe the interstate match is effective in identifying improper TANF, Medicaid, and Food Stamp benefit payments in more than one state. By eliminating these duplicate recipients from the rolls, states can prevent future improper payments and save program dollars. However, few states tracked the actual savings realized from the PARIS match. Four states and the District of Columbia reported a total of about $16 million in estimated savings from various PARIS matches conducted in 1997, 1999, and 2000. A substantial proportion of the estimated savings was attributed to the Medicaid program. While officials in only three states have compared the costs to the benefits that result, their studies indicate that the matching is cost-beneficial. We prepared our own analysis, which also suggests that PARIS may help states save program funds by identifying and preventing future improper payments. According to most of the PARIS coordinators we spoke with, PARIS helps to identify improper benefit payments in both bordering and nonbordering states. The February 2001 match identified almost 33,000 instances in which improper payments were potentially made on behalf of individuals who appeared to reside in more than one state. Of these, 46 percent of the potential hits involved Medicaid benefits alone, while the remaining 54 percent involved some combination of TANF, Medicaid, and Food Stamps. However, most of the states did not maintain detailed records on the number of potential match hits that were, in fact, found to be instances of improper payments. Nor were they able to tell us what proportion of improper TANF and Food Stamp payments was due to client error, administrative error, or fraud and abuse. Some PARIS coordinators believe that fraud and abuse may be more common in areas where two states share an urban border. For example, one coordinator told us that individuals living in the District of Columbia metropolitan area could travel in minutes to Maryland and Virginia and apply for benefits in each place. Figure 1 depicts participating states and the District of Columbia and their shared borders.
Independent of PARIS, some states conduct interstate matches with bordering states to prevent improper payments caused by either error or potential fraud and abuse. Many of these states now participate in the PARIS match. PARIS coordinators told us that the PARIS approach offers significant advantages over single state-to-state matches. For example, PARIS makes it possible for a state to match with numerous other states by simply submitting a file to a central agency. In addition, a uniform data-sharing agreement covers the exchange, and the DMDC adjusts for incompatibilities between different computer systems. This unified approach can be more efficient than individual state-to-state matches and can help to reduce the expense of matching. In addition to simplifying matches with bordering states, PARIS also facilitates data sharing with nonbordering states. This is important because even when two states do not share a border, improper payments can still be made, whether due to error or deliberate deception, such as fraud and abuse. For example, PARIS officials in New York discovered a woman receiving TANF benefits in New York for five children who were actually living with relatives and receiving benefits in Illinois. Table 2 shows the results from the February 2001 PARIS match for selected states, including nonbordering states. About 80 percent of the match hits listed in table 2 are between states that do not border one another. For example, North Carolina has more match hits with Florida and New York than it does with neighboring Virginia. In addition, in New York and Pennsylvania, match hits with nonbordering states represented 73 percent and 50 percent of their total match hits, respectively. Although both of these states have matched recipient data with bordering states for years, the PARIS match identified numerous instances of potential duplicate benefits in nonbordering states that might not otherwise have been detected. While most states do not track the savings they have achieved or the costs they incurred because of the PARIS match, a small number of states were able to document the results of participating in the project. Four states and the District of Columbia provided us with their estimated savings from participating in PARIS. Three of the states also performed cost-benefit analyses, which showed PARIS to be cost-beneficial. Pennsylvania estimated that two quarterly matches in 2000 produced more than $2.8 million in annual savings in the TANF, Medicaid, and Food Stamp programs and achieved a savings-to-cost ratio of almost 12 to 1. About $2.5 million (87 percent) of the total estimated savings was attributed to the Medicaid program. Maryland estimated that its first PARIS match in 1997 produced savings of $7.3 million in the Medicaid program alone. The match identified numerous individuals who were originally enrolled in Medicaid due to their SSI eligibility, but at the time of the match no longer lived in the state. Subsequent matches conducted between November 1999 and August 2000 have produced savings of about $144,000 in the TANF, Medicaid, and Food Stamp programs, with a savings-to-cost ratio of about 6 to 1. Kansas estimated that two PARIS matches in 1999 and 2000 resulted in savings of about $51,000 in the TANF, Medicaid, and Food Stamp programs, with a savings-to-cost ratio of about 27 to 1.
New York reported that improper payments identified in four matches conducted in 1999 and 2000 produced estimated savings of $5.6 million; however, the state did not collect data on the costs associated with investigating these matches. The District of Columbia estimated that one PARIS match conducted in 1997 resulted in savings of about $311,000 in the TANF and Food Stamp programs; however, officials did not collect savings data for the Medicaid program, nor did they collect cost data. Our discussions with numerous state and federal officials have led us to conclude that the substantial variation in the estimated program savings and savings-to-cost ratios across these states is attributable to a number of factors. These factors, which could also apply to any participating state, include differences in the extent to which state and local officials follow up on (or fail to pursue) match hits and take action to cut off benefits where appropriate; the methods and assumptions states use to estimate their savings; the proportion of match hits that are valid in that they are found to reflect actual improper benefits being paid in more than one state (a higher proportion of valid match hits will generally yield more program savings than a lower rate and is more likely to be cost-beneficial); the estimated number of months of avoided benefit payments; the size of the recipient population and the monthly benefits provided in each state under the TANF, Medicaid, and Food Stamp programs; how long it takes local office staff or fraud investigators to follow up on match hits; the salary costs of state and local staff involved with PARIS; and the cost to create an automated list of recipients at the state level to be sent to DMDC. Because so few states had analyzed their savings and costs from participating in PARIS, we performed an independent analysis to assess how certain factors might influence the extent to which participating in PARIS could achieve program savings. We studied how certain key variables, such as the number of programs included, the proportion of match hits that are valid, and the estimated number of months of avoided benefit payments, could affect the overall savings a state might achieve by participating in PARIS. We used national data where available (such as average benefits paid to recipients for each program). When national data were not available, we used the experiences of five states for our analysis. We used professional judgment to determine the values for several key assumptions in our analysis. Specifically, using a hypothetical example in which 100 match hits are sent to local benefit offices for staff to investigate, we assumed that each match hit requires 2 hours to determine whether benefits are improperly being paid in more than one state and costs $68.97 on average, resulting in a total of $6,897 in salaries and related expenses to follow up on all 100 match hits; the average state cost is about $440 to generate the automated list; 20 percent of the match hits investigated are found to be valid; and program savings come entirely from the future benefit payments that are avoided. (See app. I for a more detailed description of the data and assumptions used in our analysis.) Our analysis suggests that PARIS, as it currently operates, could help save both federal and state program funds.
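The arithmetic behind this analysis can be sketched briefly. The sketch below uses the assumptions stated above (100 match hits, $68.97 per hit to investigate, about $440 to generate the automated list, and a 20-percent valid hit rate); the combined monthly savings per valid case is an illustrative figure chosen only to show the mechanics, not a value drawn from appendix I.

    # Minimal sketch of the PARIS cost-savings arithmetic under the stated
    # assumptions; the per-case monthly savings figure is illustrative only.
    COST_PER_HIT = 68.97      # salary and related expenses to investigate one hit (2 hours)
    LIST_COST = 440.00        # state cost to generate the automated recipient list
    MONTHLY_SAVINGS_PER_CASE = 700.00  # hypothetical combined TANF/Medicaid/Food Stamp benefit avoided

    def match_outcome(hits=100, valid_rate=0.20, months_avoided=3,
                      monthly_savings=MONTHLY_SAVINGS_PER_CASE):
        """Return (gross savings, total cost, net savings, savings-to-cost ratio)."""
        cost = hits * COST_PER_HIT + LIST_COST
        valid_hits = hits * valid_rate
        gross = valid_hits * monthly_savings * months_avoided
        return gross, cost, gross - cost, gross / cost

    if __name__ == "__main__":
        gross, cost, net, ratio = match_outcome()
        print(f"gross ${gross:,.0f}, cost ${cost:,.0f}, net ${net:,.0f}, ratio {ratio:.1f} to 1")

With these inputs the sketch reproduces the rough magnitudes discussed below: about $42,000 in gross savings, total costs of roughly $7,300, and net savings in the mid-$30,000s.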
In particular, our analysis indicates that if states include the TANF, Medicaid, and Food Stamp programs in their matching activities, the net savings could outweigh the costs of participation. Using our hypothetical example in which 100 match hits are sent to local benefit office staff for follow-up, we illustrate in table 3 how the savings to a state from participating in one PARIS match could vary depending on (1) the number of programs included in the match and (2) differences in the valid hit rate. The table assumes that the savings for each program accrue for 3 months. If 20 percent of the match hits are valid (they accurately identify 20 out of 100 instances in which improper benefits are being paid in more than one state) and the individuals identified are enrolled in all three programs, the match would produce gross savings of almost $42,000, yielding a savings-to-cost ratio of about 5 to 1. Ultimately, the match would result in net savings of more than $34,000, as shown in table 3, taking into account total match costs of about $7,000. Conversely, costs exceed savings under only one scenario in this example: a valid hit rate of 10 percent (a rate substantially below what participating states have reported) in a match that includes only the Food Stamp program would result in a net cost to the state of about $3,300. The number of months that future benefit payments are avoided can also influence the amount of savings that result from a PARIS match. Table 4 illustrates the variation in program savings that could result depending on the number of months of future benefits that are avoided and the number of programs matched, given a 20-percent valid hit rate. As the table shows, there are three scenarios under which a state in our analysis would experience a net loss from participating in PARIS. One month's worth of TANF, Medicaid, or Food Stamp benefits avoided would yield a net cost to the state of between approximately $2,000 and $5,000. However, the match would produce savings in all other possible scenarios. For example, it would yield over $83,000 in gross savings if 6 months of benefits are avoided and the match were performed for all three programs (a savings-to-cost ratio of about 11 to 1). The net savings would be about $76,000. Our analysis assumes that only a small number of match hits are sent for follow-up (100), which results in a small number of valid hits (20). A larger number of valid hits would likely result in greater savings as well. For example, while some states, such as Kansas, with smaller recipient populations have reported relatively small numbers of valid hits and lower levels of savings, other states, such as Pennsylvania and New York, with larger recipient populations have had much higher numbers of valid hits and much greater levels of savings. Although the information provided by states and our analysis indicate that participating in PARIS interstate matches can save federal and state funds, savings are not the only benefit of participating in PARIS. Interstate matches are an important internal control to help states meet their responsibility for ensuring that public assistance payments are only made to or on behalf of people who are eligible for them. In addition, PARIS officials in eight states told us they believe the PARIS interstate matches can help deter people from applying for duplicate public assistance payments.
The PARIS project's interstate matching has helped identify cases of duplicate benefits that otherwise would likely have gone undetected; however, PARIS has been limited by several factors. First, only one-third of the states are participating in the matches, and a large portion of the public assistance population is not covered by the matching. Second, the project has some problems with coordination and communication among project participants. Third, some states are giving inadequate attention to the project. As a result, match hits are not being resolved, and in particular, duplicate payments made for Medicaid beneficiaries receive low priority. Finally, the project cannot help prevent duplicate benefits from occurring in the first place, but can only identify and help stop them after they have started. Only one-third of the states are participating in the PARIS interstate matches. At the time of our review, 16 states and the District of Columbia were participating. As a result, the public assistance records of the other 34 states were not being shared with participating states. These nonparticipating states contain 64 percent of the population that is likely to be eligible for public assistance. We spoke to officials in seven nonparticipating states to learn their reasons for not participating. They noted the state's preoccupation with more urgent matters, such as implementing new programs or systems, and the fact that information about the project had not reached someone with the interest and authority to get the state involved. They also cited some concerns about the project. These include lack of data showing that participating would produce savings for their state; nonparticipation of bordering states, which are perceived as the most likely sources of valid match hits; lack of written guidance on coordinating the resolution of match hits with other states; and inadequate federal sponsorship of PARIS and the resulting lack of assurance that the project will continue. Efforts by federal agencies to increase participation in the project have been minimal. ACF, the lead agency on the project, has not officially recognized PARIS and devotes very few resources to it. ACF management has not taken actions, such as sending letters to state TANF directors to inform them about the project and encourage them to participate. Also, ACF management has not asked other federal agencies to work with ACF on the project and help get more states involved. The Centers for Medicare & Medicaid Services (CMS), the federal agency that stands to reap the greatest savings from the project, has made no effort to encourage state Medicaid agencies to participate. In 1999, the Food and Nutrition Service (FNS) sent a letter to state Food Stamp agencies encouraging them to participate in PARIS interstate matches; otherwise, FNS has had little involvement in the project. This lack of official support for the PARIS project may contribute to the low participation rate. For example, the TANF officials we spoke with in one nonparticipating state, who were relatively new to their positions, said they had never heard of PARIS. In another nonparticipating state, a Medicaid official told us the state would be much more likely to participate in PARIS if CMS encouraged it to do so. The PARIS project has had various problems with coordination and communication that limit the project's effectiveness. The problems include the following. Difficulties contacting other states.
Benefit workers in four of the five participating states that we visited said they have had difficulties contacting benefit workers in some other states to obtain information to resolve match hits or to get the evidence needed to take action against clients. Problems making contact occur because the telephone numbers that states provide for obtaining information on individual cases are sometimes inaccurate, are never answered, or are central numbers that are only the starting point for finding the right person. Submission of incomplete and incompatible data. We noted that some of the states submit data for matching that are likely to increase the number of invalid match hits and the amount of work other states will have to do to determine if match hits are valid. For example, we found that some states submit closed cases among the active cases, cases with improper SSNs, or cases that omit the dates clients started receiving benefits. Uncertainties concerning responsibilities for collecting overpayments from individuals. PARIS officials from three states said it is not clear which state should assess and collect an overpayment when it is found that a client has been receiving TANF or Food Stamp benefits from two or more states. For example, it is not clear whether the overpayment should be assessed by the state where the client does not reside, because the client was not eligible to receive benefits there as a nonresident, or by the state where the client does reside, because that state is much more likely to be able to collect the overpayment. Also, it is not clear how to determine which state should assess an overpayment when the client claims two residences very near each other but in different states, and it is not known where the client actually lives. Although some coordination and communication problems are likely to occur in any project that involves multiple states and different federal agencies, the project's lack of formal guidance and processes makes such problems more likely. Currently, the formal guidance for the program covers only the file format the states need to provide for the match. It does not address matters such as the type of case information other states' benefit workers should be able to get when they call the telephone number provided for a case. Also, the guidance does not include written definitions of some key terms, such as "active case," or explanations of how states are to use the various data fields to investigate match hits. Further, the project has no guidance or protocols for coordinating the assessment and collection of overpayments. However, ACF, CMS, and FNS have not provided the management or administrative support—such as a formal focal point at the federal level—that would be needed to coordinate the project more effectively and help develop such guidance and protocols. In some states, management has given little or no attention to the PARIS interstate matches and has allowed match hits to go unresolved. This problem is more pronounced with Medicaid match hits because, in some states, they are given a lower priority than match hits involving TANF or Food Stamps. We found evidence that in at least three states that have participated in the PARIS project since August 1999, match hits for the entire state or for some densely populated areas were not being resolved. The PARIS coordinator in one state told us that match hits in his state have never been sent out to workers to be resolved.
In a second state, a large metropolitan area had not received any match hits from its district office until shortly before our visit in February 2001. The PARIS coordinator in a third state told us that a large county sometimes ignored the PARIS match hits sent to it for resolution. The problem of not resolving match hits appears to be most pronounced in the Medicaid program. Information we received from DMDC indicates that some states may not be focusing sufficient attention on their Medicaid match hits. Because DMDC does not retain state data used for the PARIS matches, we were not able to determine how many match hits involving Medicaid are not resolved and thus recur each quarter. However, data provided by DMDC for the February 2001 PARIS matches show that some states have a relatively large percentage of match hits involving Medicaid. For example, if 40 percent of the records a state submitted for matching were for clients receiving benefits in a particular program, then one might reasonably expect to find that about 40 percent of the match hits involved that program. Thus, finding a disproportionately higher rate of match hits involving that program could suggest a possible problem. Such is the case with six states that have participated in PARIS since February 2000 or before. For each of the six states, the February 2001 PARIS match resulted in a proportionately higher percentage of match hits involving Medicaid than would generally be expected. For example, in one state, 60 percent of the records submitted for matching were cases involving only Medicaid benefits (not TANF or Food Stamps), but 78 percent of the resulting match hits were for such cases. In another state, 31 percent of the records submitted were for cases involving Medicaid received due to eligibility for SSI, but 69 percent of the resulting match hits were for such cases. Match hits involving duplicate Medicaid benefits frequently occur, not because of fraud or abuse, but because Medicaid beneficiaries often do not notify the state when they move out of state. Therefore, a state will keep beneficiaries on the rolls until it discovers that they have moved. The state may make this discovery during a routine reverification of eligibility, which is generally performed once a year or less often. However, officials from several states have told us that their states never reverify the eligibility of a certain type of Medicaid beneficiary, such as one who is eligible based on his or her receipt of SSI. Therefore, the PARIS matches often involve this type of beneficiary. Although a state receives notifications from SSA when SSI clients move out of the state, states often do not remove Medicaid beneficiaries from their rolls based on these notifications, according to an SSA official. The PARIS coordinators for two states told us this problem came to light after they examined their first PARIS interstate match results and found a startling number of match hits involving SSI recipients who were on the state’s Medicaid rolls. One state compared the Medicaid match hits from its first PARIS run with SSA files and found 5,000 SSI recipients on the state’s Medicaid rolls who, according to SSA records, were not residing in the state. This prompted the state to do a similar match with SSA records using all the state’s Medicaid beneficiaries. The state then followed up with letters to Medicaid enrollees who the matches indicated no longer lived in the state. 
As a result of the PARIS and subsequent SSA matches, the state identified 17,000 people on its Medicaid rolls who were no longer eligible for Medicaid in the state. We heard a similar story from another state. Both states, we were told, had been making monthly payments to MCOs for the Medicaid beneficiaries, who would have stayed on the states' rolls indefinitely if the state had not participated in the PARIS matches. Yet even after receiving large numbers of Medicaid match hits, some states appear not to be resolving them or addressing the problems with their Medicaid rolls. Some PARIS coordinators told us that the departments administering Medicaid are focusing their efforts on getting people on the Medicaid rolls rather than removing people who are no longer eligible. PARIS officials in two states said that they believe the local benefit workers or the offices responsible for Medicaid have not adjusted their thinking to recognize the shift from a fee-for-service to a managed care environment. In the past, when Medicaid services were provided on a fee-for-service basis, costs were incurred only if beneficiaries sought medical treatment and providers submitted bills for the treatment. Therefore, if a beneficiary moved out of state but remained on the state's Medicaid rolls, medical expenses were not incurred for the beneficiary if he or she did not seek treatment in the state. However, when the state makes a fixed monthly payment to an MCO for each Medicaid beneficiary, as is done under some managed care arrangements, the state makes payments to the MCO regardless of whether the beneficiary ever seeks medical treatment. The PARIS project was designed to identify duplicate benefits after they have been provided, not to prevent the duplicate benefits from occurring in the first place. Therefore, the PARIS matches are part of what has been described as a "pay and chase" process, in which states pay benefits to clients and then try to recover overpayments when they discover the clients were not eligible for the benefits. Preventing an improper payment in the first place is preferable to "pay and chase" because overpayments are often difficult to collect from low-income clients who no longer live in the state. Also, when states make payments to MCOs for beneficiaries who should no longer be on their Medicaid rolls, these funds are wasted unless they can be recouped. According to a Medicaid official, it may be difficult for states to recoup overpayments to MCOs caused by errors in states' Medicaid rolls. Officials from most states we spoke with said they would like a data-sharing process that could be used before benefits are provided—that is, a process that would allow state caseworkers to check other states' data to see if an applicant was already receiving benefits elsewhere before the state approved an application for benefits. Such a process would have to provide prompt responses (probably within 24 hours) to inquiries—something very different from the quarterly PARIS matches. One option for this process is a national database of clients receiving public assistance in any state. Such a database would be maintained by the federal government and would consist of records submitted and regularly updated by the states. Implementing such an option would require federal leadership and funding to address programming and operating expenses and potentially the standardization of data and information systems among participating states.
Also, while implementing this option could help prevent duplicate payments, it must be balanced against the additional privacy concerns that might arise. The PARIS project offers states a potentially powerful tool for improving the financial integrity of their TANF, Medicaid, and Food Stamp programs. However, the project has fallen short of realizing its full potential, as is most clearly evidenced by relatively low state participation. While PARIS' success ultimately rests in the hands of the states, key federal players have not done enough to provide a formal structure to the project that encourages and facilitates state participation. More specifically, ACF, CMS, and FNS have not taken the lead in establishing a focal point in the federal government for coordinating the project. This is crucial given the complicated relationships among the three programs and among the federal, state, and local government entities responsible for implementing them. Additionally, the three federal agencies have not worked together to develop guidance and protocols that are key for helping states share information and best practices. Finally, these agencies have not formally recognized, nor devoted sufficient resources to, the project, despite its potential to identify improper payments and save program funds. Importantly, this lack of formal federal recognition might signal to some states that the project should not be taken seriously. To help states improve the effectiveness of PARIS and prevent duplicate benefit payments to TANF and Medicaid recipients, we recommend that the Secretary of Health and Human Services (HHS) direct the Administrators of ACF and CMS to formally support PARIS and provide guidance to participating states. Such support and guidance should include the following actions: Create a focal point charged with helping states more effectively coordinate and communicate with one another. An existing entity, such as the Interagency Working Group, could provide the mechanism for such a focal point. This entity could also serve as a clearinghouse for sharing best practices information that all states could use to improve their procedures, such as comparisons of match filtering systems. Take the lead to help the PARIS states develop a more formal set of protocols or guidelines for coordinating their match follow-up activities and communicating with one another. Develop a plan to reach out to nonparticipating states and encourage them to become involved in PARIS. At a minimum, all states should be encouraged to provide their TANF and Medicaid recipient data for other states to match, even if they choose not to fully participate in PARIS. This would help to ensure that all recipients nationally are included in PARIS matches. Coordinate with the U.S. Department of Agriculture's FNS, which administers the Food Stamp program, to encourage FNS to participate in PARIS at the federal level, work more closely with individual states to improve the effectiveness of PARIS, and help more states to participate. Officials from the Department of Health and Human Services and the Food and Nutrition Service provided comments on our report, the full texts of which appear in appendixes II and III, respectively. The agencies also included some technical comments, which we have incorporated where appropriate. In general, HHS agreed with the overall intent of our recommendations, but consistently stressed the need for additional funding and staff resources to increase its PARIS activities.
With regard to our first recommendation, HHS commented that it had created a PARIS work group composed of representatives from ACF and DMDC and has encouraged other agencies, such as CMS and FNS, to participate more actively in PARIS. HHS also stated that additional funding and staff resources from all involved agencies could help the work group to improve its services. We believe that while the PARIS work group provides useful guidance to participating states, to date it has been unable to resolve the problems and limitations we identified during our review. As we note in the report, this is due in part to ACF, CMS, and FNS not providing the management or administrative support necessary to correct these problems. Our recommendation is intended to encourage greater leadership by ACF, CMS, and FNS and a more coordinated, proactive approach among the agencies to working together and with the states to address the limitations in PARIS. With regard to our second recommendation, HHS cautioned that it is not appropriate for a federal agency to dictate or appear to dictate the protocols states use in their interactions with other states. HHS also argued that states are best able to determine the necessary procedures for PARIS. However, HHS acknowledged that with additional resources, ACF could help states develop such procedures and disseminate them to other states as necessary. We continue to believe that active federal leadership is needed to solve the communication and coordination problems discussed in the report. Consequently, we believe that ACF should act as a facilitator at the federal level to help states overcome some of the challenges they have reported communicating and coordinating with one another. Moreover, such facilitation can and should occur without impinging on the states' ability to administer the TANF, Medicaid, and Food Stamp programs in a manner that best fits their needs. With respect to our third recommendation, HHS generally agreed that ACF could do a better job of reaching out to additional states to persuade them to participate in PARIS. However, HHS did not agree with our statement that states could, at a minimum, provide their data for others to use, even if they do not directly participate in PARIS themselves. We believe that while full participation by all the states is clearly the preferred outcome, the inclusion of nonparticipating states' public assistance data for use by states participating in PARIS could help save additional benefit funds in the TANF, Medicaid, and Food Stamp programs. Finally, with regard to our fourth recommendation, HHS noted that ACF has consistently coordinated with FNS in all PARIS activities but agreed that a closer working relationship with FNS would add to the effectiveness of PARIS. We concur with HHS' assessment that ACF and FNS should work more closely together to improve existing PARIS operations and persuade additional states to participate. FNS noted that the report is a balanced and fair description of the PARIS project but expressed a concern that certain passages in the report suggest that FNS should have a more formal role in PARIS, despite the fact that PARIS is primarily an ACF project. FNS also identified several reasons why PARIS is not used more by the Food Stamp program, emphasizing that it is not required by statute to track interstate receipt of Food Stamp benefits and that many states are already engaging in such activity on their own.
Although we recognize that FNS is not the lead agency responsible for PARIS, we do believe that FNS could take a more proactive stance to help coordinate the program at the federal level and persuade additional states to participate in PARIS. Moreover, we believe that although FNS is not mandated by statute to participate in PARIS, the benefits of PARIS in terms of potential program savings and enhanced program integrity warrant a more active role for the agency. Our analysis suggests that federal leadership from each of the involved federal agencies is critical to the success of PARIS, particularly with regard to expanding the number of states that participate in the project. In addition, while some states engage in interstate matching as noted in the report, we believe a more structured, far-reaching approach like that offered by PARIS is more effective. FNS also commented that PARIS cannot prevent the initial duplicate payment of benefits and that the matching activity may not be cost-effective. We believe that although PARIS cannot prevent duplicate benefits from being provided when states initially determine individuals' eligibility for benefits, using PARIS is preferable to not matching at all. Finally, the report notes that matching for the Food Stamp program alone may not be cost-effective and emphasizes the advantage of matching for multiple programs simultaneously. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to the Chairman, Senate Committee on Governmental Affairs; Secretary of HHS; Administrator for CMS; Administrator for FNS; and to other interested parties. Copies will also be made available to those who request them. Please contact me or Kay Brown at (202) 512-7215 if you have any questions concerning this report or need additional information. Jeremy Cox, Kathleen Peyman, James Wright, and Jill Yost made key contributions to this report. In the table below, we describe the data and assumptions used to support our discussion on pages 12-15. Our analysis incorporated data from five states (Kansas, Maryland, New York, Pennsylvania, and Texas), two federal agencies (Centers for Medicare & Medicaid Services and Food and Nutrition Service), and selected research studies. The savings formula is S = (A x F + B x I) x X, where S = savings per benefit case avoided, A = proportion of match hits in which the entire case is closed, F = family (case) monthly benefit, B = proportion of match hits in which household members are removed from the case, I = monthly benefit for that number of individuals, and X = months of future benefit payments avoided. This calculation was performed for each of the three programs (TANF, Food Stamps, and Medicaid) separately to demonstrate how the cost-effectiveness of a "good" PARIS match hit could change depending on the number of programs that are included in a match. The cost formula is C = (N x W) + L, where C = costs the state incurs for each case sent to field office staff for follow-up, N = number of hours required to work an average case, W = average hourly wage of individuals following up on match hits, and L = average cost per case that the state incurs to create the automated list of recipients each time it participates in the PARIS matches. The savings-to-cost ratio is R = S / C, where R = ratio of savings to costs, S = savings, and C = costs.
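These formulas translate directly into code. The sketch below implements S, C, and R exactly as defined above; the wage and list-cost inputs are consistent with the roughly $68.97-per-hit and $440-per-match figures cited earlier, while the savings inputs (A, F, B, I) are hypothetical placeholders rather than values from appendix I.

    # Direct implementation of the appendix I formulas; the savings inputs in the
    # example are hypothetical placeholders.

    def savings_per_case(a, f, b, i, x):
        """S = (A x F + B x I) x X: savings per valid match hit for one program."""
        return (a * f + b * i) * x

    def cost_per_case(n, w, l):
        """C = (N x W) + L: follow-up cost per case sent to field office staff."""
        return n * w + l

    def savings_to_cost_ratio(s, c):
        """R = S / C."""
        return s / c

    if __name__ == "__main__":
        s = savings_per_case(a=0.6, f=350.0, b=0.4, i=150.0, x=3)   # hypothetical single-program inputs
        c = cost_per_case(n=2, w=34.49, l=4.40)                     # 2 hours at about $34.49/hour, plus $440/100 cases
        print(f"S = ${s:,.2f}, C = ${c:,.2f}, R = {savings_to_cost_ratio(s, c):.1f}")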
The savings that a state might experience from participating in PARIS could differ from those presented earlier in this report, depending on which assumptions are used. Tables 6 and 7 illustrate the possible savings a state could realize if we use the averages reported by each state instead of the more conservative assumptions cited in the report. The assumptions used in these tables, where they vary from the values used in the report, are as follows: Number of benefit months avoided per valid match hit: 7 months; Valid hit rate: 30 percent; Each match hit requires 60 minutes (1 hour) to resolve; Cost to follow up on 1 match hit: $31.77; and Cost to follow up on all 100 match hits: $31.77 x 100 hits = $3,177, plus $440 (the cost of creating the file of recipients each time the PARIS match is performed), for a total cost of $3,617. Using our hypothetical example in which 100 match hits are sent to local benefit office staff for follow-up, table 6 illustrates how the savings to a state from participating in one PARIS match could vary depending on the number of programs included in the match and differences in the valid hit rate. The table assumes that the savings for each program accrue for 7 months. If 30 percent of the match hits are valid (they accurately identify 30 instances of duplicate benefits being paid) and the individuals identified are enrolled in all three programs, the match would produce gross savings of more than $146,000, yielding a savings-to-cost ratio of about 40 to 1. After factoring in total costs of $3,617 to participate and follow up on the match hits, the net savings are more than $142,000. A valid hit rate of 10 percent (a rate substantially below what participating states have reported) in a match that includes only the Food Stamp program would still result in gross savings of about $9,500 (a savings-to-cost ratio of almost 3 to 1). The number of months that future benefit payments are avoided can also influence the amount of savings that result from a PARIS match. Table 7 illustrates the variation in program savings that could result depending on the number of months of future benefits that are avoided and the number of programs matched, given a 30-percent valid hit rate (30 of the 100 match hits sent for follow-up result in some savings). As the table shows, the state would experience net savings from participating in PARIS under each scenario, although the range of potential savings varies considerably. Three months' worth of Food Stamp benefits avoided would yield gross savings of more than $12,000 (a savings-to-cost ratio of about 3 to 1). The net savings would be about $8,600. However, the match could produce gross savings of about $313,000 if 15 months of benefits were avoided and the match was performed for all three programs (a savings-to-cost ratio of about 87 to 1). Net savings would be about $309,600. The following is GAO's comment on the Department of Health and Human Services' letter dated August 17, 2001. The HHS comment concerning "the stated need for a real-time, GAO on-line system" is inaccurate. Although we discuss a national database as one option for providing prompt responses to interstate inquiries about public assistance applicants' eligibility for benefits, the report does not state that we or any other agency should develop or operate such a system.
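Returning to the alternative assumptions behind tables 6 and 7, the cost and savings arithmetic can be checked with a few lines of code. The cost figures below are the ones stated above; the combined monthly benefit per valid case is a hypothetical value used only to show that the stated gross savings and savings-to-cost ratio are of the right magnitude.

    # Checking the arithmetic behind the alternative assumptions (tables 6 and 7).
    COST_PER_HIT = 31.77   # 1 hour of follow-up per match hit
    LIST_COST = 440.00     # cost of creating the recipient file for one match
    HITS = 100

    total_cost = COST_PER_HIT * HITS + LIST_COST
    print(f"total follow-up cost: ${total_cost:,.0f}")   # $3,617, as stated above

    # Gross savings at a 30-percent valid hit rate and 7 months of avoided benefits,
    # using a hypothetical combined monthly benefit of $700 across all three programs.
    valid_hits = HITS * 0.30
    gross = valid_hits * 7 * 700.00
    print(f"gross savings: ${gross:,.0f}, ratio {gross / total_cost:.0f} to 1")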
Public assistance programs make millions of dollars in improper payments every year. Some of these improper payments occur because state and local agencies that run the programs lack adequate, timely information to determine recipients' eligibility for assistance. This inability to share information can result in both federal and state tax dollars being needlessly spent on benefits for the same individuals and families in more than one state. In 1997, the Department of Health and Human Services (HHS) began a project to help states share eligibility information with one another. The public assistance reporting information system (PARIS) interstate match helps states share information on public assistance programs, such as Temporary Assistance for Needy Families (TANF) and Food Stamps, to identify individuals or families who may be receiving benefit payments in more than one state simultaneously. Officials in almost all of the 16 states and the District of Columbia that participated in PARIS said that the project has helped identify improper TANF, Medicaid, or Food Stamp payments. Despite its successes, the project has several limitations. First, the opportunity to detect improper duplicate payments is not as great as it could be because only one-third of the states participate. Second, participating states do not have adequate protocols or guidelines to facilitate critical interstate communication. As a result, some states have reported critical problems, such as difficulty determining whether an individual identified in a match is actually receiving benefits in another state. Third, state administrators for the TANF, Medicaid, and Food Stamp programs have not always placed adequate priority on using PARIS matches to identify recipients who are living in other states. As a result, individuals may continue to receive or have benefits paid on their behalf in more than one state even after they were identified through the matching process. Finally, because the PARIS match is only designed to identify people after they are already on the rolls, it does not enable the states to prevent improper payments from being made in the first place.
Retail payments are relatively high-volume, low-value payments. Retail payment methods include cash, checks, debit and credit cards, and automated clearinghouse (ACH) transactions. While depository institutions provide cash processing services to retailers and other depository institutions, the Federal Reserve provides cash processing services only to depository institutions and the U.S. government. The Federal Reserve and correspondent banks provide check collection and settlement services. The Federal Reserve and private electronic network operators provide clearance and settlement services for ACH transactions. Private electronic network operators also provide clearance and settlement services for debit card, credit card, and automated teller machine (ATM) transactions. For consumers and retailers, cash transactions are settled instantaneously. However, checks require a more complex settlement process and more time to settle. Depository institutions have several alternative methods for clearing and settling checks. Figure 1 illustrates an example of how a check is settled through direct presentment—when depositary banks present checks directly to the paying bank. In practice, local checks generally settle in 1 business day and nonlocal checks generally settle in 1 to 2 business days. Currently, checks are settled on business days, which do not include weekends, resulting in a delay in the settlement of those checks deposited during the latter part of the week. Credit card and debit card payments also require a complex system to clear and settle transactions. A credit card transaction is initiated when a customer's card number is entered into a card reader, followed by the transaction amount. The data are transmitted to the card-issuing bank. The card-issuing bank accepts or denies the transaction. If the transaction is authorized, the customer signs to accept liability for the transaction. In the case of a debit card, the customer enters a personal identification number or signs the sales receipt to accept liability for the transaction. At the end of that day, the retailer submits the customer's transactions along with all of the other credit card transactions to its depository institution (retailer's bank), which credits the retailer usually in 1 to 2 business days. The credit card company is then responsible for creating the net settlement positions that result in the transfer of funds from the card-issuing bank to the retailer's bank. These transfers typically occur via Fedwire funds transfers through a correspondent bank. The card-issuing bank would then bill the customer. When the customer pays the bill, the cycle is complete, as shown in figure 2. Debit card transactions are authorized and cleared in a similar fashion to credit cards, except that they settle by debiting customers' accounts and crediting retailers' accounts on the next business day. Depository institutions use a variety of channels to settle credit and debit card transactions, including accounts at a common correspondent, ACH networks, and Fedwire. Final settlement of ACH transfers processed by the Federal Reserve occurs through debits and credits to the accounts of depository institutions on the books of the Federal Reserve. ACH transfers that are processed only by private-sector ACH operators are net settled through the Federal Reserve. Float is created because of the time it takes to clear and settle payments, which affects retailers, consumers, and depository institutions.
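The weekend delay described above comes down to business-day arithmetic: a payment deposited late in the week cannot settle until the following Monday or Tuesday. The short sketch below computes a settlement date by skipping Saturdays and Sundays; the 1- and 2-business-day lags are illustrative defaults drawn from the general timing described above, not the rules of any particular network.

    # Business-day settlement arithmetic; the lags used here are illustrative defaults.
    from datetime import date, timedelta

    def settlement_date(deposit_day: date, business_days: int) -> date:
        """Advance the given number of business days, skipping Saturdays and Sundays."""
        d = deposit_day
        remaining = business_days
        while remaining > 0:
            d += timedelta(days=1)
            if d.weekday() < 5:  # Monday=0 ... Friday=4
                remaining -= 1
        return d

    if __name__ == "__main__":
        friday = date(2001, 8, 17)
        # A local check deposited Friday with a 1-business-day lag settles Monday.
        print(settlement_date(friday, 1))  # 2001-08-20
        # A nonlocal check with a 2-business-day lag settles Tuesday.
        print(settlement_date(friday, 2))  # 2001-08-21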
Float is generally defined as the lag between the receipt of a check or other payment and the settlement of that payment. This lag differs according to the method of payment. There is no float for cash. Checks are subject to the longest float, primarily due to the need to physically transport checks. Federal law and regulations prohibit depository institutions from paying interest on demand deposits consisting primarily of commercial checking accounts. Many depository institutions, however, offer "sweep" accounts to business customers as a mechanism by which these customers obtain interest earnings on account balances above negotiated account minimums. During the business week, the depository institution transfers, or "sweeps," commercial checking account balances above an agreed minimum into other accounts on which interest might be paid, such as money market deposit accounts (MMDAs), or into interest-earning nondeposit financial instruments. Depository institutions typically invest these funds in short-term, low-risk assets, such as U.S. Treasury bills and notes, or money market mutual funds, among others. Depository institutions do not pay interest earnings on funds deposited on the weekend because they are unable to invest these funds until the next business day. Overall, U.S. settlement schedules are similar to settlement schedules in most G-10 countries. In some Asian countries, settlement services are available for a limited number of hours on Saturdays. Specifically, in Singapore and Hong Kong settlement occurs on Saturday mornings, in addition to Monday through Friday services. Recently, South Korea ended its Saturday settlement hours because many commercial banks are closed on Saturdays. Appendix II further illustrates international payment systems' operating hours. Weekend settlement of financial transactions would provide small benefits for retailers and consumers, and little, if any, benefit for the economy as a whole. Retail industry representatives identified weekend interest earnings as the main potential benefit for retailers. However, our analysis of grocery industry data indicated that the grocery industry currently forgoes only a small amount of potential interest income on its weekend sales relative to the industry's annual sales or the national economy. If weekend settlement were adopted, retailers also could realize some secondary benefits, such as reducing the amount of cash held in stores. However, other secondary benefits, such as accelerated settlement, represent benefit transfers primarily from consumers or banks to retailers and would provide no net stimulus to the economy. Although retail industry representatives identified weekend interest earnings as the main benefit for retailers, our analysis suggested that investing retailers' balances in sweep accounts and other investment vehicles over the weekend would provide minimal additional earnings to retailers and have virtually no impact relative to the economy as a whole. According to grocery industry publications, the industry had sales of $494 billion in 2000, as seen in figure 3. Because depository institutions are unable to invest retailers' weekend cash deposits until Monday, we estimated, based on grocery industry data, that the grocery industry forgoes approximately $2.6 million each year in after-tax interest earnings, assuming a 2 percent interest rate, and $6.6 million a year, assuming an average 5 percent interest rate.
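A back-of-the-envelope version of this forgone-interest estimate is sketched below. The weekend share of sales, the average number of days of weekend float, and the effective tax rate are hypothetical placeholders; the report's own estimates rely on the payment-mix and survey data described in appendix III, so this sketch only illustrates the form of the calculation and its outputs will not match the per-payment-method figures cited here.

    # Illustrative forgone weekend interest calculation; the weekend share of sales,
    # days of float, and tax rate are hypothetical, not the report's appendix III inputs.

    def forgone_after_tax_interest(annual_sales, weekend_share, float_days,
                                   annual_rate, tax_rate):
        """Estimate annual after-tax interest forgone on weekend receipts.

        weekend_share: fraction of annual sales received on weekends
        float_days: average days those receipts sit uninvested before Monday
        """
        weekend_receipts = annual_sales * weekend_share
        gross_interest = weekend_receipts * annual_rate * (float_days / 365.0)
        return gross_interest * (1.0 - tax_rate)

    if __name__ == "__main__":
        grocery_sales = 494e9  # reported 2000 grocery industry sales
        for rate in (0.02, 0.05):
            est = forgone_after_tax_interest(grocery_sales, weekend_share=0.30,
                                             float_days=1.5, annual_rate=rate,
                                             tax_rate=0.35)
            print(f"{rate:.0%}: ${est / 1e6:.1f} million")

The same function could be applied to any sales base; only the assumed weekend share, float period, and tax rate change.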
The corresponding forgone after-tax interest earnings for check transactions are $2.8 million and $7 million annually, at 2 percent and 5 percent interest rates, respectively. Finally, for credit and debit cards, the forgone interest earnings are $2.5 million and $6.4 million, at 2 percent and 5 percent interest rates, respectively. Table 1 illustrates the estimates. Appendix III provides details of our estimates. Applying the same assumptions to the entire retail sector, which had sales of $3.4 trillion in 2000, we estimated that the annual forgone after-tax interest earnings for that sector would be $53.9 million at a 2 percent interest rate and $134 million at a 5 percent interest rate. However, these benefits might be reduced if depository institutions raised their fees to cover the increased costs associated with weekend operations, passing those costs on to retailers. Weekend settlement could provide a number of secondary benefits to retailers and consumers. For example, retailers noted that weekend settlement would provide secondary benefits, such as reduced amounts of cash in stores, thereby reducing potential losses due to theft and lowering insurance costs. The accelerated settlement of transactions would also benefit retailers by lowering accounts receivable balances, as noncash payments owed to retailers settle faster. Grocery officials stated that cash represents a large risk for store employees, and therefore, on a daily basis, grocers tend not to maintain large amounts of cash in stores. Excess sums generally are sent to depository institutions, usually via armored car services. Under the current system, depository institutions are not open to accept retailers' deposits on Saturday and Sunday evenings; therefore, deposits are generally maintained in store vaults or in bonded safes at the armored car services' facilities, at a cost to retailers. Banking industry representatives stated that retailers currently must pay to insure large amounts of cash over the weekend. Finally, accelerated settlement of transactions could also benefit consumers who receive check payments by making funds available to them sooner, assuming that the Expedited Funds Availability Act (EFAA) and the Federal Reserve Board's implementing regulations were amended to include weekends in the definition of "business day." The adoption of weekend settlement would transfer float income among retailers, consumers, and depository institutions rather than create new earnings. For example, retailers would earn interest income previously earned by check and debit card users, but no new interest income or wealth would be created. For credit cards, retailers potentially would obtain faster funds availability, but because card-issuing depository institutions offer deferred payment to customers, retailers might earn additional interest income at the expense of depository institutions. For checks and debit cards, if retailers and consumers both had access to interest-bearing accounts or services provided by depository institutions, weekend settlement would move money more quickly out of customers' accounts and into retail sector accounts. On the other hand, retail industry representatives also stated that a disadvantage of weekend settlement would be that checks they had written would clear faster, thereby reducing interest currently earned on those funds. Faster funds availability from weekend settlement also could present drawbacks for consumers.
Consumer advocates stated that consumers might face increased overdrafts if they did not adjust to the accelerated debiting of their checks. Weekend settlement would negatively affect those people who depend on check float to avoid account overdrafts. For example, consumers might write a check on a Friday afternoon for an amount greater than their account balance, knowing that on the following Monday their paycheck would be credited to the account and cover the amount of the check. Further, concerning the economy as a whole, the interest payments that retailers would receive from weekend settlement reflect transfers within the economy rather than the creation of income. For example, additional income that retailers might earn from weekend settlement of checks written or debit card transactions received from customers would be offset by corresponding losses of interest by check writers, debit card users, and depository institutions. Consumers with interest-bearing checking accounts would lose the interest they would have earned on checks written on Friday evenings and Saturdays. If checks were drawn on noninterest-bearing accounts, then depository institutions would lose funds on which they were not paying interest. Corporate treasurers of retail businesses noted that although they potentially could accumulate interest earnings if weekend settlement were adopted and interest-earning accounts were available, depository institutions might pass along some or all of the costs of operating on weekends to retailers and consumers in the form of higher fees, thus lessening the gains to retailers. Depository institutions stated that to pay interest, they would need to have access to short-term investment markets on weekends. Our analysis showed that weekend settlement would be unlikely to provide any stimulus to economic growth. Its only impact would be to make funds available on weekends that would otherwise be available on Monday. Because payment system actors and processes are interdependent, weekend settlement would require payment service providers that clear and settle retail and wholesale payments to open on weekends, resulting in increased capital and operational costs. The greatest concern that payment service providers expressed to us was the cost of additional computer system and staffing resources needed to mitigate the increased risk of operational failures that weekend settlement would present. Although they could not provide exact cost figures for the additional resources they would need, payment service providers stated that costs would be significant and potentially prohibitive for small depository institutions. According to payment service providers, these operational costs would exceed any potential benefits that weekend settlement could create, and likely would reduce productivity in the payment system. Moreover, they stated that alternatives to weekend settlement with lower operational costs currently exist, and that efforts in other areas are under way to increase payment system efficiency. Because the payment system consists of interdependent processes and relationships among payment system actors, weekend settlement would require many payment service providers to open on weekends. 
For example, private and Federal Reserve cash and check processing centers and check transportation networks would have to be fully operational on weekends so that cash and check transactions could be cleared. However, banking industry representatives pointed out that not all depository institutions' branches would need to open. National and regional clearing organizations also would need to open so that transactions among depository institutions could be cleared, netted, and settled. According to depository institution officials we spoke with, both Federal Reserve and private ACH networks would have to open on weekends to facilitate settlement of check and debit card transactions. Similarly, private electronic network providers told us that Fedwire would need to open on weekends to facilitate final settlement of credit card transactions. In addition, depository institutions stated that once retailers' transactions are settled and their accounts are credited for weekend deposits, they would need government securities and money markets to open on weekends to invest these deposits and pay interest on excess sweep account balances. Federal Reserve and investment market officials told us that Fedwire and clearing organizations for investment markets also would need to open to clear and settle these transactions. Payment service providers we met with generally viewed weekend downtime for computer systems as critical to the smooth provision of clearance and settlement services during the business week. Payment service providers stated that weekends, when production activities are limited, are used to test, upgrade, and maintain most computer systems. These ongoing weekend activities reduce the potential for operational failures during the business week. Business continuity testing of computer systems remains a high priority for financial markets after the terrorist attacks of September 11, 2001. These tests generally take place on weekends and sometimes take more than 1 day to complete. Depository institution and clearing organization officials also said that weekend downtime is important for resolving problems that occur when implementing new software applications or upgrades to existing applications. Like other payment service providers, Federal Reserve officials told us that the Federal Reserve uses weekends to maintain and test its payment service applications and its internal accounting system that are used to settle payments. Most payment service providers told us that because tests, upgrades, and maintenance would have to continue if weekend settlement were adopted, they would need additional computer hardware and software to simultaneously perform weekend settlement and regular weekend activities, thereby increasing capital costs. One private electronic payment network that moved from a 5-day production schedule to a 7-day production schedule for transaction processing characterized its costs as substantial. The network had to purchase additional hardware to double computer system capacity so that it could maintain complete redundancy in production, contingency, and testing activities 7 days a week. Representatives for the network said that their case could be an example of the hardware resources that other payment service providers might need if weekend settlement were adopted.
Officials at a large depository institution and investment market representatives pointed out that payment service providers would have to modify each line of relevant software code to reflect Saturdays and Sundays as valid settlement dates. The depository institution officials said that their retail banking operations would require code changes for the institution's 80 software applications—with estimated costs in the millions of dollars. Clearing organizations also identified the need for additional software to carry out settlement on weekends and estimated that software costs would exceed potential hardware costs. Additional staff needed to carry out weekend settlement also would increase operational costs for payment service providers. Some payment service providers estimated that staffing costs for weekend settlement could increase current operating budgets by up to 40 percent. For instance, depository institutions would require additional staff in departments that currently are not open on weekends, such as staff to handle check presentments and return checks and staff to manage their general ledgers and Federal Reserve accounts. Banking industry representatives noted that small depository institutions that perform their own clearing and processing activities would need to hire additional staff to prepare cash and checks for weekend shipment to local Federal Reserve Banks and branches. This could be particularly costly for small depository institutions because "back-office" staffs often consist of one person. Similarly, Federal Reserve officials said that additional staff would be needed for check processing, ACH, Fedwire, internal accounting, credit and risk management, and information technology operations, as well as other support functions. Investment market representatives commented that the human capital costs of weekend operations would be high because firms likely would have to pay senior staff at premium rates to work on weekends. Investment market representatives also said that operating on weekends would decrease market efficiency; they generally expected that weekend markets would be inefficient and illiquid because weekend trading activity likely would be low. They pointed out that liquidity in weekend markets would be generated only if there were sufficient numbers of investment firms interested in obtaining retailers' excess sweep account balances. They noted that investment firms have many other options for short-term investment beyond government securities and money market instruments—typical investment vehicles used by depository institutions for sweep account funds. For these reasons, they noted that they sometimes recommend closing markets early before holidays, such as Good Friday, if trading volumes are low, to preserve market efficiency and create greater liquidity. In general, they did not expect weekend investment market liquidity to offset the potential operating costs. Similarly, depository institution officials and clearing organizations anticipated that weekend settlement would result in spreading the current 5-day transaction volume over 7 days of operations, thereby decreasing system efficiency. Two proposed variations of 7-day settlement are currently viewed as not practical and too costly. The first variation involves a 6-day settlement schedule, under which settlement would occur Monday through Saturday. 
Payment service providers told us that a 6-day settlement schedule would present lower operational risk and costs relative to a 7-day settlement schedule but currently would be complicated and too expensive. Some payment service providers noted that once technology more generally allows faster computer processing, 6-day settlement could be possible because it would provide 1 day during which computer system maintenance could be performed. The second variation relates to selective processing of transactions by payment method—for example, weekend processing of cash or check transactions. However, according to banking industry representatives, processing and settlement by payment method would be impractical because depository institutions' demand deposit accounting systems, which debit and credit customer accounts, do not differentiate transactions by payment method. Rather, representatives from depository institutions stated that large batches of transactions are queued for debiting from and crediting to customer accounts regardless of the payment method associated with the transaction. Federal Reserve officials also said that selective settlement would require the supporting settlement infrastructure, such as Fedwire funds transfer and securities transfer services, to open on weekends. Therefore, such an approach would not lower the operational costs of settling transactions on weekends. We identified current banking services that provide business customers with some of the advantages of weekend settlement but do not require payment service providers to incur the costs of weekend operations. For example, officials at a large depository institution said that one of its large business customers requested a service whereby the depository institution provides backdated interest on funds that the customer deposits on Mondays, as if the funds had been deposited and credited to the customer's account over the weekend. Officials from the depository institution noted that this service is no different from other services in that it is provided to the customer for a fee. Banking industry representatives told us that some depository institutions offer "fully analyzed" accounts, whereby they calculate the average daily account balances of commercial customers and determine daily interest earnings credit based on those figures. They noted that fully analyzed accounts allow account fees and charges to be offset by earnings credit based on the average daily collected account balance (a simple numerical illustration of this arithmetic appears below). Depository institution officials and banking industry representatives said that these services are offered within the context of existing relationships with commercial customers. They generally viewed these services as alternative methods of providing weekend interest earnings for business customers that do not require other payment service providers to be in operation on weekends. According to Federal Reserve and clearing organization officials, ongoing efforts to increase payment system efficiency include extending Fedwire's hours of operation during the business week to correspond with activity in Asian markets. Although extending Fedwire's hours of operation would not provide retailers with weekend interest earnings, it could increase efficiency in the payment system by allowing firms to consolidate risk management resources. 
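The account analysis arithmetic behind the fully analyzed accounts described above is straightforward: an earnings credit computed on the average daily collected balance is applied against account fees. The sketch below illustrates that arithmetic with hypothetical balances, fees, and an assumed annual earnings credit rate; it is not based on any particular institution's account analysis terms.

```python
# Hypothetical illustration of the account analysis arithmetic behind a
# "fully analyzed" account. The balances, fees, and earnings credit rate
# below are assumptions for illustration, not any institution's actual terms.

DAYS_IN_YEAR = 365

def earnings_credit(daily_collected_balances, annual_ecr):
    """Earnings credit for a statement period, computed from the average
    daily collected balance and an assumed annual earnings credit rate."""
    days = len(daily_collected_balances)
    avg_balance = sum(daily_collected_balances) / days
    return avg_balance * annual_ecr * days / DAYS_IN_YEAR

# Thirty days of hypothetical collected balances, with larger balances on
# days following weekend deposits, and $1,200 in monthly account fees.
balances = [750_000 if day % 7 in (5, 6) else 400_000 for day in range(30)]
credit = earnings_credit(balances, annual_ecr=0.02)
fees = 1_200.00
print(f"Earnings credit: ${credit:,.2f}")
print(f"Net fees after offset: ${max(fees - credit, 0.0):,.2f}")
```

In this illustration, weekend deposits raise the average collected balance and therefore the earnings credit, which offsets part of the monthly account fees within the existing banking relationship rather than requiring other payment service providers to operate on weekends.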
Payment service providers generally expected that in the short term, increased efficiency in the payment system would come from converting traditionally paper-based payment methods into electronic form. Some payment service providers said that check truncation—the conversion of a paper check into an electronic equivalent that is transmitted in place of the original check—would eliminate the float that transporting checks creates in the check collection process. Officials at one clearing organization said that weekend settlement costs could be lowered if there were an increase in electronic payment instruments and a corresponding decline in paper-based payment instruments within the payment system. Our legal research found no federal law that would specifically prohibit banks, clearing organizations, or other entities from engaging in weekend settlement operations. Some states, however, have laws prohibiting banks from doing business on Sundays or state holidays. OCC has determined that state bank closure laws do not apply to national banks. National banks, therefore, would not be prohibited from engaging in weekend settlement operations. However, in states prohibiting banks from operating on certain days, state-chartered banks would be precluded from conducting such operations on those days unless the closure laws were preempted. The federal financial institution regulators do not specifically regulate the hours of operation of state-chartered institutions, so state closure laws generally have not been preempted with respect to such institutions. We express no opinion regarding the extent to which the Federal Reserve, pursuant to its authority over the payment system, could preempt state closure laws in order to provide for weekend settlement services. It appears that Congress could choose to preempt such laws through legislation. Because they have not been preempted, state closure laws applicable to state banks could interfere with the development of a uniform national weekend settlement system. Our research indicates, however, that a relatively small number of states have Sunday closure laws. In addition to state closure laws, development of a weekend settlement system involves other legal considerations. For example, under the EFAA and the Federal Reserve's Regulation CC, deposited funds must be made available and checks must be returned within time periods based on "business days" and "banking days." The term "business day" is defined as a calendar day other than a legal holiday, Saturday, or Sunday; a "banking day" is a business day on which a bank office conducts substantially all of its banking operations. Even if banks were to conduct settlement operations over the weekend, such operations would not necessarily result in corresponding funds availability and check returns because weekends are not counted as business days. Moreover, the settlement process could be complicated by a lack of uniformity and predictability in bank operations that might exist if banks were to conduct their clearing and settlement operations using different timetables. Other legal considerations include the impact of wage and hour laws and matters of safety and soundness. To the extent bank employees involved in settlement operations are subject to federal and state wage and hour laws, institutions would have to ensure that weekend operations did not run afoul of provisions requiring, among other things, the payment of overtime for work in excess of 40 hours per week. 
Concerning safety and soundness issues, banks would have to ensure that utilizing computer systems and other resources on weekends would not compromise their ability to maintain and update financial and security systems. We received technical comments and corrections on a draft of this report from Treasury and the Federal Reserve that we incorporated, as appropriate. In addition, the Federal Reserve provided written comments in which it agreed that the potential costs of weekend settlement would outweigh the associated benefits. These comments are reprinted in appendix IV. As agreed with your office, we plan no further distribution of this report until 30 days from its issue date unless you publicly release its contents sooner. We will then send copies of this report to the Chairman of the Committee on Financial Services, House of Representatives; the Ranking Minority Member of the Committee on Financial Services, House of Representatives; the Ranking Minority Member of the Subcommittee on Financial Institutions and Consumer Credit, House of Representatives; the Secretary of the Department of the Treasury; and the Chairman of the Board of Governors of the Federal Reserve System. We will make copies available to others on request. In addition, this report is available at no charge on GAO's Web site at http://www.gao.gov. Please contact me or Barbara Keller, Assistant Director, at (202) 512-8678 if you or your staff have any questions concerning this letter. To determine the potential benefits of weekend settlement, we interviewed and requested data from consumer groups, retail representatives, and payment service providers. We focused our study on the clearance and settlement of retail payments made by cash (which requires no clearing and settles instantaneously), checks, and debit and credit cards. We spoke with consumer advocacy groups to obtain the consumer perspective, and we interviewed representatives from industries within the retail sector, including grocery industry and home improvement industry representatives. We focused on the grocery industry because the majority of its sales take place on weekends and primarily involve cash, check, and debit card transactions. To estimate the forgone interest earnings of the grocery industry, we obtained studies from third-party sources, some of which were conducted by industry participants. We have not assessed the quality of the research methodologies used in these studies. We used calendar year 2000 grocery industry sales data from the Progressive Grocer (PG), an industry publication. This was the latest year for which complete information was available. We also obtained grocery industry sales data from the U.S. Census Bureau for comparison purposes. We used the results of a payments study, performed by a large electronic network provider, that tracked the purchasing behavior of 20,000 consumers to determine what percentage of grocery sales are made with cash, check, and debit cards, respectively. We also used the results of a consumer survey on when consumers shop during the week, published in the Progressive Grocer 2001 Annual Report, and short-term interest rate data from the Board of Governors of the Federal Reserve System. Appendix III provides details on our calculation of the forgone interest earnings. We spoke with payment system providers to gain their perspectives on the potential costs and operational issues involving weekend settlement. 
We interviewed officials from several depository institutions, banking and bond market industry representatives, clearing organizations, and private electronic payment network operators. We interviewed officials from various components of the Federal Reserve involved in the provision of payment system services, including cash and check processing, check transportation, ACH services, wholesale payments services, and open market operations. We also spoke with senior staff from the Department of the Treasury about the potential implications of weekend settlement for the Treasury securities market. We obtained information from corporate treasury representatives on the perceived advantages and disadvantages of weekend settlement. Finally, to analyze and compare U.S. and foreign settlement schedules, we obtained information from representatives of central banks in selected foreign countries on the operating schedules of their respective settlement systems. We also obtained information from central banks' Web sites. We focused our analysis of international settlement schedules on countries with relatively modern, industrialized economies in selected geographic areas—specifically, Asia, Europe, North America, Australia, and New Zealand, as depicted in appendix II. We based our analysis of potential legal considerations involving weekend settlement on research of relevant federal and state statutes, regulations, judicial decisions, and other legal databases, and we conducted interviews with banking agency attorneys and representatives of payment system providers. We conducted our work in Washington, D.C., Atlanta, Georgia, and New York, New York, between February and September 2002 in accordance with generally accepted government auditing standards. The wholesale settlement systems covered in appendix II are Fedwire, CHIPS, the Bank of Japan Network (BOJ-NET), the Large Value Transfer System (LVTS), the Trans-European Automated Real-time Gross settlement Express Transfer system (TARGET), the Society for Worldwide Interbank Financial Telecommunications Payment Delivery System (SWIFT-PDS), and the Exchange Settlement Account System (ESAS); the appendix lists each system's days and hours of operation, which generally run Monday through Friday, with limited Saturday hours for some systems. All countries have real-time gross settlement systems except Canada, which has a net-settlement system. The following European Union countries participate in TARGET: Austria, Belgium, Finland, France, Germany, Ireland, Italy, Luxembourg, The Netherlands, Portugal, and Spain. Grocery industry representatives said that because the settlement system is not open on weekends, retailers lose money: the funds they receive on Friday evening, Saturday, and Sunday cannot be credited to their accounts and earn interest before Monday, at the earliest. The calculations in appendix III estimate the amount of interest forgone by the grocery industry on retail transactions made by cash, checks, and credit and debit cards. The calculation measures the upper bound of forgone earnings. It assumes that every store would deposit every dollar and every check at the bank on the day that it is received. It also does not take into account that payments made by retailers, under weekend settlement, would be debited from their accounts earlier, thereby potentially decreasing their interest earnings. 
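As a rough illustration of this upper-bound estimate, the sketch below works through the arithmetic with hypothetical weekend sales figures. The payment-method share, the annual rate, and the number of days the Friday and Saturday proceeds sit idle are assumptions chosen for illustration, not the report's actual inputs; the rate and deposit-timing assumptions the report used are described in the text that follows.

```python
# Rough sketch of the upper-bound forgone-interest estimate described above.
# The sales figures, payment-method share, idle-day counts, and rates below
# are hypothetical assumptions for illustration; they are not the report's data.

DAYS_IN_YEAR = 365

# Assumed number of days Friday and Saturday proceeds sit idle before being
# credited on Monday; Sunday proceeds are excluded, consistent with the
# approach described in the text.
IDLE_DAYS = {"friday": 3, "saturday": 2}

def forgone_interest(weekend_sales, noncredit_share, annual_rate):
    """Upper-bound interest forgone on weekend receipts deposited the day of sale.

    weekend_sales: hypothetical industrywide sales by day, in dollars.
    noncredit_share: assumed share of sales paid by cash, check, or debit card.
    annual_rate: short-term rate used to value the idle balances.
    """
    total = 0.0
    for day, sales in weekend_sales.items():
        total += sales * noncredit_share * annual_rate * IDLE_DAYS.get(day, 0) / DAYS_IN_YEAR
    return total

# Hypothetical example: $1.0 billion of Friday-evening sales and $1.2 billion
# of Saturday sales, 80 percent paid by cash, check, or debit card.
sales = {"friday": 1.0e9, "saturday": 1.2e9}
for rate in (0.05, 0.02):
    print(f"At {rate:.0%}: about ${forgone_interest(sales, 0.80, rate):,.0f} forgone per weekend")
```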
To measure forgone interest earnings, we used the federal funds rate, the rate at which a bank with excess reserves at a Federal Reserve district bank charges other banks that need overnight funds. This is an appropriate measure because the excess grocery funds would be invested in a similar, short-term fashion. The average federal funds rate from January 1, 1998, to January 1, 2002, was approximately 5 percent. The current federal funds rate is approximately 2 percent. We estimated forgone earnings at both of these rates. To gain credit for deposits for a given day, retailers must have deposits collected by approximately 2:00 p.m. on that day. Grocery industry representatives stated that most Friday sales tend to occur after that hour; therefore, we assumed that all proceeds from Friday, Saturday, and Sunday do not get deposited until Monday. However, Sunday proceeds deposited on Monday generally would be processed as quickly as money deposited Monday through Thursday; therefore, we did not include Sunday proceeds as idle balances. In addition to those named above, Tonita W. Gillich, Marc Molino, Robert Pollard, Carl Ramirez, Barbara Roesmann, Nicholas Satriano, Paul Thompson, and John Treanor made key contributions to this report. U.S. General Accounting Office, Payment Systems: Central Bank Roles Vary, but Goals Are the Same, GAO-02-303 (Washington, D.C.: February 25, 2002). U.S. General Accounting Office, Check Relay: Controls in Place Comply With Federal Reserve Guidelines, GAO-02-19 (Washington, D.C.: December 12, 2001). U.S. General Accounting Office, Federal Reserve System: Mandated Report on Potential Conflicts of Interest, GAO-01-160 (Washington, D.C.: November 13, 2000). U.S. General Accounting Office, Retail Payments Issues: Experience With Electronic Check Presentment, GAO/GGD-98-145 (Washington, D.C.: July 14, 1998). U.S. General Accounting Office, Payments, Clearance, and Settlement: A Guide to the Systems, Risks, and Issues, GAO/GGD-97-73 (Washington, D.C.: June 20, 1997).
The U.S. payment system is a large and complex system of people, institutions, rules, and technologies that transfer monetary value and related information. The nation's payment system transfers an estimated $3 trillion each day--nearly one-third of the U.S. gross domestic product. Currently, settlement--the final step in the transfer of ownership involving the physical exchange of payment or securities--occurs only during the business week. Some retailers, however, generate approximately half their weekly sales on weekends--when depository and other financial institutions generally are closed--receiving cash, checks, and electronic payments that are not credited to their accounts until at least the next business day. Weekend settlement of financial transactions would provide small benefits to retailers and consumers, and little, if any, benefit to the economy as a whole. Because payment system actors and processes are interdependent, implementing weekend settlement would require payment service providers that clear and settle retail and wholesale payments to open on weekends, resulting in significantly increased operational costs. Although there are no direct federal prohibitions against weekend settlement, state laws that are not preempted by federal laws or regulations providing for weekend settlement could interfere with development of a uniform, national 7-day settlement system.
For more than 35 years, the federal government has implemented authorities—applicable to various demographic groups and some specific to Hispanics—calling for agencies to ensure equal opportunity in the federal workplace. EEOC and OPM or its predecessor agency, the Civil Service Commission (CSC), have leadership roles in furthering these authorities. Signed in 1969, Executive Order No. 11478, Equal Employment Opportunity in the Federal Government, stated that it is the policy of the U.S. government to provide equal opportunity in federal employment. Later, Congress passed the Equal Employment Opportunity Act of 1972, which extended to federal workers the protections of title VII of the Civil Rights Act of 1964 prohibiting employment discrimination on the basis of race, color, religion, gender, or national origin. This law requires each federal department and agency to prepare plans to maintain an affirmative program of equal employment opportunity and establish training and education programs. Pursuant to this and other authorities, EEOC establishes equal employment program standards, monitors federal agencies' compliance with equal employment opportunity laws and procedures, and reviews and assesses the effectiveness of agencies' equal employment programs. EEOC has carried out its responsibilities by issuing regulations and management directives providing guidance and standards to federal agencies for establishing and maintaining effective programs of equal employment opportunity. Under the merit system principles established by the Civil Service Reform Act of 1978 (CSRA), recruitment should be from qualified individuals from appropriate sources in an endeavor to achieve a work force from all segments of society, and selection and advancement should be determined solely on the basis of relative ability, knowledge and skills, after fair and open competition which assures that all receive equal opportunity. The CSRA also created the Federal Equal Opportunity Recruitment Program (FEORP) to carry out the government's policy to ensure equal employment opportunity. The act required OPM to evaluate and oversee agency programs and issue implementing regulations for the program. These regulations provide that recruitment processes prepare qualifiable applicants (those who have the potential but do not presently meet valid qualification requirements) for job openings through development programs. Programs specific to Hispanics include the 16-Point Program for Spanish-Speaking citizens, established in 1970, which outlined steps agencies should take to ensure equal opportunity in federal employment for Hispanics. In 1997, OPM implemented the 9-Point Plan calling for agencies to recruit greater numbers of qualified Hispanic Americans for federal service and improve their opportunities for management and senior executive positions. More recently, Executive Order No. 13171, Hispanic Employment in the Federal Government, signed in 2000, provides that agencies, among other actions, (1) develop recruiting plans for Hispanics and (2) assess and eliminate any systemic barriers to the effective recruitment and consideration of Hispanics. The order requires OPM to take the lead in promoting diversity to executive agencies and for the director of OPM to establish and chair an Interagency Task Force on Hispanic employment in the federal government to review best practices, provide advice, assess overall executive branch progress, and recommend further actions related to Hispanic representation. 
As an indicator to Congress and the President of the government’s progress toward ensuring equal employment opportunity, both EEOC and OPM, in their oversight roles, analyze and report on governmentwide and agency workforce data. The most recent data show that in September 2005, Hispanics constituted 7.4 percent of the permanent federal workforce while making up 12.6 percent of the CLF. While both EEOC and OPM report these data annually, neither agency has assessed on a governmentwide level the factors contributing to the differences in Hispanic representation between the two workforces. Citizenship and educational attainment had the most effect on the likelihood of Hispanics’ representation in the federal workforce, relative to the nonfederal workforce. Other measurable factors in our statistical model—gender, veteran’s status, race, English proficiency, age, disability status, school attendance (enrolled or not enrolled), employment status (full or part-time), and geography (state where employed)—had a more limited or almost no effect on the likelihood of Hispanics being in the federal workforce. When we analyzed the effect of all the factors simultaneously, we found that, among citizens, Hispanics were 24 percent or 1.24 times more likely than non-Hispanics to be employed in the federal workforce than in the nonfederal workforce. (See app. II for a detailed discussion of the steps we took to conduct our analyses and our results.) Our analysis showed that citizenship had the greatest effect of the factors we analyzed on Hispanics’ representation in the federal workforce. We analyzed the effect of citizenship before analyzing any other individual factor because of long-standing policy and practice to restrict federal government hiring to U.S. citizens and nationals—99.7 percent of federal executive branch employees were U.S. citizens or nationals in 2005. (See app. III for a discussion of the federal government’s policy and practice on the employment of citizens.) Before accounting for the effect of citizenship, Hispanics 18 and older were 30 percent less likely than non-Hispanics to be employed (i.e., represented) in the federal workforce, relative to the nonfederal workforce. However, when we analyzed the likelihood of only citizens 18 and older being employed in the federal workforce, we found that Hispanics were 5 percent less likely than non-Hispanics to be employed in the federal workforce compared to their representation in the nonfederal workforce. Our analysis of 2000 Census data showed that Hispanics had lower citizenship rates than other racial/ethnic groups, with the exception of Asians who had similar rates. In 2000, of those 18 and older in the combined federal and nonfederal CLF, 65 percent of the Hispanics were U.S. citizens compared with 95 percent of blacks, 96 percent of whites, 65 percent of Asians, 87 percent of Hawaiians/Pacific Islanders, and 96 percent of American Indians/Native Alaskans. Additionally, Hispanic immigrants have lower naturalization rates than other immigrant groups. According to the Pew Hispanic Center, 27 percent of the adult foreign-born Hispanic population in the United States were naturalized citizens in 2004 compared with 54 percent of the adult foreign-born non-Hispanic population. Hispanic-serving organizations have undertaken citizenship initiatives. 
For example, the League of United Latin American Citizens (LULAC) encourages legal residents of the United States to become citizens and reports that it conducts a national drive to have those eligible for citizenship apply for and attain citizenship. After citizenship, education had the largest effect on Hispanic representation in the federal workforce. We compared Hispanic and non-Hispanic citizens with similar levels of education. We limited our examination of the effect of education to citizens because citizenship is a basic qualification for most federal employment. As discussed above, among citizens, Hispanics were 5 percent less likely to be employed in the federal government. After accounting for education, Hispanic citizens were 1.16 times or 16 percent more likely than similarly educated non-Hispanic citizens to be in the federal workforce than in the nonfederal workforce. The federal workforce contains a greater percentage of occupations that require higher levels of education than the CLF. EEOC divides occupations in the federal workforce and the CLF into nine categories, including, among others, professionals, operatives, and laborers. For example, in 2000, the year on which EEOC's CLF data are based, occupations in the professional category—those occupations, such as lawyers, engineers, accountants, and registered nurses, requiring either college graduation or experience of such kind and amount as to provide a comparable background—constituted 29 percent of the federal workforce versus 18 percent of the CLF. Conversely, occupations in the operatives (semiskilled workers) and laborers (unskilled workers) categories, which generally do not require high education levels, constituted 3 percent of the federal workforce compared to 16 percent of the CLF. Figure 1 shows the composition of the federal workforce and the CLF by EEOC's occupational categories. Our analyses showed that the likelihood of being a federal worker increased with higher levels of education. A person with some college was 1.7 times more likely to be a federal worker than a person with only a high school diploma, a person with a bachelor's degree was 2.2 times more likely, and a person with more than a bachelor's degree was 2.7 times more likely. OPM reported that in 2004, 42 percent of federal workers had a bachelor's degree or higher. In addition, approximately 60 percent of new permanent hires to the federal government in 2005 had at least some college—20 percent with some college, 23 percent with a bachelor's degree, and 17 percent with more than a bachelor's degree. Our analysis of 2000 Census data showed that regardless of citizenship status, Hispanics overall have lower educational attainment than other groups, with non-U.S. citizens having the lowest levels of educational attainment. Among citizens in the CLF 18 and older, as table 1 shows, Hispanics had a higher percentage of those without a high school diploma—26.4 percent—and a lower percentage of those with a bachelor's degree or higher—15.4 percent—than most other racial/ethnic groups. When noncitizens were included, as table 2 below shows, the proportion of Hispanics with less than a high school diploma increased and the proportion having a bachelor's degree or higher decreased. Educational attainment for Hispanics 18 and older in the CLF who were not citizens was lower compared with those who were U.S. citizens. Table 3 shows that, among Hispanics in the CLF who were not U.S. 
citizens, 62.8 percent had less than a high school diploma while 6.2 percent had a bachelor’s degree or higher. In addition to having lower educational attainment levels than other racial/ethnic groups, there were differences in Hispanics’ educational patterns. For example, Hispanics have enrolled in 2-year colleges at a higher rate than other racial/ethnic groups. According to data reported in the American Council on Education’s Minorities in Higher Education, Twenty-First Annual Status Report, 2003–2004, 59 percent of Hispanics enrolled in postsecondary institutions are enrolled in community colleges, compared to 37 percent of whites, 43 percent of blacks, 41 percent of Asians, and 50 percent of American Indians. In addition, Hispanics are less likely than other groups to complete a bachelor’s degree. According to data from the National Center for Education Statistics’ National Educational Longitudinal Study beginning in 1988, by age 26, 47 percent of white students who had enrolled in postsecondary education had completed a bachelor’s degree compared to 23 percent of Hispanics—lower than other racial/ethnic groups. The federal government and Hispanic-serving organizations have implemented initiatives to address gaps in Hispanics’ educational achievement. In October 2001, Executive Order No. 13230 created the President’s Advisory Commission on Educational Excellence for Hispanic Americans, within the U.S. Department of Education, to examine issues related to the achievement gap between Hispanic Americans and their peers. The commission issued an interim report in September 2002, The Road to a College Diploma: The Complex Reality of Raising Educational Achievement for Hispanics in the United States, and a final report in March 2003, From Risk to Opportunity: Fulfilling the Educational Needs of Hispanic Americans in the 21st Century. The commission’s final report, concluding its work, contained six recommendations, which encompassed the entire education continuum, from early childhood through postsecondary, as well as federal accountability and coordination and research. According to the White House Initiative on Educational Excellence for Hispanic Americans, which provided the staff support and assistance to the commission and continues to work within the Department of Education, it is taking steps to implement the commission’s six recommendations and is working with the Department of Education, other federal agencies, and public and private organizations. In addition to federal government initiatives, Hispanic-serving organizations also have ongoing efforts to improve the educational attainment of Hispanics. According to LULAC, the organization has 16 counseling centers whose mission is to increase educational opportunities and attainment for Hispanic Americans through the development and implementation of programs in Hispanic communities throughout the United States. LULAC also reports that it provides educational counseling, scholarships, mentorships, leadership development, and literacy programs. According to its Web site, the National Council of La Raza (NCLR) works to build and strengthen community-based educational institutions, to improve the quality of instruction for Hispanic students, and to more effectively involve Hispanic families in the education of their children. NCLR reports that its education program services and activities are targeted to over 300 affiliated organizations while its education policy work addresses national issues in public education. 
NCLR also reports that it cochairs the Hispanic Education Coalition, an ad hoc coalition of national organizations dedicated to improving educational opportunities for Latinos living in the United States and Puerto Rico. Other organizations such as the Hispanic College Fund also work to provide college scholarships for Hispanic youth. In their respective oversight roles, both EEOC and OPM report representation levels of racial, ethnic, and gender groups overall and in subsets of the federal workforce and require that agencies conduct analyses of their own workforces. However, the benchmarks that EEOC, OPM, and agencies use to compare federal workforce representation levels to the CLF do not differentiate between citizens and noncitizens, and therefore do not identify how citizenship affects the pool of persons qualified to work for the federal government. Where differences in representation occur, such as within occupations or by grade, agencies are to determine if there are barriers to participation and, if so, develop strategies to address any barriers. OPM provides human resource guidance and resources to agencies to assist agencies in implementing these strategies. In its Annual Report on the Federal Workforce, prepared pursuant to its oversight responsibilities, EEOC provides data on the representation of racial, ethnic, and gender groups, including Hispanics, compared to the CLF overall, by senior pay and average grade level, and for selected agencies with 500 or more employees. To make its comparisons, EEOC uses the Census 2000 Special EEO File, which provides workforce data on the CLF. The Census 2000 Special EEO File is a special tabulation constructed by the U.S. Census Bureau according to the specifications of, and under a reimbursable agreement with, a consortium of agencies— EEOC, OPM, DOJ, and the Department of Labor (DOL). The Special EEO File, which has been prepared every 10 years since 1970 based on the Decennial Census, serves as the primary external benchmark to compare the racial, ethnic, and gender composition of each employer’s workforce to its available labor market. The datasets on the Census 2000 Special EEO Tabulation present data on race and ethnicity cross-tabulated by other variables such as detailed occupations, occupational groups, gender, worksite geography, residence geography, education, age, and industry. Data are available at the national level and by state, metropolitan area, county, and place. However, the Census 2000 Special EEO File data does not include citizenship data. According to a Census Bureau official, at DOJ’s request, the Census 2000 Special EEO File specifications originally included citizenship data for metropolitan statistical areas in four states for persons in the CLF 20 to 34 years of age, with 4 or more years of high school, by race and ethnicity. Because of narrow data specifications, concerns were raised about the privacy of Census respondents and the request was withdrawn. The consortium and Census are planning the 2010 Special EEO File, which will be based on 5 years (2005–2009) of American Community Survey (ACS) data—which is replacing the long form of the Decennial Census. Subsequent to the completion of our audit work, EEOC sent a letter requesting that the Census Bureau review the possibility of including citizenship data in the 2010 Special EEO File. 
According to the Census Bureau, citizenship data can be included but at an additional cost to consortium members based on the extent of data requested (e.g., geographic or occupational specificity) and the amount of staff and programming resources needed to produce the requested data. In addition, the Census Bureau said that the extent of geographic or occupational specificity of citizenship data could be limited based on the risk of disclosing the identity of a respondent. Census Bureau officials also noted that because the 2010 Special EEO File will be based on a 5-year roll-up of annual ACS data, current plans are to produce an updated Special EEO File every 5 years. OPM also presents data on Hispanic representation in its reports to the President under Executive Order No. 13171 and to Congress under the FEORP. In its Annual Report to the President on Hispanic Employment in the Federal Government, prepared pursuant to Executive Order No. 13171, and in Statistical Information on Hispanic Employment in Federal Agencies, OPM has included data on Hispanic representation overall, for each agency, by pay plan/group, and among new hires. The FEORP report compares overall representation levels in the federal workforce to the CLF and provides representation levels by pay group, in occupational categories, and within each agency. OPM also uses the Census 2000 Special EEO File when comparing representation of women and minorities within agencies to the relevant CLF (the labor force comprising only the particular occupations for the particular agency) for its FEORP report. However, in making comparisons of the demographic composition of the overall federal workforce to the CLF for the FEORP and the statistical reports on Hispanic employment, OPM has used the Current Population Survey (CPS). By using the CPS, OPM reports more-current CLF data than EEOC's and reflects the changing composition of the CLF. At the time of our review, OPM was benchmarking to the September 2005 CPS, which showed Hispanic representation in the CLF to be 12.6 percent. In its Annual Report on the Federal Workforce, EEOC uses the 2000 Special EEO File as its benchmark, which shows Hispanic representation in the CLF to be 10.7 percent. Although using the CPS enables OPM to report more-current data on Hispanic representation in the CLF, OPM does not distinguish between citizens and noncitizens in its analysis of the CPS data. Figure 2 shows Hispanic representation in the permanent federal workforce compared to the CLF with and without noncitizens from 1994 to 2005, based on data from the CPS and OPM. These data show how citizenship affects the pool of Hispanics eligible for federal employment and that, when only citizens are considered in the CLF, Hispanic representation in both the federal workforce and CLF is more comparable. EEOC's Management Directive 715 (MD-715) provides guidance and standards to federal agencies for establishing and maintaining effective equal employment opportunity programs, including a framework for agencies to determine whether barriers to equal employment opportunity exist and to identify and develop strategies to mitigate the barriers to participation. EEOC defines barriers as agency policies, principles, or practices that limit or tend to limit employment opportunities for members of a particular gender, race, or ethnic background, or based on an individual's disability status. EEOC requires agencies to report the results of their analyses annually. 
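To make this kind of benchmark comparison concrete, the sketch below flags occupations in which a hypothetical agency's representation of a group falls short of the corresponding CLF benchmark, the kind of result that would prompt the further analysis described next. The occupations and percentages are invented for illustration and are not EEOC, OPM, or agency figures.

```python
# Illustrative sketch of an MD-715-style benchmark comparison. The occupations
# and percentages are hypothetical and are not EEOC, OPM, or agency figures.

# (occupation, group's share of the agency's workforce, group's share of the relevant CLF)
BENCHMARKS = [
    ("Program analyst", 0.062, 0.071),
    ("Information technology specialist", 0.055, 0.048),
    ("Contract specialist", 0.081, 0.069),
    ("Human resources specialist", 0.049, 0.066),
]

def flag_for_barrier_analysis(rows, parity=1.0):
    """Return occupations where agency representation falls below the CLF
    benchmark, along with the agency-to-CLF ratio (1.0 = parity)."""
    flagged = []
    for occupation, agency_share, clf_share in rows:
        ratio = agency_share / clf_share
        if ratio < parity:
            flagged.append((occupation, ratio))
    return flagged

for occupation, ratio in flag_for_barrier_analysis(BENCHMARKS):
    print(f"{occupation}: agency representation is {ratio:.0%} of the CLF benchmark")
```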
The initial step is for an agency to analyze its workforce data with designated benchmarks. As part of this analysis, in addition to comparing the overall workforce to the CLF, EEOC instructs agencies to compare major (mission-related and heavily populated) occupations to the CLF in the appropriate geographic area in order to get a more accurate picture of where differences in representation may exist and to guide further analysis. Agencies may use the Census 2000 Special EEO File and the Census 2000 EEO Data Tool, which allows agencies to tailor the Special EEO File data in accordance with EEOC instructions. In their analyses, agencies may find that Hispanic representation in some of their major occupations is higher than in similar occupations in the CLF, but lower in others. Similarly, our review of data on the 47 occupations with 10,000 or more federal employees showed that Hispanic representation was higher in the 2005 federal workforce than the 2000 CLF in 22 of those occupations and lower in 25. (See app. IV.) EEOC also instructs agencies to analyze workforce data by grade level, applicants, new hires, separations, promotions, career development programs, and awards to identify where there may be barriers to participation. With respect to grade level, our review of data on Hispanic representation in the federal workforce showed that Hispanics are more highly represented in the lower grade levels than in higher grade levels (see app. IV). Our review was based on descriptive data and did not take into account citizenship, education, or other factors that can affect an individual’s placement in the federal government. When numerical measures indicate low representation rates, EEOC instructs that agencies conduct further inquiry to identify and examine the factors that contributed to the situation revealed by the data. Below is an example from EEOC’s MD-715 instructions of such an analysis to determine the existence of limits or barriers to participation. An agency has uncovered a lack of Black women in its program analyst occupation at the grade 13 level and above. However, below the grade 13 level the program analyst occupation is quite diverse, including a significant number of Black females. Further examination of the matter reveals that several years ago the agency instituted a requirement that program analysts hold a Masters of Business Administration (MBA) degree in order to be promoted to the grade 13 level or above. Few internal candidates, and none of the Black female program analysts employed by the agency, hold an MBA. Therefore, the agency was recruiting higher level program analysts from a local business school with a student population comprised of primarily White males. Over time, program analysts at the grade 13 and above did not reflect the racial diversity of the program analysts at the lower grade levels. First, the agency should re-visit the issue of whether the skill set represented by an MBA is available by some alternative means such as years of work experience in certain areas. This experience might be substituted for holding an MBA in rendering an applicant qualified for consideration for a higher-graded position. If it is determined that the agency’s requirement for an MBA is in fact job-related and consistent with business necessity, the agency should consider whether other alternatives exist which will have less impact on a particular group. Most obviously, the agency could recruit MBAs from other schools with more diverse student populations. 
In addition, the agency might consider steps it could take to facilitate its own lower-graded employees obtaining MBAs. Under OPM's FEORP regulations and guidance under the Human Capital Accountability and Assessment Framework (HCAAF), agencies are also to analyze their workforces. Under FEORP, agencies are required to determine where representation levels for covered groups are lower than the CLF and take steps to address them. Agencies are also required to submit annual FEORP reports to OPM in the form prescribed by OPM. These have included (1) data on employee participation in agencywide and governmentwide career development programs broken out by race, national origin, gender, and grade level and (2) a narrative report identifying areas where the agencies had been most successful in recruiting, hiring, and formal training of minorities and women, and how they were able to achieve those results. The HCAAF, according to OPM, fuses human capital management with merit system principles and other civil service laws, rules, and regulations and consists of five human capital systems that together provide a consistent, comprehensive representation of human capital management for the federal government. According to recently proposed regulations, each system consists of standards against which agencies can assess their management of human capital and related metrics. The HCAAF practitioners guide outlines suggested performance indicators reflecting effective practices in meeting these standards. One suggested performance indicator, for example, is that agencies have systems that track and analyze workforce diversity trends in mission-critical occupations in order to continually adjust the agency's recruitment and retention strategy to its current state of need.
OPM Assistance to Agencies
OPM provides assistance to agencies in recruiting Hispanics as part of broad-based recruitment efforts and developing employees onboard through (1) conducting governmentwide outreach and recruitment initiatives; (2) providing information on student employment programs; (3) disseminating information on leading practices; and (4) providing guidance on training and development of employees. In 2003 and 2004, OPM held recruitment fairs in cities across the country, including those with high concentrations of Hispanics, such as Los Angeles, San Antonio, Tucson, Miami, and New York. Additionally, in 2005, OPM participated in 25 career fairs sponsored by others including LULAC, the National Association of Hispanic Federal Executives, and the University of New Mexico. Under its Veteran Invitational Program, launched in 2004, OPM has conducted career fairs, visited military installations and veterans' medical facilities, and provided information on employment opportunities for veterans on its Web site. In 2004, OPM signed a Memorandum of Understanding with the American GI-Forum—an organization that works on behalf of Hispanic veterans—in support of Executive Order No. 13171. OPM has also taken steps to improve the USAJOBS Web site, the federal government's official source for jobs and employment information. As part of its Recruitment–One Stop Initiative, launched in 2003, OPM reports that the Web site contains improved search capability options, a more user-friendly resume builder, and a streamlined job application process. 
USAJOBS also links to OPM’s Student Jobs Web site, which contains listings of federal student employment positions, and e-scholar, a listing of federal educational scholarships, fellowships, grants, internships, apprenticeships, and cooperative programs offered by federal departments and agencies and partnering organizations. The USAJOBS Web site provides information in both English and Spanish. According to OPM, student employment programs can help agencies recruit and develop talented employees to support agency missions; ensure that they can meet their professional, technical, and administrative needs; and achieve a diverse, quality workforce. OPM assists agencies on the use of student employment programs by issuing regulations and providing technical assistance through its Web site. There are three federal student employment hiring programs that can lead to noncompetitive conversion to permanent employment—the Student Career Experience Program (SCEP), Federal Career Intern Program (FCIP), and Presidential Management Fellows Program (PMF). Under SCEP, agencies may hire students as interns while they are pursuing high school diplomas or equivalent vocational or technical certificates, and associate’s, bachelor’s, graduate, or professional degrees. Upon completion of their degree program and SCEP requirements, agencies may noncompetitively convert participants to permanent employment. Recently revised SCEP regulations allow agencies to credit up to 320 hours of the 640 hours of career-related work experience required for conversion from active duty military service or from comparable nonfederal internship, work-study, or student volunteer programs where work is performed at federal agencies. Comparable work experience can include those internships sponsored by the Hispanic Association of Colleges and Universities’ (HACU) National Internship Program. The regulations also permit agencies to waive up to 320 SCEP hours of required work experience for students who have demonstrated exceptional job performance and outstanding academic achievement. Under FCIP, agencies may appoint individuals to 2-year internships in entry-level positions that would lend themselves to internal formal training/developmental programs. After 2 years, if program requirements are met, an agency can noncompetitively convert them to competitive civil service status. OPM issued final regulations on FCIP in 2005. The Presidential Management Fellows (PMF) Program is a 2-year internship program open to students who have completed graduate degree programs, been nominated by their school officials, and passed OPM’s assessment. In 2005, OPM issued final regulations implementing Executive Order No. 13318, issued in 2003, removing the cap on the number of PMF appointments, providing agencies greater flexibility in promoting fellows, and establishing training and development requirements. Other organizations have also realized that various intern programs provide valuable recruitment sources. According to the Partnership for Public Service, a nonpartisan organization dedicated to revitalizing public service, internship programs such as SCEP provide agencies a pool of diverse, tested, and easy-to-hire potential employees. Yet, the Partnership found that very few are drawn from the pool into permanent federal jobs. On the basis of the Partnership’s analysis of the rates at which SCEP program participants are converted to permanent federal employment, agencies may not be realizing the full potential of this program. 
The Partnership reported that in 2001, agencies converted 17 percent of SCEP participants to permanent federal employment, and in 2000, 11 percent. In contrast, the Partnership’s report stated that more than 35 percent of interns in the private sector accepted jobs with the companies for which they interned. While OPM has reported data on SCEP participants governmentwide by racial/ethnic group in its Fact Book and on SCEP new hires by agency in its statistical reports on Hispanic employment, OPM does not report demographic data on SCEP participants by agency and on FCIP and PMF participants governmentwide or by agency, or rates of conversion to permanent positions for SCEP, FCIP, and PMF either governmentwide or by agency. According to OPM, data on conversions to permanent employment by racial/ethnic group for SCEP and FCIP are available from the Central Personnel Data File (CPDF). Currently, OPM does not analyze these data. Similar data are available for the PMF. Analyzing data on conversion rates could provide OPM with valuable information on agencies that appear to be maximizing their use of these programs as well as those that are not fully utilizing them. With this information, OPM could then provide assistance to agencies to help them incorporate student employment programs into their strategic workforce planning as they seek to recruit and develop talented employees to support agency missions; ensure that they can meet their professional, technical, and administrative needs; and achieve a diverse, quality workforce. Such information from OPM could also enable agencies to perform more complete assessments of their programs. OPM disseminates leading-practices information through the reports it issues pursuant to FEORP and Executive Order No. 13171 and through the Interagency Task Force on Hispanic employment, chaired by the Director of OPM. In its annual FEORP reports, OPM presents a summary of agency practices on workforce planning, recruitment and outreach, mentoring, and career development based on the information agencies submit to OPM in their annual FEORP reports. In its Annual Report to the President on Hispanic Employment, OPM presents what agencies report as effective recruitment, outreach, career development, and accountability practices. To prepare the reports pursuant to the order, OPM annually asks agencies to submit information concerning steps taken related to these areas. OPM also shares information on leading practices at meetings of the Interagency Task Force. Through this guidance, OPM promotes broad outreach to all groups and encourages agencies to establish relationships with colleges and universities as a means to attract qualified candidates. Once onboard, training and development programs can assist employees in further developing skills and helping them qualify for higher-level positions. OPM provides guidance to agencies on its training and development Web page and has issued regulations on training and development tools available to agencies, such as academic degree and other employee training programs. 
In 2004, OPM finalized regulations on a training provision of the Chief Human Capital Officers Act of 2002 (Title XIII of the Homeland Security Act), which expanded agency authority to pay or reimburse employees for the cost of academic degree training when such training contributes significantly to meeting an identified agency training need, resolving an identified agency staffing problem, or accomplishing goals in an agency's human capital management strategic plan. The five agencies in our review have taken a variety of approaches to address issues concerning Hispanic representation in their workforces, particularly in competing for a limited number of qualified candidates and addressing Hispanic representation at higher levels. At NASA, where Hispanics represented 5.3 percent of the workforce in 2005, one of the major occupations is aerospace engineering. There, Hispanics represented 5.0 percent of aerospace engineers, according to EEOC's Annual Report on the Federal Workforce, 2004. In the CLF, Hispanics represented 4.6 percent of aerospace engineers, according to the Census 2000 Special EEO File. NASA said it must compete with the private sector for the pool of Hispanics qualified for aerospace engineering positions, a pool that is often attracted by more lucrative private sector employment opportunities in more preferable locations. FNS, where Hispanics represented 7 percent of the workforce in 2005, reports that its ability to successfully recruit Hispanics was affected by low Hispanic representation in areas where some of its regional offices are located. Similarly, the USAF, with 7.4 percent of its workforce Hispanic in 2005, reported difficulties in recruiting Hispanics at Wright-Patterson Air Force Base in Dayton, Ohio, where Hispanics represent approximately 2 percent of the local CLF, according to the USAF. Moreover, the USAF attributes, in part, the decrease in overall Hispanic representation levels (from 7.7 percent in 2000 to 7.4 percent in 2005) to the closure of Air Force bases in the southwestern United States, where Hispanics were more highly represented than at other bases. Finally, agencies reported that Hispanic representation in mid- and upper-level positions was an issue they were addressing. While both SSA, where Hispanics represented 12.5 percent of the workforce in 2005, and the SBA, where Hispanics represented 10.8 percent in 2005, reported success recruiting Hispanics for lower-level positions, each noted that Hispanic representation in certain mid- or upper-level positions was lower. The agencies reported using a variety of approaches that enhanced their ability to recruit and develop Hispanic employees. These included outreach to the Hispanic community and Hispanic-serving organizations, including participating in conferences sponsored by LULAC and others; recruiting at Hispanic-Serving Institutions—defined by statute as an eligible institution having an undergraduate enrollment of at least 25 percent Hispanic full-time students and at least 50 percent of the institution's Hispanic students qualifying as low income; sponsoring interns through the HACU National Internship Program; using student employment programs such as SCEP and FCIP; advertising in both English- and Spanish-language Hispanic media; and career development and training programs. Below we describe some of the specific approaches agencies in our study used to recruit and provide training and development opportunities for Hispanics. 
While data on the outcomes are limited and we have not assessed the effectiveness of these programs, the agencies reported that these approaches have enhanced their ability to recruit and develop qualified Hispanics. NASA—Part of NASA's strategy to recruit Hispanics centers on increasing educational attainment, beginning in kindergarten and continuing into college and graduate school, with the goal of attracting students into the NASA workforce and aerospace community. NASA centers sponsor, and its employees participate in, mentoring, tutoring, and other programs to encourage Hispanic and other students to pursue careers in science, engineering, technology, and math. For example, the Marshall Space Flight Center in Huntsville, Alabama, annually sponsors a Hispanic Youth Conference attended by students from across Alabama that includes workshops on leadership development and pursuing NASA career fields and provides opportunities to establish mentoring relationships. NASA also provides grants to fund educational support programs, including in locations where there are high concentrations of Hispanics. For example, the Ames Research Center in Moffett Field, California, provided a grant for the development and implementation of a K-12 technology-awareness program designed to expose students to NASA and higher education through competitive team activities based on key aeronautic concepts. The program has been implemented in schools throughout California that have a high percentage of Hispanic students. Various centers also participate in high school and college internship programs, such as the Summer High School Apprenticeship Research Program, in which high school students spend 8 weeks working with engineers on scientific, engineering, mathematical, and technical projects. NASA centers also provide scholarships and research grants. For example, Ames provides scholarships to Hispanic college students at a community college, and the Dryden Flight Research Center sponsors fellowships for students in engineering and science to continue their graduate studies. In addition, NASA has recently developed the Motivating Undergraduates in Science and Technology scholarship program designed to stimulate a continued interest in science, technology, engineering, and mathematics. USAF—To reach potentially qualified Hispanics from all areas of the country, the USAF outreach strategy focuses on partnering and improving working relationships with Hispanic-serving organizations at the national, regional, and local levels. At the national level, the USAF has established relationships with professional, educational, and broad-based Hispanic-serving organizations. For example, it signed a memorandum of understanding with LULAC agreeing to collaborate on, among other things, increasing USAF career opportunities. Through the Department of Defense partnership with HACU, the USAF participates in a national working group that meets semiannually to develop initiatives to expand recruitment at Hispanic-Serving Institutions. At the local and regional levels, the USAF has a variety of outreach efforts that involve both providing information to, and gaining feedback from, the Hispanic community. It works with organizations to educate potential employees on the application process. For example, Kirtland Air Force Base in New Mexico has sponsored "train the trainer" workshops with area organizations, high schools, and colleges and universities. 
The USAF also participates in programs working directly with local students, such as serving as mentors for Hispanic students. In addition, the USAF regularly provides vacancy announcements to, and has ongoing dialogues with, local Hispanic community organizations.
Use of Student Hiring Authorities
NASA—During fiscal year 2004, NASA implemented the corporate college recruitment initiative using FCIP hiring authority to recruit individuals to mission-critical positions. As part of this strategy, NASA participates in recruitment events at colleges and universities and conferences around the country, which it selects based on academic programs, diversity of attendee population, or involvement in NASA research. For each recruitment site, it invites academic institutions within reasonable geographical proximity, allowing it to maximize opportunities to reach students at Hispanic-Serving Institutions. In fiscal year 2004, 15 Hispanic-Serving Institutions participated from Arizona, California, Florida, New Mexico, New York, Puerto Rico, and Texas, which included universities with well-established engineering, science, and technology curricula. Prior to each event, NASA publishes event-specific vacancies and encourages students to apply in advance in order to create a pool of applicants from which to schedule interviews at the site. NASA reported that it was most successful in competing for top talent and filling critical competency positions at the earliest possible time when it extended job offers at the recruitment site or within 30 days after the conclusion of the recruitment visit. USAF—The USAF uses student employment programs to attract Hispanics and other qualified applicants for positions ranging from those requiring training at vocational-technical schools to those requiring graduate-level education. The USAF—which employs approximately half of the federal government's civilian aircraft maintenance workers—has implemented the "Grow Your Own" aircraft maintenance program at three of its Texas bases. In partnership with vocational-technical schools, the program includes both on-the-job training and classroom education. It provides the USAF with a pool of trained candidates to replace retiring federal employees and a vehicle to increase Hispanic representation. Students are initially appointed through SCEP, and upon completion of the educational program and 640 hours of career-related work, students may be converted to permanent employment within 120 days without further competition. Using FCIP authority, the USAF hires recent college graduates into its PALACE Acquire and Copper Cap internship programs. The Copper Cap program is designed to train college graduates as contract specialists by assigning them to work with professional contracting officers. The PALACE Acquire program fills a variety of positions in approximately 20 career fields, including logistics, civilian personnel, scientists and engineers, criminal investigator, intelligence specialists, public affairs, and education specialists. Participants may be promoted in 1-year intervals up to a certain level based on satisfactory or successful performance and are eligible for student loan repayment and tuition assistance for graduate school. SBA—The SBA's District Director Candidate Development Program (DDCDP) is designed to recruit and develop a diverse group of highly qualified and trained managers at the General Schedule grade 13, 14, and 15 levels to fill district director positions on a noncompetitive basis as they become vacant. 
At the SBA, district director positions are key managerial career positions responsible for providing agency services to the small business community. The DDCDP is a 6- to 18-month development program, and candidates who are competitively selected for, and successfully complete, the program are eligible for noncompetitive selection for a period of 3 years from the time they complete the program. FNS—Since 2000, FNS has sponsored the Leadership Institute, which is a 15-month full-time leadership training program. The program focuses on five core competencies: leading change, leading people, achieving results, business acumen, and building coalitions/communications. Participants, who are competitively selected from grades 11–14, attend core seminars on such topics as leading teams, problem solving, and decision making and participate in individual and team projects. As of February 2006, there were 98 graduates from five classes. SSA—SSA sponsors national, headquarters, and regional career development programs for employees in grades 5 to 15. At the national level, the Leadership Development Program is an 18-month program designed to provide employees in grades 9 to 12 with developmental experiences through placement in designated trainee positions. The Advanced Leadership Program is an 18-month program designed to provide employees in grades 13 and 14 experience to become future agency leaders through rotational assignments, training, and other developmental experiences. Upon successful completion of these programs, participants receive a 3-year Certificate of Eligibility for a onetime, noncompetitive promotion, used at the discretion of SSA management. SSA also has a 12- to 18-month Senior Executive Service Candidate Development Program to prepare individuals in grade 15 or equivalent to assume senior executive-level responsibilities and develop their executive core qualifications. For employees in grades 5 through 8, SSA offers career development programs in its Office of Central Operations based in Baltimore and Office of Disability Adjudication and Review, which has regional and local hearing offices throughout the country. These, as well as other regional and headquarters component career development programs, are modeled after its three national programs for which employees are competitively selected. USAF—The USAF provides a variety of opportunities for current employees to increase their educational attainment through tuition assistance and degree completion programs, in-residence and distance-learning educational programs, and long-term academic programs. Its tuition assistance program covers mission-related coursework for designated positions toward degrees at a higher level than the employee has already attained. Employees attend courses on a voluntary off-duty basis. Degree completion programs offer selected employees in designated career fields the opportunity to complete their degree during duty hours on a full- or part-time basis. In addition, the USAF provides opportunities for employees to earn graduate degrees from its academic institutions, such as the Air Force Institute of Technology. Moreover, its professional military education programs—such as the Squadron Officer College and Air War College—are available for civilian employees depending upon grade level. These programs are offered in residence and by correspondence. Both provide opportunities for participants to earn credits toward degree programs. 
The USAF has obtained recommendations on college credit for these and other courses and training programs from the American Council on Education's (ACE) College Credit Recommendation Service. ACE is an association of approximately 1,800 accredited, degree-granting colleges and universities as well as higher-education-related associations, organizations, and corporations. It reviews training programs and courses offered by government agencies, corporations, and other training providers at the providers' request and makes recommendations concerning the type of academic credit, if any, appropriate for the program. Approximately 1,200 accredited colleges or universities have agreed to consider ACE recommendations for courses, apprenticeship programs, and examinations, including community colleges and universities such as the University of California at Berkeley, George Washington University, and Indiana University, Bloomington. ACE has also recommended credit for various courses from NASA's Academy of Program and Project Leadership that may be used toward a graduate degree. In response to our inquiry, the agencies included in our review reported three primary lessons important to the success of their efforts—commitment of agency leadership, taking a strategic workforce planning approach, and working with the Hispanic community: Commitment of agency leadership: Agencies reported that their programs were most successful when agency leadership was committed to addressing Hispanic representation. As we found in our prior work on diversity management, leaders and managers within organizations are primarily responsible for the success of diversity management because they must provide the visibility and commit the time and necessary resources. For example, SSA included diversity as part of its strategic and human capital plans and developed an agencywide marketing and recruitment strategy to address the representation of any underrepresented group, including Hispanics. Additionally, it tracks the outcomes of its recruitment and hiring initiatives. Strategic workforce planning: Agencies also recognized the importance of taking a strategic workforce planning approach in their efforts to recruit a diverse workforce. Strategic workforce planning addresses two critical needs: (1) aligning an organization's human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. For example, NASA's recruitment strategy focuses on both developing the pipeline to fill its mission-critical occupations by encouraging students to pursue degrees in science, technology, engineering, and math and attracting graduates into the NASA workforce and aerospace community. Additionally, SSA developed a business case for bilingual public contact employees in its field offices and bicultural employees in policy-making staff positions in its regional offices and headquarters components. Similarly, FNS said it began to realize the need for bilingual professionals and, as a result, has advertised positions requiring fluency in Spanish. Working with Hispanic communities: Finally, agencies told us that it is important to work with Hispanic communities to understand one another's needs and find mutually beneficial solutions. The USAF at Kirtland Air Force Base in Albuquerque, New Mexico, has taken steps in this regard. 
In this geographic area, where Hispanics represented 41.6 percent of the population according to the 2000 Census, the base has an alliance with the local public schools and colleges and universities to ensure that it is providing career and mentoring opportunities for area students and that schools are producing a pipeline of qualified students to meet base needs. Base representatives also work with the Hispanic Chamber of Commerce on issues pertaining to Hispanic communities. Providing federal agencies with benchmarks that consider citizenship would allow agencies to get a more accurate picture of differences in representation levels and more effectively identify and address barriers to equal employment opportunity. Current CLF benchmarks do not include citizenship; however, two annual official data sources—the CPS and ACS—are available that would allow EEOC and OPM to separate citizens and noncitizens in analyzing federal workforce representation by racial, ethnic, and gender groups. Additionally, agencies analyze their workforces using the Census Special EEO Files prepared at the direction of the consortium of agencies—EEOC, OPM, DOJ, and DOL. Although the 2000 Special EEO File did not contain citizenship data, EEOC and DOJ have expressed interest in and the need for including such data in the 2010 Special EEO File but must address issues including cost and privacy. As part of their barrier analyses, where representation differences between occupations in their workforces and similar ones in the CLF exist, agencies are to determine whether the qualifications established for those occupations are appropriate. Additionally, agencies are required to determine whether they have considered all sources of qualified individuals. OPM currently provides guidance to federal agencies on recruiting at colleges and universities. Because the majority of Hispanics enrolled in postsecondary education attend community colleges and vocational schools, identifying effective outreach practices to such schools could help those agencies that have occupations requiring the education and training provided at these institutions to meet workforce needs and further equal employment opportunity. OPM already shares effective recruiting practices through its Annual Report to the President under Executive Order No. 13171. OPM has recognized the importance of student employment programs, in particular SCEP, in providing a unique opportunity for agencies to recruit students from high school through graduate school, depending on agencies' needs. These programs not only serve as a mechanism to address future federal workforce needs, but they also offer students an incentive to complete their education. OPM has provided data on SCEP new hires in its statistical reports on Hispanic employment and SCEP participants governmentwide in its Fact Book. While data on conversion rates for SCEP and FCIP are available from the CPDF, OPM does not analyze these data by agency or governmentwide. Such analyses could provide OPM with valuable information to help agencies maximize their use of these programs as part of their overall strategic workforce planning. Additionally, such information from OPM could enable agencies to perform more complete assessments of their programs. While federal agencies are taking steps to address Hispanic representation issues, as an employer, the federal government is limited in its ability to address the effects of citizenship and education on Hispanic representation throughout the federal workforce. 
As these are multifaceted issues, developing strategies to address them will require partnerships between Hispanic-serving organizations, federal agencies, state and local governments, educational institutions, and other interest groups. We recommend that the Director of OPM and the Chair of EEOC do the following: Include citizenship in their annual comparisons of representation in the federal workforce to the CLF. To help ensure consistency, both agencies should agree upon a single source of citizenship data. Work with other Consortium agencies and the Census Bureau to incorporate citizenship data into the 2010 Census Special EEO File and incorporate such data into analyses under MD-715, FEORP, and Executive Order No. 13171. We recommend that the Director of OPM do the following: Assess the extent of participation by racial and ethnic groups in student employment programs—SCEP, FCIP, and PMF—to help agencies maximize the use of these programs in their overall strategic workforce plan. This effort should include: analyzing participation in, and conversion rates to, permanent positions from these programs and reporting governmentwide and agency-specific demographic data for the different racial and ethnic groups reflecting participation in, and rates of conversion to, permanent employment from these programs. These data are in addition to the data already reported on these programs in its reports, such as in its statistical reports on Hispanic employment and in the Fact Book. We provided the Chair of EEOC, the Director of OPM, the Attorney General, and the Secretary of Commerce with a draft of this report for their review and comment. In an e-mail, DOJ said it had no comments. In a written response, the Department of Commerce said it had no comments. (See app. V.) In its written comments, EEOC said it found the report to be an extremely interesting and useful addition to the ongoing examination of Hispanic representation in the federal workforce and indicated its plans to use the report as a resource. EEOC agreed that citizenship data are an important aspect that appears applicable not only to Hispanics, but to other census population groups as well. In this regard, EEOC has requested that the Census Bureau review the possibility of including citizenship data in the 2010 Special EEO File. The availability of citizenship data would enhance the analyses required under MD-715. However, EEOC did not address our recommendation that it include citizenship data in its annual comparisons of representation in the federal workforce to the CLF, which can be based on currently available CPS or ACS data. EEOC also said that while citizenship data are a useful benchmark for broad trending, more refined analyses are necessary, including analyses of applicant pools and participation rates for specific occupations. EEOC also said that analysis of the on-board federal workforce, such as analysis of promotions and participation in career development, employee recognition, and awards programs, is important in assessing equality of opportunity. We agree with EEOC that more refined analyses are necessary to assess equality of opportunity. EEOC’s comments are reprinted in appendix VI. OPM provided minor technical comments via e-mail, which we incorporated as appropriate, but did not otherwise comment on the report or our recommendations. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. 
We will then send copies of this report to the Chair of EEOC, the Director of OPM, the Attorney General, the Secretary of Commerce, and other interested parties. Copies will be made available to others upon request. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9490. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made major contributions to this report are listed in appendix VII. Our objectives were to (1) identify and analyze the factors that are affecting Hispanic representation in the federal workforce, (2) examine the steps that the Equal Employment Opportunity Commission (EEOC) and the Office of Personnel Management (OPM), in their oversight roles, are taking related to Hispanic representation, and (3) illustrate the efforts within selected federal agencies related to Hispanic representation. To answer our first objective, we interviewed representatives from Hispanic-serving and other relevant organizations, and federal agency officials; reviewed previous studies; and obtained the opinions of experts identified by the National Academy of Sciences to identify possible factors that affect Hispanic representation in the federal workforce. Next, we researched available data sources that included sufficiently detailed data on Hispanic ethnicity, employer (federal or nonfederal), and the identified factors that could be reliably measured. We concluded that the 2000 Decennial Census Public Use Microdata Sample (PUMS) 5-Percent File was the best data source for our purposes. We conducted bivariate and multivariate analyses of data from the 2000 Decennial Census PUMS to determine the effect of the identified factors that could be reliably measured in this dataset on Hispanic representation in the federal workforce. Our methodology and results of these analyses are more specifically described in appendix II. We obtained opinions on our methodology from EEOC, OPM, the Census Bureau, and the Department of Justice (DOJ). The experts identified by the National Academy of Sciences also reviewed and provided comments on both our methodology for conducting these analyses and our preliminary results. Our analyses are not designed to prove or disprove discrimination in a court of law like analyses conducted by EEOC or DOJ, nor do they establish whether the differences would require corrective action by any federal agency. Rather, our analyses use a standard statistical method designed to provide information at an aggregate level about factors that explain levels of Hispanic representation in the federal workforce, relative to the nonfederal workforce. To determine steps EEOC and OPM have taken related to Hispanic representation, we reviewed the statutes, regulations, executive orders, policies, guidance, program information, and reports issued related to Hispanic representation in the federal government. At EEOC, we met with officials and representatives, including from its Office of Federal Operations, Office of General Counsel—Research and Analytic Services, and Office of Legal Counsel. At OPM, we met with officials, including from the Human Capital Leadership and Merit System Accountability Division, Strategic Human Resources Policy Division, and the Office of General Counsel. 
To illustrate the efforts of federal agencies, we selected five Chief Financial Officer (CFO) Act agencies or their subagencies of different sizes, geographic locations, concentrations of jobs by grade level, and OPM occupational categories. They were the United States Air Force, Food and Nutrition Service of the U.S. Department of Agriculture, National Aeronautics and Space Administration, Small Business Administration, and Social Security Administration. We provided written questions and document requests to agency officials and reviewed the responses received from each of the five agencies. We also had discussions at each agency with officials who oversee offices and programs related to Hispanic representation. We also reviewed documents provided by, and spoke with officials from, the White House Initiative on Educational Excellence for Hispanic Americans. In addition, we analyzed Hispanic representation in the federal workforce governmentwide (1) compared to the Civilian Labor Force (CLF), including and excluding noncitizens; (2) in federal occupations compared to similar occupations in the CLF; and (3) by pay plan/grade. To compare Hispanic representation in the federal workforce governmentwide to the CLF, we used data from 1994 to 2005. For the federal workforce, we used data reported by OPM on the permanent federal workforce. For the CLF, which includes both permanent and nonpermanent employees, we analyzed the March supplements to the Current Population Survey (CPS)—the 1994–2002 Annual Demographic Files and the 2003–2005 Annual Social and Economic Supplements (ASEC). To compare Hispanic representation in federal occupations to similar occupations in the CLF, we selected the occupations that in September 2004 had 10,000 or more federal employees, 47 occupations in total (see app. IV). For this analysis, we included both permanent and nonpermanent federal employees for comparability to the CLF. For Hispanic representation in these occupations in the federal workforce, we analyzed the Central Personnel Data File (CPDF) for 2000–2005. For Hispanic representation in these occupations in the CLF, we analyzed the Census 2000 Special EEO File, which was created from the 2000 Census. To determine occupations that are similar in the CLF and the federal workforce, we used the crosswalk for 2000 provided to us by EEOC to match federal occupations with similar occupations in the CLF. To examine Hispanic representation by grade governmentwide, we analyzed 1990–2005 CPDF data for permanent and nonpermanent employees in groupings of General Schedule grades 1–4, 5–8, and 9–12, separately for grades 13, 14, and 15, and separately for those in the Senior Executive Service, in Senior Level/Senior Technical positions, and under the Executive Schedule. (See app. IV.) We believe the CPDF, CPS, and Census 2000 Special EEO File are sufficiently reliable for the purposes of this study. Regarding the CPDF, we have previously reported that governmentwide data from the CPDF for the key variables in this study—race/Hispanic origin, occupation, and pay plan/grade—were 97 percent or more accurate. We believe the CPDF data are sufficiently reliable for purposes of this study. Regarding the CPS, to assess the reliability of its data, we reviewed the technical documentation for these data files, including the coding and definition of variables of interest, the procedures for handling missing data, coding checks, and imputation procedures for missing data. 
We also interviewed Bureau of Labor Statistics (BLS) staff about how federal employment and race/ethnicity are reported and imputed and to determine how this would affect our analyses. We considered the response rate, allocation rate (or the rate at which responses are imputed for unanswered questions), and size of confidence intervals. Because the CPS had a very high response rate, a low allocation rate, and narrow confidence intervals, the 1994–2005 CPS data were sufficiently reliable. Regarding the Census 2000 Special EEO File, although we and others have cited a number of limitations of Census 2000 data, we believe these data are sufficiently reliable for the purposes of this study (see app. II for a full description of what we did to assess the reliability of Census data). We conducted our work from October 2004 to June 2006 in accordance with generally accepted government auditing standards. This appendix describes our analyses of factors that are affecting Hispanic representation in the federal workforce. We included those factors, identified by representatives of Hispanic-serving organizations, agency officials, outside experts, and previous studies, that could be reliably measured in the data set we used. These factors were citizenship, gender, education, veteran's status, race, English proficiency, age, disability status, in-school status, employment status (full- or part-time), and geography (state where employed). To assess the effect of these factors on Hispanic representation in the federal workforce, we analyzed how these factors affect the likelihood of Hispanics and non-Hispanics being employed in the federal workforce as opposed to the nonfederal workforce. We used logistic regression models to estimate likelihood of federal employment. This is a widely accepted method of analyzing dichotomous or binomial outcomes—like being in the federal versus nonfederal workforce—when the interest is in determining the effects of multiple factors that may be related to one another. In developing the model, we solicited the opinions of experts identified by the National Academy of Sciences as well as officials from OPM, EEOC, DOJ, and the Census Bureau. We also sought the experts' views on the preliminary results of our analysis. We analyzed data from the 2000 Decennial Census Public Use Microdata Sample (PUMS) 5-Percent File because it (1) included variables needed for our analyses and (2) had the largest sample size of the datasets containing the variables in our analyses. To confirm our results, we also analyzed data from the 2004 American Community Survey (ACS) because it contains more recent data. In this appendix, however, we present only the results using the PUMS data because its larger sample size makes it less prone to sampling error than the ACS data. To assess the reliability of the PUMS and ACS, we reviewed the technical documentation for these data files, including the coding and definition of variables of interest, the procedures for handling missing data, coding checks, and imputation procedures for missing data. We also interviewed Census Bureau staff about how federal employment and race/ethnicity are reported and imputed and to determine how this would affect our analyses. We considered the response rate, allocation rate (or the rate at which responses are imputed for unanswered questions), and size of confidence intervals. 
Because PUMS and ACS both had a very high response rate, a low allocation rate, and narrow confidence intervals, the 2000 PUMS and 2004 ACS were sufficiently reliable for our objectives. The PUMS and ACS both contain self-reported data on whether someone is part of the CLF. The Bureau of Labor Statistics (BLS) defines the CLF as including persons 16 years of age and older residing in the 50 states and the District of Columbia, who are not institutionalized (i.e., in penal and mental facilities, or homes for the aged) and who are not on active duty in the Armed Forces. For purposes of our logistic regression models, we divided the CLF into two groups—the federal workforce and the nonfederal workforce. Further, we restricted our analyses to individuals 18 and older because, with a few exceptions, 18 years is the minimum age for federal employment and our analysis of the government's official personnel data—the Central Personnel Data File (CPDF)—showed that in September 2004 individuals under 18 years of age constituted only 0.10 percent of the federal workforce. We used bivariate and multivariate logistic regression models to estimate the likelihood of Hispanics and non-Hispanics being in the federal workforce relative to being in the nonfederal workforce. There were four steps to these analyses. 1. For the first step, we used bivariate logistic regression models to estimate the difference between Hispanics and non-Hispanics in the likelihood of being employed in the federal workforce, relative to the nonfederal workforce, before controlling for any of the identified factors. 2. For the second step, we used bivariate logistic regression models to determine how our estimated difference in likelihood of Hispanics and non-Hispanics being employed in the federal workforce relative to the nonfederal workforce was affected by U.S. citizenship. We estimated the difference in likelihood between Hispanic citizens and non-Hispanic citizens being employed in the federal workforce relative to the nonfederal workforce and compared it to the difference in likelihood of federal employment among both citizens and noncitizens combined, obtained in step 1. We analyzed the effect of citizenship before all other factors because the federal government has a general policy and practice of restricting hiring to U.S. citizens and nationals. 3. For the third step, we restricted our analyses to citizens only and used a series of multivariate logistic regression models, controlling for each factor one at a time, to estimate how each of the other factors affected the difference in the likelihood of Hispanic citizens and non-Hispanic citizens being in the federal workforce relative to the nonfederal workforce. Because of the large effect of education on the difference between Hispanics and non-Hispanics that was revealed in this step, we ran a bivariate model that estimated the effect of education among all individuals—citizens and noncitizens combined—on the likelihood of being in the federal workforce relative to the nonfederal workforce. 4. In the fourth step, we used a multivariate logistic regression model that estimated the difference in the likelihood of Hispanic and non-Hispanic citizens being employed in the federal workforce versus the nonfederal workforce after controlling for all other factors simultaneously. 
Among citizens, we controlled simultaneously for gender, education, veteran's status, race, English proficiency, age, disability status, school attendance (enrolled or not enrolled), employment status (full- or part-time), and geography (state where employed). In our analyses, we express differences in the likelihoods of being in the federal workforce rather than the nonfederal workforce using odds ratios. An odds ratio is generally defined as the ratio of the odds of an event occurring in one group compared to the odds of it occurring in another group—the reference or comparison group. In our analyses, the event of interest to us was employment in the federal workforce versus employment in the nonfederal workforce. We computed odds ratios to indicate the difference between Hispanics and non-Hispanics in the likelihood of being employed in the federal workforce (1) before controlling for any of the other factors, (2) after controlling for all of the factors one at a time, and (3) after controlling for all factors simultaneously. In our analyses, an odds ratio of 1.0 would indicate that Hispanics and non-Hispanics were equally likely to be employed in the federal workforce as in the nonfederal workforce, or that the ratio of Hispanics to non-Hispanics was the same in the two workforces. An odds ratio of less than 1.0 would imply that Hispanics were less likely than non-Hispanics to be in the federal workforce as opposed to the nonfederal workforce, while an odds ratio greater than 1.0 would imply that Hispanics were more likely. For example, an odds ratio of 0.5 would indicate that Hispanics were only half or 50 percent as likely as non-Hispanics to be in the federal workforce as opposed to the nonfederal workforce. An odds ratio of 2.0 would indicate that Hispanics were twice as likely as non-Hispanics to be in the federal workforce as opposed to the nonfederal workforce. We also use odds ratios to indicate the effects of the other factors we considered (i.e., education, race, etc.), and they can be similarly interpreted. Given the large sample size of the PUMS file, all of the results reported are statistically significant at the 95 percent confidence level. Thus, we concentrated our analysis on the size or magnitude of the odds ratios—that is, how much smaller or larger than 1.0 they were—rather than their statistical significance. We initially estimated the difference in the likelihood of Hispanics and non-Hispanics being employed in the federal workforce versus the nonfederal workforce before controlling for any of the identified factors. Table 4 shows the numbers, odds, and odds ratio derived from the PUMS to estimate the likelihood of Hispanics and non-Hispanics being employed in the federal workforce relative to being in the nonfederal workforce. The odds ratio of 0.698 indicates that the odds of Hispanics being in the federal workforce rather than the nonfederal workforce were about 30 percent lower than the corresponding odds for non-Hispanics. We calculated the odds ratio of 0.698 by first deriving the odds of being a federal employee rather than a nonfederal employee for both Hispanics and non-Hispanics. For Hispanics, we divided the number of Hispanic federal employees by the number of Hispanic nonfederal employees, or 219,893/15,228,215, which equals 0.0144. This implies that the odds of being a federal employee among Hispanics were 0.0144; that is, there were 14.4 Hispanics who were federal employees for every 1,000 Hispanics who were nonfederal employees. 
For non-Hispanics, by comparison, the odds were 2,438,122/117,921,113 = 0.0207, which means that there were 20.7 non-Hispanics who were federal employees for every 1,000 non-Hispanics who were nonfederal employees. The odds ratio, or ratio of these two odds, which is 0.0144/0.0207 = 0.698, indicates that the odds of being a federal employee (i.e., represented in the federal workforce) were lower for Hispanics than non-Hispanics, by a factor of 0.698. We examined the effect of citizenship on the difference in the likelihood of Hispanics and non-Hispanics being employed in the federal workforce, relative to the nonfederal workforce, before examining the effect of all other factors because the federal government has a general policy and practice of restricting hiring to U.S. citizens and nationals. Table 5 shows the odds and odds ratio that are obtained when citizens only are used to estimate the likelihood of Hispanics and non-Hispanics being employed in the federal workforce relative to being in the nonfederal workforce. When these same odds and odds ratio were calculated for citizens only, the odds were similar (0.0200 and 0.0210), and the odds ratio of 0.953 implies that the odds of being a federal employee, among Hispanic citizens, were lower than for non-Hispanic citizens by about 5 percent. Comparing this to the odds ratio indicating the difference in the likelihood of Hispanics and non-Hispanics being employed in the federal workforce among both citizens and noncitizens—0.698—indicates that citizenship accounts for much of the difference in the likelihood of federal employment between Hispanics and non-Hispanics, since the difference in the odds changes from about 30 percent to roughly 5 percent. To determine the effect of the remaining factors on the likelihood of Hispanics and non-Hispanics being in the federal workforce relative to being in the nonfederal workforce, we restricted our analysis to U.S. citizens because the federal government has a general policy and practice of hiring only U.S. citizens. We then controlled for each of the other factors one at a time among U.S. citizens in a series of multivariate logistic regression models. Table 6 shows the odds ratios representing the difference between Hispanics and non-Hispanics in the likelihood of being employed in the federal workforce relative to the nonfederal workforce, when the other factors are controlled one at a time. The effect that each factor has on the difference between Hispanics and non-Hispanics in the likelihood of being in the federal workforce as opposed to the nonfederal workforce can be discerned by comparing each of the odds ratios in table 6 to 0.95—the odds ratio indicating the likelihood of Hispanic and non-Hispanic citizens being employed in the federal workforce before controlling for the other factors. For example, as table 6 shows, controlling for differences in education—or estimating the effect of being Hispanic on the likelihood of being in the federal workforce after allowing for the differences in education between Hispanics and non-Hispanics—changes the odds ratio from 0.95 to 1.16. That is, among similarly educated workers, Hispanic citizens were more likely than non-Hispanic citizens, by a factor of 1.16, or 16 percent, to be in the federal workforce as opposed to the nonfederal workforce. Controlling for race, veteran status, and to a lesser extent age also slightly changed the estimated difference between Hispanics and non-Hispanics in the likelihood of being a federal employee. 
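Restating the figures already given above (the counts from table 4 and the citizens-only odds from table 5), the arithmetic behind the two odds ratios is:

```latex
% Overall (table 4 figures):
\mathrm{odds}_{\text{Hispanic}} = \frac{219{,}893}{15{,}228{,}215} \approx 0.0144, \quad
\mathrm{odds}_{\text{non-Hispanic}} = \frac{2{,}438{,}122}{117{,}921{,}113} \approx 0.0207, \quad
\mathrm{OR} = \frac{0.0144}{0.0207} \approx 0.698

% Citizens only (table 5 figures):
\mathrm{OR}_{\text{citizens}} = \frac{0.0200}{0.0210} \approx 0.95
```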
Because of the large effect of education on the difference between Hispanics and non-Hispanics, we also analyzed the effect of education among all individuals. The odds ratios indicating the differences in the likelihood of being in the federal workforce between workers who have some college, a bachelor's degree, and more than a bachelor's degree, relative to workers with a high school diploma, were 1.74, 2.15, and 2.69, respectively. In other words, each of those three categories of workers was almost twice as likely (1.74) or more than twice as likely (2.15 and 2.69) to be employed in the federal workforce relative to the nonfederal workforce as workers with only a high school diploma. Persons with less than a high school diploma, by contrast, were less than half as likely as persons with a high school diploma to be employed in the federal workforce relative to the nonfederal workforce. When we estimated the difference in the likelihood of being in the federal workforce between Hispanics and non-Hispanics using a multivariate model that accounted for all of the factors simultaneously among citizens, we found that the odds of being a federal rather than a nonfederal employee were higher for Hispanic citizens than for non-Hispanic citizens, by a factor of 1.24. That is, when all other factors we examined were controlled, the odds of being in the federal workforce relative to the nonfederal workforce were 24 percent higher for Hispanics than non-Hispanics. In response to comments from expert reviewers on a preliminary draft of these results, we conducted additional analyses to determine whether (1) our results were affected by the method we used to control for citizenship, (2) there was any difference between the effect of education for Hispanics and non-Hispanics, and (3) Hispanics' odds of federal employment were affected by changing the reference group from all non-Hispanics to white non-Hispanics. First, we analyzed whether controlling for citizenship by excluding noncitizens produced different results than controlling for citizenship by including both groups in our model and introducing a control variable for citizenship status. We used a multivariate logistic regression model controlling for all the factors simultaneously among both citizens and noncitizens and controlled for citizenship status using a dummy variable (rather than excluding noncitizens). When we controlled for citizenship status in this way, the odds ratio indicating the difference between Hispanics and non-Hispanics in the likelihood of being in the federal workforce was 1.22, not appreciably different from the odds ratio of 1.24 reported above. Second, we analyzed whether the effect of education on being employed in the federal workforce was different for Hispanics and non-Hispanics. We used an interaction model, which allowed us to assess whether the effect of education on the odds of federal employment varied between Hispanics and non-Hispanics. This model revealed that while education affected the odds of federal employment for both Hispanics and non-Hispanics, the effect of education was generally more pronounced for Hispanics than non-Hispanics. For example, Hispanics with a bachelor's degree were 2.27 times more likely to be employed in the federal workforce than Hispanics with a high school diploma. Among non-Hispanics, those with a bachelor's degree were 2.04 times more likely than those with only a high school diploma to be employed in the federal workforce. 
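The models described in this appendix, the bivariate model, the citizens-only models with controls added, and the education interaction model just discussed, can be fit with standard statistical software. The sketch below is a minimal illustration using the statsmodels formula interface; it is not GAO's actual code, the column names are assumptions, and it runs on synthetic data, so its output will not reproduce the report's estimates. Exponentiated coefficients from each fitted model are the odds ratios discussed in the text.

```python
# Minimal illustration of the logistic regression approach described in this
# appendix. Not GAO's code: column names are assumptions, and the data are
# synthetic, so the fitted odds ratios will not match the report's figures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20000
df = pd.DataFrame({
    "hispanic": rng.integers(0, 2, n),                        # 1 = Hispanic
    "citizen": rng.integers(0, 2, n),                         # 1 = U.S. citizen
    "education": rng.choice(["hs", "some_college", "ba", "grad"], n),
})
# Synthetic outcome: federal employment is rare, rises with education, and
# (in this toy setup) is limited to citizens.
edu_lift = df["education"].map({"hs": 0.0, "some_college": 0.55, "ba": 0.77, "grad": 0.99})
prob = 1.0 / (1.0 + np.exp(-(-4.0 + edu_lift)))
df["federal"] = rng.binomial(1, prob) * df["citizen"]

# Step 1: bivariate model, no controls. exp(coefficient) is the odds ratio
# for Hispanics relative to non-Hispanics (0.698 in the report's PUMS data).
m1 = smf.logit("federal ~ hispanic", data=df).fit(disp=False)
print("unadjusted OR:", np.exp(m1.params["hispanic"]))

# Steps 2-3: restrict to citizens, then add controls one at a time
# (education shown here).
citizens = df[df["citizen"] == 1]
m2 = smf.logit("federal ~ hispanic + C(education)", data=citizens).fit(disp=False)
print("education-adjusted OR:", np.exp(m2.params["hispanic"]))

# Interaction model: does education's effect on the odds of federal
# employment differ between Hispanics and non-Hispanics?
m3 = smf.logit("federal ~ hispanic * C(education)", data=citizens).fit(disp=False)
print(np.exp(m3.params))  # exponentiated coefficients are odds ratios
```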
Third, to analyze whether Hispanics' odds of federal employment were affected by changing the reference group from all non-Hispanics to white non-Hispanics, we used dummy variables for race and ethnicity when comparing Hispanics, black non-Hispanics, and other nonwhite non-Hispanics to white non-Hispanics, as opposed to comparing Hispanics to non-Hispanics, when controlling for all other factors. Including dummy variables for race and ethnicity yielded an odds ratio distinguishing Hispanics from white non-Hispanics of 1.55, which is greater than the odds ratio of 1.24 distinguishing Hispanics and non-Hispanics. The greater odds ratio resulted from black non-Hispanics and other nonwhite non-Hispanics—who were 1.82 and 1.89 times more likely to be employed in the federal workforce than white non-Hispanics—being taken out of the reference category. We did not analyze the effect of the interaction between race and Hispanic ethnicity; that is, comparing odds of federal employment among white Hispanics, black Hispanics, and other Hispanics, because of differences in the reporting of race between Hispanics and non-Hispanics. Due to limitations in the data and the methods we used, we did not include in our analyses some variables that were identified during the course of our research that could potentially affect Hispanic representation in the federal workforce. We did not analyze whether discrimination against or attitudes towards Hispanics or any other group affected representation in either the federal or nonfederal workforces because, using our data sources, it was not possible to conduct such an analysis. We did not analyze Hispanic subgroup data because of concerns we expressed in our prior work and those expressed by the Census Bureau and outside researchers. Additionally, some factors identified were not asked about on the Census, and we could not identify an adequate proxy suitable for our methodology; we cannot say how, or if, these factors would affect the results of our analyses. Variables for which we could not control include experience in a particular occupation, number of years naturalized U.S. citizens have been citizens of the United States, and an individual's preference for employment in either the federal or nonfederal workforce. Additionally, we did not control for Standard Metropolitan Statistical Area or other geographical units smaller than states because these would result in sample sizes too small to control for the full range of factors. For foreign-born respondents, we did not control for years since arrival in the United States because the data were insufficiently reliable. Finally, we could not control for how unemployment affects the likelihood of being in the federal workforce because unemployment perfectly predicts not being in the federal workforce; however, unemployed individuals are considered part of the CLF. Additionally, with respect to race—one of the factors for which we controlled—some have suggested that many Hispanics view race differently than non-Hispanics and consider their ethnicity as a separate racial category. Such differences in the perception of race could affect our estimates of the effect of race on the likelihood of Hispanics and non-Hispanics being employed in the federal workforce relative to the nonfederal workforce. According to the U.S. 
Census Bureau, among Hispanics in the 2000 Decennial Census, 47.9 percent reported themselves as white, 2.0 percent as black, 1.2 percent as American Indian/Alaska Native, 0.3 percent as Asian, 0.1 percent as Native Hawaiian and Other Pacific Islander, 6.3 percent as two or more races, and 42.2 percent as some other race. Among non-Hispanics, 79.1 percent reported themselves as white, 13.8 percent as black, 0.8 percent as American Indian/Alaska Native, 4.1 percent as Asian, 0.1 percent as Native Hawaiian and Other Pacific Islander, 1.9 percent as two or more races, and 0.2 percent as some other race. Some studies suggest that the difference in the percentage of "other race" responses between Hispanics and non-Hispanics—42.2 and 0.2—reflects many Hispanics' view that their race is Hispanic, rather than one of the racial categories listed in the Census. Additionally, while assessing the reliability of the PUMS for our analysis, we found that the number of federal employees reflected in the PUMS was larger than the number reported in either OPM's Central Personnel Data File (CPDF) as of September 2000 or OPM's report Employment and Trends (March 2000). In the PUMS there were about 2,658,000 federal employees (excluding the Postal Service) compared to slightly less than 2 million reported by OPM for 2000 in either of its sources. There was also a similar discrepancy in 2004, with nearly 2 million federal employees reported by OPM (CPDF as of September 2004, Employment and Trends, March 2004) compared to about 2,757,000 identified in the ACS. Although we were unable to fully account for these differences, we did identify some known sources for the lower numbers of federal employees reported by OPM. Neither of OPM's data sources includes (1) federal employees working for the intelligence agencies such as the Central Intelligence Agency, National Security Agency, National Geospatial-Intelligence Agency, and Defense Intelligence Agency; (2) most personnel on federal installations paid from non-appropriated funds, such as workers in military commissaries; and (3) those in the Commissioned Corps of the Public Health Service and National Oceanic and Atmospheric Administration. In addition, OPM's CPDF data do not include judicial and some legislative branch employees and employees of the Tennessee Valley Authority. Another potential source of the difference in the number of federal employees is that employees of federal contractors who work at federal agencies or on military installations might have responded on the Census that they were employees of the federal government. Several experts who commented on our methodology and results expressed a similar view. To assess whether our results were affected by the difference in the number of federal employees in the PUMS and CPDF datasets, we substituted the federal employees from the CPDF for the federal employees in the PUMS. Our analysis, using the combined CPDF and PUMS data, confirmed that citizenship and education accounted for the difference in likelihood of Hispanics and non-Hispanics being employed in the federal workforce. Given these results, along with the large sample size of PUMS, the high response rate to the Census 2000 long form that is the basis for PUMS, and the quality control measures Census uses in collecting the PUMS data, we believe our reported results are sound and the conclusions we reached are reasonable. Like reported federal employment in PUMS, reports of citizenship in self-reporting surveys may be inflated. 
As we lacked benchmark data to assess the potential effect of misreporting of citizenship, we cannot say if or how the results would be affected by such misreports. Additionally, because we used data from a single census, we cannot make statements regarding future trends in the estimates. For example, changes in the number or geographic distribution of Hispanics might affect the likelihood of federal employment in future censuses. Finally, our results are limited and intended only to reflect the effect of selected factors on Hispanic employment in the overall federal workforce and cannot be applied to individual occupations, grades, agencies, or other subsets of the federal government. We attempted to analyze the effect of selected factors on the federal occupations that employed 10,000 or more federal employees in 2004 and similar occupations in the nonfederal workforce, but we found that our results were not reliable. First, sample size within job categories is much smaller and subject to much greater sampling variability than in the full data set. Sample sizes this small preclude controlling for the full range of factors considered in our model. Second, PUMS data and our models cannot account for specific skills and certification, which might be particularly relevant for a given occupation. For example, the education categories do not distinguish between a bachelor's degree in chemistry and one in English literature. Third, we could not account for the specific career paths required for certain occupations or those that can only be obtained on the job. For example, job seekers with a background in policing may be more qualified to be a federal officer. Fourth, we could not account for individuals who may be qualified for a given occupation but who hold a different one. For example, some of the individuals coded as accountants may be qualified to be financial specialists, a separate occupation. Restricting the sample to financial specialists might result in an understated pool of qualified workers. Various authorities have restricted hiring for most federal employment to U.S. citizens and nationals. Under Executive Order No. 11935, only U.S. citizens and nationals may be appointed into competitive service positions. In 2005, 72 percent of executive branch employees were in the competitive service. In rare cases, noncitizens may be appointed when necessary to promote the efficiency of the service, such as when an agency is unable to find a qualified citizen to fill a position (5 C.F.R. §7.3(c) and §338.101). Such appointments, however, must also be in compliance with other laws on federal hiring of noncitizens. For decades, Congress has passed an annual ban on the use of appropriated funds for compensating federal employees who are not U.S. citizens or nationals. Broader in scope than the Executive Order, the appropriation ban applies to all compensable positions within the federal government, not just to competitive service positions. There are exceptions to this ban that permit the compensation of non-U.S. citizens who are from certain countries or under special circumstances. For example, South Vietnamese, Cambodian, or Laotian refugees paroled in the United States after January 1, 1975, are excluded from the ban. Also, citizens from Ireland, Israel, or the Republic of the Philippines, or nationals of countries "allied with the United States in a current defense effort" are excluded from coverage of the appropriation ban. 
Even though the appropriation ban may not apply under a particular circumstance, the hiring of a noncitizen may nevertheless be prohibited because the position is within the competitive service and covered by the Executive Order ban. Congress has excluded some agencies (or certain types of positions within some agencies) from the restrictions on hiring or compensating noncitizens. For example, the Department of Defense is excluded from restrictions on employment and payment of noncitizens. In addition to the contact named above, Belva M. Martin, Assistant Director; Carl S. Barden; Jeffrey A. Bass; Benjamin A. Bolitzer; Karin K. Fangman; Anthony P. Lofaro; Anna Maria Ortiz; Rebecca Shea; Douglas M. Sloane; Tamara F. Stenzel; and Gregory H. Wilmoth made major contributions to this report.
Hispanic representation in the federal workforce has historically been lower than in the Civilian Labor Force (CLF). Understanding factors affecting representation is important to developing and maintaining a high-quality and inclusive workforce. In this report, GAO identifies and analyzes factors affecting Hispanic representation in the federal workforce, examines oversight roles of EEOC and OPM, and provides illustrations of selected federal agencies' efforts with respect to Hispanic representation. GAO constructed a multivariate logistic regression model, with advice from experts, to determine how factors affected the likelihood of Hispanics and non-Hispanics being in the federal versus nonfederal workforce. GAO's analyses are not intended to and do not show the existence or absence of discrimination in the federal workforce. U.S. citizenship and educational attainment had the greatest effect, of the measurable factors GAO identified, on Hispanic representation in the federal workforce. GAO's statistical model showed that when accounting for citizenship, required for most federal employment, Hispanics were nearly as likely as non-Hispanics to be employed in the federal workforce, relative to the nonfederal workforce (the portion of the CLF excluding federal employees). In addition, the federal workforce has a greater proportion of occupations that require higher levels of education than the CLF. When GAO compared citizens with similar levels of education, Hispanics were more likely than non-Hispanics to be employed in the federal workforce relative to the nonfederal workforce. Other factors in the model, including age, gender, race, veteran's status, English proficiency, and geography (state where employed), had a more limited or almost no effect on the likelihood of Hispanics being in the federal workforce. In addition to reporting and comparing representation levels overall and in subsets of the federal workforce to the CLF, EEOC and OPM require that agencies analyze their own workforces. However, the CLF benchmarks of representation that EEOC, OPM, and the agencies use do not differentiate between citizens and noncitizens, and therefore do not identify how citizenship affects the pool of persons qualified to work for the federal government. Where these analyses identify differences in representation, EEOC, for example, requires agencies to determine if there are barriers to participation and develop strategies to address them. OPM provides resources and guidance to assist agencies in implementing human capital strategies. Through these efforts, OPM has promoted the use of student employment programs as a source of qualified candidates. Analyzing agency use of these programs, including the extent to which agencies convert participants to permanent employment, could provide OPM with valuable information to assist agencies in maximizing the use of these programs in their strategic workforce planning. The agencies GAO reviewed use a variety of approaches to address Hispanic representation, including recruiting at colleges and universities with large Hispanic populations, publicizing employment opportunities in Hispanic media, reaching out to Hispanic communities and Hispanic-serving organizations, and using student employment, internship, career development, and training programs. For example, the U.S. 
Air Force partners with vocational-technical schools to develop aircraft maintenance technicians, and staff at selected National Aeronautics and Space Administration facilities mentor and tutor students to encourage careers in science, technology, engineering, and math.
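The multivariate logistic regression approach described above can be illustrated with a brief sketch. The code below is hypothetical: the data file, column names, and model terms are placeholders chosen to mirror the factors GAO describes (ethnicity, citizenship, education, age, gender, race, veteran status, English proficiency, and state), not GAO's actual specification or data.

# Illustrative sketch only: models the odds of federal (versus nonfederal)
# employment as a function of worker characteristics. All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

workers = pd.read_csv("clf_sample.csv")  # hypothetical file: one row per worker in the CLF

# Outcome 'federal' is 1 for federal employees, 0 for the nonfederal workforce.
model = smf.logit(
    "federal ~ hispanic + citizen + C(education) + age + female + "
    "C(race) + veteran + english_proficient + C(state)",
    data=workers,
)
results = model.fit()

# Odds ratios near 1.0 for 'hispanic', once citizenship and education are held
# constant, would indicate Hispanics are about as likely as non-Hispanics to be
# in the federal workforce relative to the nonfederal workforce.
print(np.exp(results.params).round(2))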
The government acquisition landscape was reformed by several legislative changes in the 1990s, such as the Clinger-Cohen Act of 1996 and the Government Management Reform Act of 1994. The Clinger-Cohen Act authorized creation of GWACs, which are typically multiple-award contracts for information technology that allow an indefinite quantity of goods or services (within specified limits) to be furnished during a fixed period, with deliveries scheduled through orders with the contractor. The providing agency awards the contract, and other agencies order from it. OMB was authorized by the Clinger-Cohen Act to designate agency heads as executive agents for GWACs. Some agencies had already established information technology contracts prior to the OMB designation. However, according to agency officials, the OMB designation is beneficial to them because it enables them to provide a streamlined contracting process, it creates opportunities to leverage the buying power of customer agencies, and it helps them market their contracting services. Table 1 shows the year in which agencies received OMB’s designation. The Government Management Reform Act of 1994 authorized the Director of OMB, in consultation with congressional committees, to designate six franchise fund pilots that would operate as fully self-supporting business-like entities within the federal government to compete for the delivery of common administrative support services to federal customers. Franchise fund programs provide administrative services such as contracting, systems operation, and payroll processing, in addition to information technology. Interior’s program, GovWorks, provides contracting services for a wide range of goods and services. The Schedules program, part of GSA’s Federal Supply Service, provides federal agencies with a streamlined process to obtain commonly used products and services at prices associated with volume buying. Information technology is the biggest business line in the Schedules program. Interagency purchases of information technology from the Schedules program exceed those made from all GWAC programs combined. GWACs, franchise fund pilot programs, and the Schedules program charge fees for services with the intent to recover costs. Fees are based on known costs, estimates of future costs and revenues, and consideration of the prices charged by the competition for similar services. Figure 1 is an illustrative depiction of the factors that agencies consider when setting fees. A detailed description of each agency’s program, financial results, fee structure, and services appears in appendixes VII through XIII. All of the programs we reviewed except the Commerce and Transportation GWACs reported revenue in excess of costs for one or more fiscal years between 1999 and 2001. Table 2 shows reported earnings based on financial statements for the contract programs. Starting in 1999, OMB required agencies with GWACs to identify, account for, and recover fully allocated actual costs in accordance with federal financial accounting standards. Actual costs include direct costs, such as labor and materials, and indirect costs, such as rent and support services. However, agencies do not consistently report revenues and costs in accordance with OMB’s guidance. They have developed their own approaches to accounting and to reporting program costs, and these approaches are evolving as the agencies make periodic changes. OMB requires each GWAC agency to submit a semi-annual report of its activities.
However, OMB has not required annual financial summaries of program results that would include a description of the agencies’ indirect cost allocation methodologies and provide an entire year’s worth of information on program results. Accordingly, OMB was unaware that not all agencies are reporting revenues and costs in accordance with its guidance. Further, while GSA identifies, allocates, and reports actual costs for both its GWAC program and the Schedules program, other agencies’ records are not as complete. We found instances of incomplete identification and allocation of indirect costs, partial reporting of program results, and overstated indirect costs, as shown in the examples below. Without more complete information on the costs of interagency contract services, there is no assurance that fees accurately reflect costs. NASA does not include any costs for rent, utilities, contract support, or program management in the account that summarizes GWAC costs. Further, NASA components do not pay a fee for using the GWAC because of an agencywide practice of not charging fees to internal users of NASA’s own contracts. Consequently, both the costs recorded in the GWAC account and GWAC revenues are understated. NASA officials noted that NASA is making an in-kind contribution to the program by not charging administrative costs, and that this contribution is sufficient to ensure that external customer fees are not subsidizing NASA’s own use of the GWAC program. However, NASA provided us only a rough analysis, prepared in 1999, of the costs and potential revenues involved. NASA stated that it intends to periodically reassess its financial contribution to the GWAC program. NIH’s GWAC financial results do not include some indirect costs for support services provided by the NIH Office of the Director, such as acquisition policy, budget services, and equal opportunity programs. In addition, the fiscal year 2001 financial results, prepared by NIH’s financial office, reported GWAC earnings of $57,837, an understatement due to two factors. First, reported revenues from NIH’s internal customers were not combined with revenues from external customers. If internal and external revenues had been combined as one line item, reported earnings would have increased to $268,219. Second, the program was overcharged by $729,870 for indirect costs, including rent and utilities, because of an accounting error. NIH officials informed us that corrective actions have been taken on both problems for fiscal year 2002. However, NIH officials do not plan to identify or allocate additional Office of the Director’s costs, because they do not believe it would be cost-effective to do so. Transportation’s GWAC operates within the Transportation Administrative Service Center and is allocated a portion of the center’s indirect costs. Indirect costs allocated to the program have fluctuated substantially from year to year. Such fluctuations significantly impact reported program operating results. For example, the GWAC’s indirect costs jumped by more than 90 percent in fiscal year 2001, because the indirect cost allocation was based on an estimated GWAC sales volume that was not realized. This allocation was not adjusted at the end of the year to reflect actual sales. If actual sales had been used, the indirect costs allocated to the GWAC would have been about $600,000 lower and would have substantially reduced the program’s reported loss of about $1 million that year. 
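To make the Transportation example concrete, the sketch below uses hypothetical figures to show how allocating a service center's indirect costs on the basis of estimated, rather than actual, sales volume can overstate a program's costs when the estimate is never trued up. The dollar amounts are invented for illustration and are not TASC's actual figures, although they are chosen to produce a difference of roughly the magnitude described above.

# Hypothetical illustration of sales-based indirect cost allocation.
def allocate_indirect(center_indirect_costs, program_sales, total_center_sales):
    """Allocate shared indirect costs in proportion to the program's share of sales."""
    return center_indirect_costs * (program_sales / total_center_sales)

center_indirect_costs = 5_000_000      # assumed total indirect costs of the center
total_center_sales = 400_000_000       # assumed sales across all center activities
estimated_gwac_sales = 100_000_000     # sales volume assumed when the allocation was set
actual_gwac_sales = 52_000_000         # volume actually realized that year

charged = allocate_indirect(center_indirect_costs, estimated_gwac_sales, total_center_sales)
accurate = allocate_indirect(center_indirect_costs, actual_gwac_sales, total_center_sales)

print(f"Indirect costs charged using estimated sales: ${charged:,.0f}")
print(f"Indirect costs implied by actual sales:       ${accurate:,.0f}")
print(f"Overstatement if never adjusted:              ${charged - accurate:,.0f}")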
Program officials restructured their fees for fiscal year 2002, in part due to prior year losses. Full costing is also a key principle of the franchise fund pilot programs. OMB’s guidance states that the operation should be self-sustaining and that fees should fully recover costs. Interior’s progress in identifying and recovering full costs has evolved over time. However, program officials have not fully allocated indirect costs at the department level. The legislation authorizing GWACs was silent with respect to how agencies should account for financial transactions under the contracts; for example, how to obligate funds for the contract and how to account for revenue. Thus, agencies administering GWACs were left to their own devices when determining whether these financial transactions would be accounted for through existing revolving funds or in stand-alone accounts. The GWACs at NIH, Transportation, and the Federal Technology Service operate under revolving funds, while NASA and Commerce operate their GWACs in stand-alone reimbursable accounts. OMB guidance on earnings stipulates that (1) GWAC fees should be adjusted so that total revenues do not exceed actual costs and (2) revenues generated in excess of the agency’s actual costs are to be transferred to the miscellaneous receipts account of the U.S. Treasury’s General Fund. However, the way agencies operate their GWACs under revolving funds conflicts with OMB’s guidance. Agency officials told us that they have accounted for GWAC revenue in the same manner that the law authorizes them to account for revenue from other programs in their revolving funds. Thus, they have used earnings generated by some products and services—including GWACs—to offset losses incurred by other products and services. Further, they are permitted to retain earnings in their revolving funds and use those earnings for authorized purposes of the fund, unless the law governing operation of the fund requires them to transfer amounts to the Treasury. Agency officials maintain that their fund legislation prevails over the OMB guidance where there is a conflict between the two. OMB officials told us that they plan to review this issue. The different approaches GWAC programs have taken when revenues exceeded costs are discussed below: From fiscal years 1999 to 2001, NIH reported revenues in excess of costs from its GWAC operations. For the 3 years combined, the GWACs’ $4 million of earnings offset $3.6 million in losses in other revolving fund acquisition programs. For fiscal year 2001, the most recent year for which actual costs are available, reported GWAC earnings of $268,219 offset other programs’ losses of $116,590. NIH lowered its fee for orders placed with its small business contractors for the two GWACs awarded in fiscal year 2001. The fee for orders with larger businesses did not change. Within its revolving fund, the Federal Technology Service’s IT Solutions program manages GWACs and provides other information technology services to federal agencies. The program’s earnings are used to provide resources for future investment based on revolving fund plans approved by OMB. Losses within segments of the program are offset against earnings in other programs or covered by using retained earnings from this fund. For example, $3.6 million in earnings generated by GWACs in fiscal year 2001 offset losses in some other business lines, in particular the information security program. 
NASA does not have a revolving fund and, therefore, its GWAC operates in a stand-alone account. NASA records show that for revenues received in fiscal years 1999, 2000, and 2001, NASA’s GWAC accounts had year-end balances of $688,247, $1,106,155, and $573,114, respectively. NASA’s practice has been to carry over balances remaining from one fiscal year to the next. However, NASA now intends to revise its current practice and to obligate funds in support of its GWAC in the fiscal year received, to the extent possible. NASA lowered its fees in fiscal years 1999 and 2000, and raised them for fiscal year 2001, when it awarded a new version of its GWAC. Other interagency contracting services we reviewed allow the providing agency to retain funds. For example, franchise fund legislation allows Interior’s franchise fund to retain an amount not to exceed 4 percent of the total annual income for the acquisition of capital equipment and other specified uses. The fund under which the GSA Schedules program operates is allowed to retain earnings for specific purposes, as discussed below. The fee charged by the Schedules program has consistently generated revenue well in excess of costs. From fiscal year 1999 to 2001, the revenue generated by fees exceeded program costs by 53.8 percent, or $151.3 million. Program customers are, in effect, being overcharged for the contract services they are buying. Nevertheless, program officials have not adjusted the fee. Because the program has been highly profitable since 1997, we analyzed the use of revenues in excess of costs over the past 5 years. From 1997 to 2001, the program reported $210.8 million in earnings. Figure 2 shows earnings and costs during this period. GSA records show that it used the $210.8 million in earnings as follows: $192 million was used to support other programs, primarily GSA’s fleet and stock programs. Support of the fleet program primarily involved financing the procurement of vehicles. Support of the stock program primarily involved offsetting substantial losses in fiscal years 2000 and 2001. The revolving fund legislation allows earnings to be used for these purposes. $4.4 million of fiscal year 1998 earnings was transferred to the miscellaneous receipts account of the General Fund of the Treasury. GSA has not yet made a decision on how to use $14.4 million of Schedules program earnings from fiscal year 2001. The Schedules program fee was established at 1 percent in 1995. According to GSA officials, the program was intended to break even, with the fee recovering program costs including contract administration and program support. GSA officials explained that the profitability of the Schedules program is much greater than expected due to the inclusion of the information technology schedule and its dramatic growth. For fiscal years 1997 through 2001, information technology revenues grew 287 percent, and the information technology schedule now accounts for about two-thirds of all Schedules program sales. In 1999, the GSA Inspector General recommended that the fee be adjusted to bring it in line with costs, noting that for two years the program had been generating nearly twice the revenue needed to cover program costs. While GSA generally concurred with the recommendation, it did not implement a change in the fee at that time due to concerns about the administrative cost and the time such an action would entail. GSA told the Inspector General that it was not practical to take action until it was confident that the fee would be stable for an extended period of time.
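As a rough back-of-the-envelope illustration, the figures reported above (fee revenue exceeding costs by 53.8 percent, or $151.3 million, under a 1 percent fee) imply approximate totals for program costs, revenue, sales volume, and a break-even fee rate. The sketch below simply works through that arithmetic; the derived values are approximations implied by the two reported figures, not audited amounts.

# Back-of-the-envelope arithmetic from the figures cited above (fiscal years 1999-2001).
excess_ratio = 0.538          # revenue exceeded costs by 53.8 percent
excess_dollars = 151_300_000  # the same excess expressed in dollars
fee_rate = 0.01               # the 1 percent Industrial Funding Fee

costs = excess_dollars / excess_ratio   # roughly $281 million
revenue = costs + excess_dollars        # roughly $433 million
sales = revenue / fee_rate              # roughly $43 billion in Schedules orders
break_even_fee = costs / sales          # fee rate that would have just covered costs

print(f"Implied program costs:   ${costs / 1e6:,.0f} million")
print(f"Implied fee revenue:     ${revenue / 1e6:,.0f} million")
print(f"Implied Schedules sales: ${sales / 1e9:,.1f} billion")
print(f"Implied break-even fee:  {break_even_fee:.2%}")

Under these rough assumptions, a fee of about two-thirds of 1 percent would have been sufficient to recover the program's reported costs over the period.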
Despite an additional 3 years of similar earnings, GSA has taken no action to bring its fee in line with costs. GSA maintains that it still has not experienced marketplace stability sufficient to accurately forecast the Schedules business volume. Further, GSA officials stated that adjusting the fee would be burdensome for the thousands of Schedules contractors. They said that one key obstacle is that the 1 percent fee is embedded in the unit cost of the goods and services on the Schedules. Our review showed that some other interagency contract programs, such as NIH’s and NASA’s GWACs, have established their fees as add-ons to the price of goods and services. This approach gives them the flexibility to change their fees without affecting the unit price of their goods and services and provides transparency to customers on the fee being paid. OMB has expressed concern about the large earnings the Schedules program has generated. With a 3-year restructuring of its business lines nearing completion, and recognizing the need for flexibility in setting Schedules program fees, GSA is now considering options to design a flexible fee adjustment. GSA plans to work with OMB to identify alternatives to the current pricing structure in the development of the President’s fiscal year 2004 budget request. The increasing use of interagency contract programs makes it imperative that Congress and federal agencies receive reliable information on the fees charged and earnings generated by these programs. However, some agencies are not identifying, determining accurately, or recovering the full costs of their programs as directed by OMB. Thus, there is no assurance that the fees they are charging accurately reflect their costs. Further, because some agencies have not submitted to OMB complete annual financial results, OMB is not receiving clear information on how earnings have been used and whether fees were adjusted accordingly. OMB needs better information so that it can more easily identify management weaknesses when they arise and work with GWAC agencies to overcome them. The conflict between the way agencies are operating their revolving funds—using GWAC earnings to support other programs—and OMB’s guidance on the handling of earnings is a matter of concern. The agencies have not brought the problem to OMB’s attention. In its monitoring and oversight role over the GWAC program, OMB needs to determine how this conflict can be addressed. Despite consistently high earnings in the Schedules program, GSA has not adjusted the 1 percent contract service fee it charges customers. Program customers are, in effect, being consistently overcharged for the contract services they are buying, while GSA is using excess earnings to support other programs. We believe that the fee should be adjusted to reflect costs more closely. We recommend that the Director of OMB ensure that GWAC executive agents comply with OMB guidance on full cost accounting in establishing their fees; direct GWAC executive agents to provide OMB with (1) annual financial reports containing costs and revenues that summarize annual program results and the need for any fee adjustments and (2) a discussion of how earnings have been used; and work with GWAC executive agents to address the handling of GWAC earnings, including appropriate disposition of funds and adjustment of fees. Also, we recommend that the Administrator of GSA adjust the Federal Supply Schedules program fee to reflect costs more closely.
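The distinction drawn above between a fee embedded in unit prices and a fee charged as an add-on line item can be illustrated with a short, hypothetical sketch; the prices, quantities, and rates are invented and do not represent any agency's actual contracts.

# Hypothetical comparison of the two fee mechanisms discussed above.
def embedded_fee_order(unit_price_with_fee, quantity, fee_rate):
    """Fee is built into the unit price; changing the rate means repricing the contract."""
    total = unit_price_with_fee * quantity
    fee_portion = total * fee_rate / (1 + fee_rate)  # back out the fee for reporting only
    return total, fee_portion

def add_on_fee_order(unit_price, quantity, fee_rate):
    """Fee is a separate line item, so the rate can change without touching unit prices."""
    goods = unit_price * quantity
    fee_line_item = goods * fee_rate
    return goods + fee_line_item, fee_line_item

total, fee = embedded_fee_order(unit_price_with_fee=101.00, quantity=1_000, fee_rate=0.01)
print(f"Embedded fee: order total ${total:,.2f}, of which ${fee:,.2f} is fee (invisible to the customer)")

total, fee = add_on_fee_order(unit_price=100.00, quantity=1_000, fee_rate=0.0075)
print(f"Add-on fee:   order total ${total:,.2f}, with a ${fee:,.2f} fee shown as its own line item")

In the add-on case, adjusting the rate for the next fiscal year changes only the fee line item; in the embedded case, the same adjustment would require renegotiating unit prices with every contractor.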
We received written comments on a draft of this report from OMB, GSA, NASA, NIH, and the Department of the Interior. The Department of Transportation offered technical comments, which we incorporated as appropriate. OMB noted that its general framework on fee policies and accounting practices is well-founded, but that additional attention is needed to ensure that its guidance is being followed effectively. OMB stated that it intends to work with OMB’s Office of Federal Financial Management and the agencies to evaluate appropriate revisions to its reporting requirements on fees so that disparities between fees charged and costs incurred can more easily be identified and addressed. OMB also intends to work with GWAC executive agents and the GSA’s Federal Supply Service to address the handling of excess revenues generated by their programs, including appropriate disposition of funds and adjustment of fees. OMB also provided oral comments, and we made revisions to the text as appropriate. OMB’s letter appears in appendix II. GSA took exception to our statement that the Schedules program produced “exceptionally high earnings” from fiscal years 1999 through 2001. We believe that this characterization is warranted, based on the fact that revenues exceeded costs by more than 53 percent or $151 million during this period. GSA also commented that “the statement that profits from the Schedules program are being held at too high a level in order to offset losses in another program is incorrect.” We revised the text to indicate that earnings from the Schedules program were used to offset losses in the stock program and to finance vehicle purchases for the fleet program. GSA also stated that it does not seem very practical to compare the much smaller numbers of contracts at NASA and NIH with the number of Schedules contracts that would have to be renegotiated if the fee were adjusted. Our intent was to point out that because the fee add-on mechanism is used by other agencies, it may be one option GSA could consider in adjusting its fee. Finally, while agreeing that the current fee mechanism lacks the flexibility to match costs and revenues over time, GSA pointed out the complexity of such an undertaking and the desire to minimize the impact on customers, contracting partners, GSA, and the Schedules program itself. We acknowledge the complexity of implementing a flexible fee structure. However, given that the program has consistently reported earnings well in excess of costs for several years, we believe steps need to be taken now to begin the process of adjusting the fee. GSA also offered technical comments, which we have incorporated as appropriate. GSA’s letter appears in appendix III. The Department of the Interior stated that the information and recommendations in our report provide OMB helpful guidance for oversight of a growing interagency program. The Department noted that the reported operating results provided for fiscal year 2000 reflect a $488,000 processing error, which the franchise fund program is correcting. We have reflected this information in Table 2 and in appendix XII. An additional technical comment has also been incorporated. The Department of the Interior’s letter appears in appendix IV. NASA characterized as misleading the statement in our draft report that NASA had not prepared earnings statements for its GWAC program. 
In fact, while NASA provided semi-annual reports to OMB for fiscal year 2001, it had not prepared financial statements for the GWAC program, and the data available from the program were incomplete for financial statement purposes. In responding to our draft report, NASA prepared the financial results that accompany its comments. These annual results are substantially different from the semi-annual earnings results that NASA had reported to OMB for fiscal year 2001. On a combined basis, the semi-annual reports showed a loss of $235,817, whereas the annual financial results showed that the program had earnings of $646,645. We have incorporated the latest results into table 2 and appendix IX. NASA also provided additional details on its rationale for not assessing costs to NASA customers for use of the GWAC and asserted that NASA has not used assessments against other agencies to cover its share of the administrative costs. We have reflected these points in the report. NASA also stated that program personnel conducted a “deliberative analysis” of the costs involved. However, program personnel provided us with only a rough analysis, prepared in 1999, to support the cost assessment. NASA plans to periodically reassess the apportionment of NASA and non-NASA costs and NASA’s in-kind contribution versus the fees paid by external customers. NASA also elaborated on its rationale for carrying over balances remaining from one fiscal year to the next. It now plans to revise this practice and to obligate, to the extent possible, funds in the fiscal year they are received. Recognizing that NASA’s lack of authority for a working capital fund has caused concerns about the authority under which it manages its GWAC, NASA has proposed legislation to establish a fund for the agency in fiscal year 2003. Finally, NASA asserts that the Economy Act provides authority for NASA to receive funds and apply those funds over periods of time, including across fiscal years, in order to support its GWAC program. NASA’s letter, with attachments, appears in appendix V. NIH commented that our report will enable the agency to continue to improve its information technology services and strengthen oversight of these services to both NIH and other federal agencies. NIH noted that the GWAC program office will continue to strive to comply with and promote OMB’s reporting requirements for GWACs. NIH also offered technical comments that we incorporated as appropriate. NIH stated that revenues (and thus earnings) were not understated to OMB because revenues from internal customers were included in semi-annual reports to OMB. However, those revenues were not attributed to the GWAC program in NIH financial statements, which were prepared by NIH’s financial office. Further, on a combined basis, NIH’s semi-annual reports to OMB showed a loss of $814,629, whereas the annual financial results showed that the program had earnings of $268,219. NIH’s letter appears in appendix VI. As requested by your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to other interested congressional committees and to the Secretaries of Commerce, Health and Human Services, Interior, and Transportation; the Administrator, GSA; the Administrator, NASA; and the Administrator of OMB’s Office of Federal Procurement Policy. We will make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-4841. An additional contact and other key contributors are listed in appendix XIV. We focused our review on all five agencies granted executive agent status by the Office of Management and Budget (OMB) to provide governmentwide acquisition contracts (GWACs) for information technology. The five agencies with such authority are the General Services Administration (GSA), the National Institutes of Health (NIH), the Department of Transportation, the National Aeronautics and Space Administration (NASA), and the Department of Commerce. In addition, we collected similar information about the GSA Schedules program and the primary contract service program within the Department of the Interior’s franchise fund pilot program. Interagency purchases of information technology made from the GSA Schedules program exceed those made from all GWAC programs combined. Interior’s GovWorks acquisition program is the largest component of the Department of the Interior’s franchise fund. To examine the fees being charged, we identified reported revenues and costs. We also reviewed the fee structure and how it changed during fiscal years 1999 through 2001. We reviewed agency financial statements and annual reports for fiscal years 1999 through 2001, as well as the supporting revenue and cost data for each program, the OMB executive agent designation and financial management guidance, the contract activity reports submitted to OMB, the Statement of Federal Financial Accounting Standards Number 4: Managerial Cost Accounting Concepts and Standards for the Federal Government developed by the Federal Accounting Standards Advisory Board, and relevant legislation. We did not independently verify the accuracy of the operating results reported for each program. We interviewed and obtained information from officials in the contract program and financial offices at the Departments of Commerce, Transportation, and Interior; NIH; NASA; and GSA. We also held discussions with officials in OMB’s Office of Federal Procurement Policy and Office of Federal Financial Management. To determine provider agencies’ ability to retain earnings, we reviewed relevant legislation for each program. We interviewed contract program managers and financial officials at the Departments of Commerce, Transportation, and Interior; NIH; NASA; and GSA. We also held discussions with officials in OMB’s Office of Federal Procurement Policy and the offices of the inspector general at the Departments of Transportation and Interior and at GSA. To assess the agencies’ compliance with OMB’s guidance regarding the use of earnings, we reviewed financial reports and held discussions with program officials regarding funds transferred to the miscellaneous receipts account of the General Fund of the U.S. Treasury. We conducted our review from May 2001 to June 2002 in accordance with generally accepted government auditing standards. The Commerce Information Technology Solutions (COMMITS) program provides the Commerce Department and other federal agencies with a means of awarding performance-based information technology services from 56 small business contractors. The principal goal of COMMITS is to provide an alternative governmentwide acquisition contract (GWAC) that allows agencies to contract with small and minority-owned businesses for information technology requirements. 
The COMMITS program is designed to accomplish three objectives: (1) deliver information technology services and solutions to meet government organizations’ missions, (2) deliver information technology services and solutions using a streamlined, performance-based acquisition methodology, and (3) provide a pool of small business contractors capable of delivering the government’s information technology requirements. To date, the Department of Commerce’s National Oceanic and Atmospheric Administration, the Environmental Protection Agency, and the Department of Defense’s Army Research Laboratory have spent the most money under COMMITS. COMMITS is a 5-year multiple-award indefinite delivery, indefinite quantity contract, which permits issuance of task orders with options that may extend performance for an additional 5 years beyond the original performance period. The ceiling amount is $1.5 billion for services in Information Systems Engineering, Information Systems Security, and Systems Operations and Maintenance. The COMMITS contract allows for the following types of contracts: firm-fixed price, fixed-price with incentive, cost plus fixed fee, cost plus award fee, cost plus incentive fee, labor hours, and time and materials. Table 3 shows reported annual operating results for COMMITS. COMMITS program officials told us that fees are reviewed annually to ensure that total revenues do not exceed actual costs. The COMMITS program office collects fees directly from the customers through an interagency agreement. The fees shown in tables 4 and 5 are applied to the value of task orders placed by program customers. The Commerce Department’s Annual Performance Plan (1999) addresses mission objectives including increasing opportunities for small, small minority, and women-owned small businesses. A major initiative in Commerce’s contracting program was to establish a multiple award governmentwide indefinite delivery, indefinite quantity contract among highly qualified small disadvantaged, small disadvantaged 8(a), and women-owned small businesses. On June 21, 1999, OMB designated the Department of Commerce an executive agent for the acquisition of information technology for the COMMITS program. The Federal Technology Service’s (FTS) IT Solutions business line offers a full range of information technology products and services in support of customers’ missions worldwide. Pre-award services include technical assistance such as requirements analysis and proposal development and acquisition services that include developing an acquisition strategy, conducting the acquisition, signing contracts, and providing legal support, if needed. Post-award services include project management such as managing milestones, schedules, and costs; performing problem resolution and overseeing progress reviews; and financial management services that include managing project funding and accepting and paying vendor invoices. FTS has nine GWACs, and it uses four solution development centers (SDC) to operate them. In addition, FTS’s Federal Systems and Integration Management Center (FEDSIM) provides technical and acquisition expertise to agencies including access to GWACs and other types of contracts. The Federal Computer Acquisition Center (FEDCAC) operates the first six GWACs listed in table 6 below. Its core business line is the repackaging of proven industry solutions that are delivered via contracts to meet the emerging technology needs of a specific client agency or for governmentwide use. 
FEDCAC generated over $200 million in orders in fiscal 2001. The ANSWER SDC, which operates the ANSWER GWAC, contracted for $195.7 million in business in the last fiscal year. The Small Business SDC specializes in contracts with small businesses. The center, which has contracts with over 150 small business contractors, generated $200.4 million in fiscal year 2001. The Information Technology Acquisition Center (ITAC) manages the Millennia Lite GWAC, which covers four functional areas: 1) information technology planning, studies, and assessment, 2) high-end information technology services, 3) mission support services, and 4) legacy systems migration and new enterprise systems development. Millennia Lite generated $126.3 million in fiscal year 2001. FEDSIM’s program officials provide technical and acquisition expertise. Center personnel can use a variety of contracts, including those offered by other agencies, GSA’s Schedules contracts, and the GWACs operated by FTS. Table 6 contains a brief description of each GWAC. Table 7 shows reported annual operating results for the FTS GWACs. FTS’s SDCs charge customers two forms of fees: contract access fees and consulting fees. With some exceptions, an access fee of 1 percent covers the cost of administering the contracts. The disaster recovery contract is one of the exceptions, with a fee of ½ percent. The access fee is included in the contractors’ prices, and they remit the fee revenue to FTS. The access fee has remained steady at 1 percent. Consulting fees are paid directly to FTS. The centers and FEDSIM charge an hourly rate for technical expertise. For example, FEDCAC and FEDSIM rates ranged from $74 to $125 per hour in fiscal year 1999, from $75 to $125 per hour in fiscal year 2000, and from $85 to $141 per hour for fiscal year 2001. Customers and FTS enter into a memorandum of understanding or an interagency agreement with FTS that outlines the level of support required, the estimated cost to provide the support, and other reporting and contractual elements. Fees are developed to recover full costs and are effective for the entire fiscal year. Rate changes during the year are rare. According to program officials, the fees are reviewed annually. On August 2, 1996, GSA became the first agency to receive an executive agent designation by OMB under the Clinger-Cohen Act. Both FEDSIM and FEDCAC were specifically identified in this designation. FEDCAC evolved from the Air Force Computer Acquisition Center, which had been in existence for over 20 years. FEDCAC was incorporated into the GSA in August of 1991. FEDCAC was chartered to provide acquisition assistance on a fee-for-service basis to agencies whose technical requirements exceeded $100 million. ITAC is the newest SDC. It became fully operational in fiscal year 2001, along with the Millennia Lite GWAC. NASA’s governmentwide acquisition contract (GWAC) is the Scientific and Engineering Workstation Procurement (SEWP) contract. The current GWAC, SEWP III, supports NASA’s objective of meeting its own requirements for high-performance information technology, as well as similar needs in other agencies. NASA provides technical expertise in developing SEWP contracts in areas such as electronic data interchange, web and imaging technology, order processing, and technology refreshment.
NASA’s role as the agent between the federal agencies and the prime contractors is accomplished by three major ordering processes: 1) pre-order decision-making, which allows users to check prices on-line for all of SEWP’s contracts and to track quotes requested from vendors; 2) delivery order processing, which includes receiving delivery orders, checking for accurate information, and entering order information into SEWP’s database; and 3) post-order quality assurance, which includes a quality assurance check with agency customers on product delivery, product functionality, and overall customer satisfaction. The program currently includes 12 prime contracts serviced by 8 prime contractors. The largest SEWP customers are the Air Force, the Army, and the Navy. SEWP III is a fixed-price, indefinite delivery, indefinite quantity contract with a maximum value of $4 billion. The initial set of SEWP III contracts was awarded on July 30, 2001. The term of the contracts is 5 years. The contract specializes in providing advanced technology UNIX, Linux, and Windows-based workstations and servers, along with peripherals, network equipment, storage devices, and other information technology products. Table 8 shows reported annual operating results for SEWP. The SEWP III fees shown in table 9 below are applied to the value of purchases made by program customers. The fee is included as a separate contract line item on contract orders. This fee is collected by the contractors and forwarded to the government quarterly. Fees are reviewed each year and adjusted based on a comparison of revenues and costs. Fees are not charged to NASA customers because of an agency policy against charging fees for internal use of NASA-based contracts. However, NASA noted that it is making an in-kind contribution by not charging some costs to the program, such as providing the contracting personnel to set up and administer the SEWP contracts, the SEWP program manager, and office space. NASA does not charge the Environmental Protection Agency a fee because a representative from that agency serves on the SEWP executive committee. NASA’s efforts to consolidate its procurement of high-end information technology products date back to the early 1990s. NASA’s first SEWP contract was awarded in 1991 as a NASA-only procurement. Within a year, it became a governmentwide contract at the request of the General Services Administration (GSA). The most recent GSA delegation of authority for the SEWP contract, effective through November 14, 2000, was issued in 1995, prior to the passage of the Clinger-Cohen Act of 1996. On September 29, 2000, OMB designated NASA as an executive agent for governmentwide acquisition of information technology. The National Institutes of Health National Information Technology Acquisition and Assessment Center (NITAAC) is the organizational focal point for the three governmentwide information technology contracts NIH offers. NITAAC is part of the Office of Administration, which is located in the Office of the Director, NIH. NITAAC’s goals include providing NIH and other agencies with quality information technology products and services that focus on emerging technologies and solutions. In addition, NITAAC seeks to simplify the information technology procurement process for internal and external clients, as well as for contractors, by encouraging the use of its on-line ordering system to improve communication between clients and contractors and to reduce the paperwork burden.
NITAAC provides a variety of client services. For example, NITAAC reviews each task order request to determine if it is within the scope of the contract and to ensure that the statement of work and potential contractors are well suited to one another. Quality assurance at the contract level is performed by reviewing contractors’ monthly status reports and by analyzing customer orders and feedback on program and policy changes. NITAAC offers mediation services to customers and contractors for GWAC orders when problems occur during contract administration. NITAAC’s GWACs are serviced by over 100 prime contractors. The largest customers for fiscal year 2001 were the Army, Treasury, and NIH. NITAAC’s three GWACs are described in table 10. Table 11 shows NIH’s reported annual operating results for its GWACs. Table 12 below lists the fees paid by NITAAC’s customers external to NIH. The fees are applied to the value of orders placed by program customers and are included as a line item on those orders. The contractors receive the fees and forward them to NIH. While the 1 percent fee was retained for the two 10-year GWACs awarded in fiscal year 2001, NITAAC introduced a sliding scale of lower fees for small business orders. NITAAC reduced its fee in this manner to further promote the use of its small business contractors. Internal customers are charged a flat fee per order submitted. NITAAC reviews its fees annually. NITAAC recently received authority to accept funds from other agencies through inter-departmental agreements. For these customers, NITAAC not only awards customer orders but administers them as well. NITAAC charges an additional fee of 1.5 percent to handle these agreements. NIH has been managing all three information technology contracts since 1996, when the original IW and CIO-SP contracts were awarded under the authority of its Service and Supply Fund (42 U.S.C. 231). The original ECS contract was awarded on September 29, 1995. The Department of Transportation’s governmentwide acquisition contract (GWAC), ITOP, operates under the Transportation Administrative Service Center (TASC). ITOP has awarded contracts to 35 prime vendors—comprising a mixture of small disadvantaged, small, and large businesses—who offer a broad range of support resources related to information technology. Initiated to streamline government procurements of information technology, ITOP is supported by a group of multiple pre-awarded contracts. The three top customers are the Department of Defense’s Department of the Army and Joint Strike Fighter Program Office, and the Federal Bureau of Investigation. On May 20, 2002, the Deputy Secretary of Transportation informed the Director of the Office of Management and Budget that Transportation would not be seeking redesignation as a GWAC executive agent beyond June 3, 2002. The Secretary stated that two issues must be resolved before the Department can determine if a long-term extension of GWAC authority is warranted. First, while early numbers for the first half of fiscal year 2002 show that ITOP has been recovering its costs, more data are needed to ensure continued self-sufficiency. Second, the Department is in the process of determining the extent to which ITOP can address the information technology needs of the new Transportation Security Administration. The Secretary stated that meeting the Transportation Department’s in-house information technology requirements must now be its priority.
ITOP offers a 7-year indefinite delivery, indefinite quantity task order contract providing information systems engineering, systems operations and management, and information systems security to satisfy customer requirements. The contract provides for the following types of orders: firm fixed price, cost plus fixed fee, cost plus award fee, and time and materials. The current contract, referred to as ITOP II, provides for a maximum of $10 billion for information technology solutions. ITOP II has an individual task order delivery ceiling of $300 million. The first contract, ITOP, provided for a total of $1.13 billion, with an individual task order ceiling of $50 million. Table 13 shows reported annual operating results for ITOP. ITOP’s program office reassesses its fees periodically to ensure continued competition with other agencies and to ensure that the program recovers costs. The customer pays the fee directly to the ITOP program office using an interagency agreement or other funding instrument. The fees shown in tables 14 and 15 below are applied to the value of task orders placed by program customers. ITOP adjusted its fee structure in 2001 to better reflect the level of effort and costs of providing services and to address prior-year losses. In fiscal year 2002, TASC reduced the indirect cost rate it charges ITOP by 40 percent. The TASC indirect cost rate reduction (fixed-fee overhead) has already saved ITOP about $600,000 through June 2002. A Transportation official noted that ITOP’s total revenues have exceeded costs for the first 9 months of fiscal year 2002. The ITOP program office received both the Department of Transportation’s approval and the General Services Administration’s delegation of procurement authority for its multiple pre-awarded indefinite delivery, indefinite quantity contract in August 1995. ITOP received its first OMB executive agent delegation in January 1999. As discussed previously, ITOP’s executive agent delegation expired on June 3, 2002, and the Department of Transportation decided not to seek redesignation at that time. Interior’s Minerals Management Service manages the GovWorks program, which is the largest component of the Interior franchise fund. This fund is located in Interior’s Office of the Secretary. The GovWorks program offers a wide range of acquisition services, such as buying high-dollar products and services and awarding grants and cooperative agreements. Program services include project planning, soliciting and evaluating offers, administering contracts and agreements through closeout, and paying all bills. Clients also receive assistance with project management activities, such as preparing statements of work and tracking expenditures. GovWorks procurements are not limited to any specialized area. The program offers acquisition services in a wide range of areas, such as information technology, environmental studies, training systems development, secure communications, engineering and technical studies, joint military program support, and healthcare support services. In fiscal year 2001, GovWorks had contracts with about 300 contractors. GovWorks’ largest customers are the Department of Defense, the Department of Health and Human Services, and the Department of State. The acquisition services that GovWorks provides to external customers are processed through Interior’s franchise fund. Similar service projects for internal customers are accounted for by the Minerals Management Service separately from the franchise fund.
Because GovWorks is a general-purpose acquisition service, it can access other agencies’ governmentwide acquisition contracts and GSA’s schedules contracts, in addition to preparing its own contracts. GovWorks has awarded indefinite delivery, indefinite quantity contracts, and multiple-award contracts covering areas such as training and education systems, construction management, and telecommunications infrastructure support. Table 16 shows reported annual operating results for Interior’s franchise fund program. GovWorks establishes its fee for the franchise fund at the beginning of the project based on an assessment of the amount of assistance needed for the planned procurement. The fee is set as a percentage of the dollar value of the project. The base fee is 3 percent, but it can range from 2 to 4 percent. The fee is paid by the customer agency directly to the Interior franchise fund. The GovWorks program employs 34 full-time-equivalent personnel, all of whom are Interior employees. In May 1996, OMB designated the Department of the Interior as one of six executive branch agencies authorized to establish a franchise fund pilot program. Franchise funds were authorized by the Government Management Reform Act of 1994. The GovWorks program began operation in 1997 as part of Interior’s franchise fund. The General Services Administration’s (GSA) Federal Supply Service (FSS) organization offers a supply and procurement business under the Federal Supply Schedules Program (Schedules program), which provides federal customers with services from more than 7,400 program vendors, as well as a wide range of commercial products. The services provided by the Schedules program include accounting, graphic design, financial, information technology, environmental, and landscaping, along with a vast array of brand-name products from office supplies to systems furniture and computers. The services and products are provided at volume discount pricing on a direct-delivery basis. Negotiated prices for varying requirements and all vendor-awarded contracts are included in a catalogue of 48 schedules. The value of information technology orders is larger than that of the orders in all other schedules combined. The intent of the Schedules program is to offer customers shorter lead times, lower administrative costs, and reduced inventories; provide significant opportunities for agencies to meet their small business goals; and promote compliance with socioeconomic laws and regulations. GSA reports that the external agencies with the largest Schedules program orders are the Department of Defense, the Department of Veterans Affairs, and the Department of Justice. Under the Schedules program, GSA awards contracts to multiple companies that supply comparable products and services. These contracts can be used by any federal agency to purchase commercial products and services. The current standard Schedules contract is for a 5-year period with three 5-year options. Table 17 shows reported annual operating results for the Schedules program. GSA’s fee, known as the Industrial Funding Fee, is intended to fully recover the cost of operations. In fiscal year 1995, the Schedules program started to become self-supporting. The Schedules program established a 1 percent fee, which is remitted by the vendor to GSA. The fees shown in table 18 are applied to Schedules purchases by program customers.
In 1993, the House Committee on Appropriations recommended that GSA review the benefits of providing supplies and equipment on a full cost-reimbursable basis. Also in 1993, a Conference Committee for the 1994 Treasury, Postal Service and General Government Appropriations Act stated that federal agencies should be allowed a choice of purchasing from the Schedules program or from the commercial sector. Further, in a 1994 report, the Senate Appropriations Committee stated that the Schedules program was suitable for reimbursable funding under the general supply fund. In 1995, GSA’s Federal Supply Service began the process to convert the Schedules program to operation on a cost-reimbursable basis. In addition to the individual named above, Penny A. Berrier, Paul M. Greeley, and John Van Schaik made key contributions to this report. Richard T. Cambosos, Mark P. Connelly, and Denise M. Fantone served as advisors.
Federal interagency contract service programs are being used in a wide variety of situations, from those in which a single agency provides limited contracting assistance to an approach in which the provider agency's contracting officer handles all aspects of the procurement. This increased use of interagency contracts is a result of reforms and legislation passed in the 1990s, allowing agencies to streamline the acquisition process, operate more like businesses, and offer increasing numbers of services to other agencies. Most of the contract service programs GAO reviewed reported an excess of revenues over costs in at least one year between fiscal years 1999 and 2001. Office of Management and Budget (OMB) guidance directs agencies with governmentwide acquisition contracts (GWAC) or franchise fund programs to account for and recover fully allocated actual costs and to report on their financial results. Agencies are to identify all direct and indirect costs and charge fees to ordering agencies based on these costs. However, some GWAC programs have not identified or accurately reported the full cost of providing interagency contract services. OMB's guidance further directs that agencies return GWAC earnings to the miscellaneous receipts account of the U.S. Treasury's General Fund. However, this guidance conflicts with the operations of agencies' revolving funds, which were established by statutes that allow retention of excess revenues. The Federal Supply Schedules program has generated substantial earnings, largely because of the rapid growth of information technology sales. Rather than adjust the fee, however, the General Services Administration has used the earnings primarily to support its stock and fleet programs. As a result of this significant level of earnings, Federal Supply Schedules program customers are being consistently overcharged for the contract services they are buying.
In an effort to increase homeland security following the September 11, 2001, terrorist attacks on the United States, President Bush issued the National Strategy for Homeland Security in July 2002 and signed legislation creating DHS in November 2002. The strategy set forth the overall objectives, mission areas, and initiatives to prevent terrorist attacks within the United States, reduce America’s vulnerability to terrorism, and minimize the damage and assist in the recovery from attacks that may occur. The strategy also called for the creation of DHS. The department, which began operations in March 2003, represented a fusion of 22 federal agencies to coordinate and centralize the leadership of many homeland security activities under a single department. Although the National Strategy for Homeland Security indicated that many federal departments (and other nonfederal stakeholders) will be involved in homeland security activities, DHS has the dominant role in implementing the strategy. The strategy identified six mission areas and 43 initiatives. DHS was designated the lead federal agency for 37 of the 43 initiatives. In addition, DHS had activities underway in 40 of the 43 initiatives. DHS also has the dominant share of homeland security funding. Figure 1 shows the proposed fiscal year 2006 homeland security funding for federal departments and agencies, with DHS constituting about 55 percent of the total. The November 2002 enactment of legislation creating DHS represented a historic moment of almost unprecedented action by the federal government to fundamentally transform how the nation protects itself from terrorism. Rarely in the country’s past had such a large and complex reorganization of government occurred or been developed with such a singular and urgent purpose. This represented a unique opportunity to transform a disparate group of agencies with multiple missions, values, and cultures into a strong and effective cabinet department whose goals are to, among other things, protect U.S. borders, improve intelligence and information sharing, and prevent and respond to potential terrorist attacks. Together with this unique opportunity, however, came a significant risk to the nation if the department’s implementation and transformation were not successful. GAO designated DHS’s transformation as high-risk in January 2003 based on three factors. First, DHS faced enormous challenges in implementing an effective transformation process, developing partnerships, and building management capacity because it had to effectively combine 22 agencies with an estimated 170,000 employees specializing in various disciplines—including law enforcement, border security, biological research, computer security, and disaster mitigation—into one department. Second, DHS faced a broad array of operational and management challenges that it inherited from its component legacy agencies. In fact, many of the major components that were merged into the new department, including the Immigration and Naturalization Service, the Transportation Security Administration, Customs Service, Federal Emergency Management Agency, and the Coast Guard, brought with them at least one major problem such as strategic human capital risks, information technology management challenges, or financial management vulnerabilities, as well as an array of program operations challenges and risks.
Finally, DHS’s national security mission was of such importance that the failure to effectively address its management challenges and program risks could have serious consequences for our intergovernmental system, our citizens’ health and safety, and our economy. Overall, our designation of DHS’s transformation as a high-risk area and its inclusion on the 2003 High-Risk List reflected our conclusion that failure to transform the diverse units into a single, efficient, and effective organization would have dire consequences for our nation. Since our 2003 designation of DHS’s transformation as high-risk, DHS leadership has provided a foundation for maintaining critical operations while undergoing transformation. DHS has worked to protect the homeland and secure transportation and borders, funded emergency preparedness improvements and emerging technologies, assisted law enforcement activities against suspected terrorists, and issued its first strategic plan. According to DHS’s performance and accountability report for fiscal year 2004 and updated information provided by DHS officials, the department has accomplished the following activities as part of its integration efforts: reduced the number of financial management service centers from 19 to 8, consolidated acquisition support for 22 legacy agencies within 8 major procurement programs, consolidated 22 different human resources offices to 7, and consolidated bank card programs from 27 to 3. As described in the next section, despite real and hard-earned progress, DHS still has significant challenges to overcome in all of its management areas. It is because of these continuing challenges that we continue to designate the implementation and transformation of DHS as high-risk. DHS faces a number of management challenges to improving its ability to carry out its homeland security missions. Among these challenges, which are discussed in more detail in the following sections, are providing focus for management efforts, monitoring transformation and integration, improving strategic planning, managing human capital, strengthening financial management infrastructure, establishing an information technology management framework, managing acquisitions, and coordinating research and development. One challenge that DHS faces is to provide focus on management efforts. The experience of successful transformations and change management initiatives in large public and private organizations suggests that it can take 5 to 7 years until such initiatives are fully implemented and cultures are transformed in a substantial manner. Because this timeframe can easily outlast the tenures of managers, high-performing organizations recognize that they need to have mechanisms to reinforce accountability for organization goals during times of leadership transition. Focus on management efforts needs to be provided at two levels of leadership. The first level is that of the political appointees in top leadership positions. These leaders are responsible for both mission and management support functions. Although DHS has been operating about 2 years, it has had two Secretaries, three Deputy Secretaries, and additional turnover at the Undersecretary and Assistant Secretary levels. The problem of turnover in top leadership is not unique to DHS. The average tenure of political leadership in federal agencies—slightly less than 3 years for the period 1990-2001—and the long-term nature of change management initiatives can have critical implications for the success of those initiatives.
The frequent turnover of the political leadership has often made it difficult to obtain the sustained and inspired attention required to make needed changes. Similarly, the recent turnover in DHS's top leadership raises questions about the department's ability to provide the consistent and sustained senior leadership necessary to achieve integration over the long term. The second level is that of the leaders responsible for day-to-day management functions. As we have reported, a Chief Operating Officer (COO)/Chief Management Officer (CMO) may effectively provide the continuing, focused attention essential to successfully completing these multiyear transformations in agencies like DHS. At DHS, we have reported that the COO/CMO concept would provide the department with a single organizational focus for the key management functions involved in the business transformation of the department, such as human capital, financial management, information technology, acquisition management, and performance management, as well as for other organizational transformation initiatives. We have also recently testified that a COO/CMO can effectively provide the continuing, focused attention essential to successfully completing the implementation of DHS's new human capital system, a large-scale, multiyear change initiative. The specific implementation of a COO/CMO position must be determined within the context of the particular facts, circumstances, challenges, and opportunities of each individual agency. As the agency is currently structured, the roles and responsibilities of the Under Secretary for Management contain some of the characteristics of a COO/CMO for the department. According to Section 701 of the Homeland Security Act, the Under Secretary for Management is responsible for the management and administration of the Department in such functional areas as budget, accounting, finance, procurement, human resources and personnel, information technology, and communications systems. In addition, the Under Secretary is responsible for the transition and reorganization process and for ensuring an efficient and orderly transfer of functions and personnel to the Department, including the development of a transition plan. While the protection of the homeland is the primary mission of the department, critical to meeting this challenge is the integration of DHS's varied management processes, systems, and people—in areas such as information technology, financial management, procurement, and human capital—as well as in its administrative services. The integration of these various functions is being executed through DHS's management integration initiative. The success of this initiative is important because it provides critical support for the total integration of the department, including its operations and programs, to ultimately meet its mission of protecting the homeland. Last week, we released a report comparing DHS's management integration efforts to date against selected key practices consistently found to be at the center of successful mergers and transformations. Overall, we found that while DHS has made some progress in its management integration efforts, it has the opportunity to better leverage this progress by implementing a comprehensive and sustained approach to its overall integration efforts.
First, key practices show that establishing implementation goals and a timeline is critical to ensuring success and that these could be contained in an overall integration plan for a merger or transformation. DHS has issued guidance and plans to assist its integration efforts on a function-by-function basis (information technology and human capital, for example), but it does not have a comprehensive strategy to guide management integration departmentwide. Specifically, DHS still does not have a plan that clearly identifies the critical links that must occur across these functions, the necessary timing to make these links occur, how these critical interrelationships will occur, and who will drive and manage them. Second, it is important to dedicate a strong and stable implementation team for the day-to-day management of the transformation, a team vested with the necessary authority and resources to help set priorities, make timely decisions, and move quickly to implement decisions. In addition, this team would ensure that various change initiatives are sequenced and implemented in a coherent and integrated way. DHS is establishing a Business Transformation Office, reporting to the Under Secretary for Management, to help monitor and look for interdependencies among the individual functional integration efforts. However, this office is not currently responsible for leading and managing the coordination and integration that must occur across functions not only to make these individual initiatives work but also to achieve and sustain the overall management integration of DHS. To address this challenge, we recommended, and DHS agreed, that it should develop an overarching management integration strategy and provide its recently established Business Transformation Office with the authority and responsibility to serve as a dedicated integration team and to help develop and implement the strategy. Effective strategic planning is another challenge for DHS. We have previously identified strategic planning as one of the critical success factors for new organizations. This is particularly true for DHS, given the breadth of its responsibility and the need to clearly identify how stakeholders' responsibilities and activities align to address homeland security efforts. Without thoughtful and transparent planning that involves key stakeholders, DHS may not be able to implement its programs effectively. In 2004, DHS issued its first departmentwide strategic plan. We have evaluated DHS's strategic planning process, including the development of its first departmentwide strategic plan, and plan to release a report on our findings within a few weeks. This report will discuss (1) the extent to which DHS's planning process and associated documents address the required elements of the Government Performance and Results Act of 1993 (GPRA) and reflect good strategic planning practices and (2) the extent to which DHS's planning documents reflect both its homeland security and nonhomeland security mission responsibilities. Another management challenge faced by DHS is how to manage its human capital. Our work in identifying key practices for implementing successful mergers and transformations indicates that attention to strategic human capital management issues should be at the center of such efforts. DHS has been given significant authority to design a new human capital system free from many of the government's existing civil service requirements, and it has issued final regulations for this new system.
We have issued a series of reports on DHS's efforts to design its human capital system. First, we found that the department's efforts to design a new human capital system were collaborative, facilitated the participation of employees from all levels of the department, and generally reflected important elements of effective transformations. We recommended that the department maximize opportunities for employees' involvement throughout the design process and that it place special emphasis on seeking the feedback and buy-in of front-line employees in the field. Second, we found that DHS's human capital management system, as described in the recently released final regulations, includes many principles that are consistent with proven approaches to strategic human capital management. For example, many elements of a modern compensation system—such as occupational clusters, pay bands, and pay ranges that take into account factors such as labor market conditions—are to be incorporated into DHS's new system. However, these final regulations are intended to provide an outline, not a detailed, comprehensive presentation of how the new system will be implemented. Thus, DHS has considerable work ahead to define the details of the implementation of its system, and understanding these details is important to assessing the overall system. DHS faces significant financial management challenges. Specifically, it must address numerous internal control weaknesses, meet the mandates of the DHS Financial Accountability Act, and integrate and modernize its financial management systems, which individually have problems and collectively are not compatible with one another. Overcoming each of these challenges will assist DHS in strengthening its financial management environment, improving the quality of financial information available to manage the department day to day, and obtaining an unqualified opinion on its financial statements. DHS's independent auditors were unable to issue an opinion on any of the department's financial statements for fiscal year 2004. This was a substantial setback in DHS's financial management progress, compounded by continued challenges in resolving its internal control weaknesses. The number of material internal control weaknesses at the department increased from 7 as of September 30, 2003, to 10 as of September 30, 2004. With the passage of the Department of Homeland Security Financial Accountability Act (the Accountability Act), DHS is now subject to the Chief Financial Officers Act of 1990 (the CFO Act) and the Federal Financial Management Improvement Act of 1996 (FFMIA). The Accountability Act also requires the Secretary of Homeland Security to include an assertion on internal controls over financial reporting at the department in fiscal year 2005 and requires an audit of internal controls over financial reporting in fiscal year 2006. We will continue to monitor the steps DHS is taking to meet the requirements of the Accountability Act as part of our audit of the consolidated financial statements of the United States government. We reported in July 2004 that DHS continues to work to reduce the number of financial management service providers and to acquire and deploy an integrated financial enterprise solution. At that time, DHS reported that it had reduced the number of financial management service providers for the department from the 19 providers at the time DHS was formed to 10. DHS planned to consolidate to 7 providers.
Additionally, DHS hired a contractor to deploy an integrated financial enterprise solution. This is a costly and time-consuming project, and we have found that similar projects have proven challenging for other federal agencies. We will therefore continue to monitor DHS's progress on overcoming this serious challenge. DHS has recognized the need for a strategic management framework that addresses key information technology disciplines and has made a significant effort to make improvements in each of these disciplines. For example, DHS is implementing its information technology (IT) investment management structure, developing an enterprise architecture, and beginning IT strategic human capital planning. However, much remains to be accomplished before it will have fully established a departmentwide IT management framework. To fully develop and institutionalize the management framework, DHS will need to strengthen strategic planning, develop the enterprise architecture, improve management of systems development and acquisition, and strengthen security. To assist DHS, we have made numerous recommendations, including (1) limiting information technology investments until the department's strategic management framework is completed and available to effectively guide and constrain the billions of dollars that DHS is spending on such investments; (2) taking appropriate steps to correct any limitations in the Chief Information Officer's ability to effectively support departmentwide missions; and (3) ensuring the department develops and implements a well-defined enterprise architecture to guide and constrain business transformation and supporting system modernization. The development of this framework is essential to ensuring the proper acquisition and management of key DHS programs such as U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT), the Automated Commercial Environment, and Secure Flight. To this end, we have recently reported on key management challenges and weaknesses for each of these programs that an effective DHS-wide framework for managing systems investments would be instrumental in addressing. Our work has indicated that managing acquisitions is also a major management challenge for DHS. The department faces the challenge of structuring its acquisition organization so that its various procurement organizations are held accountable for complying with procurement policies and regulations and ensuring that taxpayer dollars are well spent. In addition, the department has in place a number of large, complex, and high-cost acquisition programs, such as US-VISIT and the Coast Guard's Deepwater program, which will need to be closely managed to ensure that they receive the appropriate level of oversight and that acquisition decisions are made based on the right level of information. For example, we reported in March 2004 that the Deepwater program needed to pay more attention to management and contractor oversight in order to avoid cost overruns. We have also reported on contract management problems at the former Immigration and Naturalization Service, now a part of DHS, and at TSA. We will issue a report at the end of this month that addresses (1) areas where DHS has been successful in promoting collaboration among its various organizations, (2) areas where DHS still faces challenges in integrating the acquisition function, and (3) the department's progress in implementing an effective review process for its major, complex investments.
DHS also faces management challenges in coordinating research and development (R&D). Our work has recently found that DHS has not yet completed a strategic plan to identify priorities, goals, objectives, and policies for the R&D of homeland security technologies and that additional challenges remain in its coordination with other federal agencies. Failure to complete a strategic plan and to fully coordinate its research efforts may limit DHS's ability to leverage resources and could increase the potential for duplication of research. In addition, DHS faces challenges with regard to its use of Department of Energy (DOE) laboratories. These challenges include developing a better working relationship through improved communication and developing clear, well-defined criteria for designating the DOE laboratories that are to receive the majority of DHS's R&D funding. Moreover, DHS faces the challenge of balancing the immediate needs of the users of homeland security technologies with the need to conduct R&D on advanced technologies for the future. Similarly, conducting R&D on technologies for detecting, preventing, and mitigating terrorist threats is vital to enhancing the security of the nation's transportation system. In our report on the Transportation Security Administration's (TSA) and DHS's transportation security R&D programs, we found that although TSA and DHS have made some efforts to coordinate R&D with each other and with other federal agencies, both their coordination with the Department of Transportation (DOT) and their outreach to the transportation industry have been limited. For example, officials from the modal administrations of DOT, which continue to conduct some transportation security R&D, said they had not provided any input into TSA's and DHS's transportation security R&D project selections. Consequently, DOT's and the transportation industry's security R&D needs may not be adequately reflected in TSA's and DHS's R&D portfolios. Therefore, we recommended that TSA and DHS (1) develop a process with DOT to coordinate transportation security R&D, such as a memorandum of agreement identifying roles and responsibilities and designating agency liaisons, and (2) develop a vehicle to communicate with the transportation industry to ensure that its R&D security needs have been identified and considered. DHS generally concurred with our report and its recommendations. Given the dominant role that DHS plays in securing the homeland, it is critical that DHS be able to ensure that its management systems are operating as efficiently and effectively as possible. While it is understood that a transformation of this magnitude takes time and that DHS's immediate focus has been on its homeland security mission, we see the need for DHS to increase its focus on management issues. This is important not only to DHS itself but also to the nation's homeland security efforts because, in addition to managing its own organization, DHS plays a larger role in managing homeland security and in coordinating with other federal, state, local, and private stakeholders. This larger DHS role presents its own unique challenges. For example, DHS faces the challenge of clarifying the role of government versus the private sector. In April 2002, we testified that the appropriate roles and responsibilities within and between the levels of government and with the private sector are evolving and need to be clarified. New threats are prompting a reassessment and shifting of long-standing roles and responsibilities.
These shifts have been occurring on a piecemeal and ad hoc basis without the benefit of an overarching framework and criteria to guide the process. As another example, DHS faces a challenge in determining how federal resources are allocated to nonfederal stakeholders. We have long advocated a risk management approach to guide the allocation of resources and investments for improving homeland security. Additionally, the Office of Management and Budget (OMB) has identified various tools, such as benefit-cost analysis, that it considers useful in planning activities such as capital budgeting and regulatory decision making. DHS must develop a commonly accepted framework and supporting tools to inform cost allocations in a risk management process. Although OMB asked the public in 2002 for suggestions on how to adjust standard tools to the homeland security setting, a vacuum currently exists in which the benefits of homeland security investments are often not quantified and almost never valued in monetary terms. As a final example, DHS faces a challenge in sharing information among all stakeholders. DHS has initiatives underway to enhance information sharing (including the development of a homeland security enterprise architecture to integrate sharing among federal, state, and local authorities). However, our August 2003 report noted that these initiatives, while beneficial for the partners, presented challenges because they (1) were not well coordinated, (2) risked limiting participants' access to information, and (3) potentially duplicated the efforts of some key agencies at each level of government. We also found that despite various legislation, strategies, and initiatives, federal agencies, states, and cities did not consider the information sharing process to be effective. A well-managed DHS will be needed to meet these larger homeland security challenges. As DHS continues to evolve, integrate its functions, and implement its programs, we will continue to review its progress and provide information to Congress for oversight purposes. Mr. Chairman, this concludes my prepared statement. I will now be pleased to respond to any questions that you or other members of the subcommittee have. For further information about this testimony, please contact Norman J. Rabkin at 202-512-8777. Other key contributors to this statement were Stephen L. Caldwell, Wayne A. Ekblad, Carole J. Cimitile, Ryan T. Coles, Tammy R. Conquest, Benjamin C. Crawford, Heather J. Dunahoo, Kimberly M. Gianopoulos, David B. Goldstein, Randolph C. Hite, Robert G. Homan, Casey L. Keplinger, Eileen R. Larence, Michele Mackin, Lisa R. Shames, and Sarah E. Veale. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) plays a key role in coordinating the nation's homeland security efforts with stakeholders in the federal, state, local, and private sectors. While GAO has conducted numerous reviews of specific DHS missions, such as border and transportation security and emergency preparedness, this testimony focuses on overall DHS management issues. Specifically, it addresses (1) why GAO designated DHS's transformation as a high-risk area and (2) the specific management challenges facing DHS. GAO designated DHS's transformation as a high-risk area in 2003, based on three factors. First, DHS faced enormous challenges in implementing an effective transformation process, developing partnerships, and building management capacity because it had to transform 22 agencies into one department. Second, DHS faced a broad array of operational and management challenges that it inherited from its component legacy agencies. Finally, DHS's failure to effectively address its management challenges and program risks could have serious consequences for our national security. Overall, DHS has made some progress, but significant management challenges remain to transform DHS into a more efficient organization while maintaining and improving its effectiveness in securing the homeland. Therefore, DHS's transformation remains a high-risk area. DHS faces a number of management challenges in improving its ability to carry out its homeland security missions. Among these challenges are providing focus for management efforts; monitoring transformation and integration; improving strategic planning; managing human capital; strengthening financial management infrastructure; establishing an information technology management framework; managing acquisitions; and coordinating research and development.
OPS, within the Department of Transportation’s Research and Special Programs Administration (RSPA), administers the national regulatory program to ensure the safe transportation of natural gas and hazardous liquids by pipeline. The office attempts to ensure the safe operation of pipelines through regulation, national consensus standards, research, education (e.g., to prevent excavation-related damage), oversight of the industry through inspections, and enforcement when safety problems are found. The office uses a variety of enforcement tools, such as compliance orders and corrective action orders that require pipeline operators to correct safety violations, notices of amendment to remedy deficiencies in operators’ procedures, administrative actions to address minor safety problems, and civil penalties. OPS is a small federal agency. In fiscal year 2003, OPS employed about 150 people, about half of whom were pipeline inspectors. Before imposing a civil penalty on a pipeline operator, OPS issues a notice of probable violation that documents the alleged violation and a notice of proposed penalty that identifies the proposed civil penalty amount. Failure by an operator to inspect the pipeline for leaks or unsafe conditions is an example of a violation that may lead to a civil penalty. OPS then allows the operator to present evidence either in writing or at an informal hearing. Attorneys from RSPA’s Office of Chief Counsel preside over these hearings. Following the operator’s presentation, the civil penalty may be reaffirmed, reduced, or withdrawn. If the hearing officer determines that a violation did occur, the Office of Chief Counsel issues a final order that requires the operator to correct the safety violation (if a correction is needed) and pay the penalty (called the “assessed penalty”). The operator has 20 days after the final order is issued to pay the penalty. The Federal Aviation Administration (FAA) collects civil penalties for OPS. From 1992 through 2002, federal law allowed OPS to assess up to $25,000 for each day a violation continued, not to exceed $500,000 for any related series of violations. In December 2002, the Pipeline Safety Improvement Act increased these amounts to $100,000 and $1 million, respectively. The effectiveness of OPS’s enforcement strategy cannot be determined because OPS has not incorporated three key elements of effective program management—clear performance goals for the enforcement program, a fully defined strategy for achieving these goals, and performance measures linked to goals that would allow an assessment of the enforcement strategy’s impact on pipeline safety. OPS’s enforcement strategy has undergone significant changes in the last 5 years. Before 2000, the agency emphasized partnering with the pipeline industry to improve pipeline safety rather than punishing noncompliance. In 2000, in response to concerns that its enforcement was weak and ineffective, the agency decided to institute a “tough but fair” enforcement approach and to make greater use of all its enforcement tools, including larger and more frequent civil penalties. In 2001, to further strengthen its enforcement, OPS began issuing more corrective action orders requiring operators to address safety problems that led or could lead to pipeline accidents. In 2002, OPS created a new Enforcement Office to focus more on enforcement and help ensure consistency in enforcement decisions. However, this new office is not yet fully staffed, and key positions remain vacant. 
In 2002, OPS began to enforce its new integrity management and operator qualification standards in addition to its minimum safety standards. Initially, while operators were gaining experience with the new, complex integrity management standards, OPS primarily used notices of amendment, which require improvements in procedures, rather than stronger enforcement actions. Now that operators have this experience, OPS has begun to make greater use of civil penalties in enforcing these standards. OPS has also recently begun to reengineer its enforcement program. Efforts are under way to develop a new enforcement policy and guidelines, develop a streamlined process for handling enforcement cases, modernize and integrate the agency’s inspection and enforcement databases, and hire additional enforcement staff. However, as I will now discuss, OPS has not put in place key elements of effective management that would allow it to determine the impact of its evolving enforcement program on pipeline safety. Although OPS has overall performance goals, it has not established specific goals for its enforcement program. According to OPS officials, the agency’s enforcement program is designed to help achieve the agency’s overall performance goals of (1) reducing the number of pipeline accidents by 5 percent annually and (2) reducing the amount of hazardous liquid spills by 6 percent annually. Other agency efforts—including the development of a risk-based approach to finding and addressing significant threats to pipeline safety and of education to prevent excavation-related damage to pipelines—are also designed to help achieve these goals. OPS’s overall performance goals are useful because they identify the end outcomes, or ultimate results, that OPS seeks to achieve through all its efforts. However, OPS has not established performance goals that identify the intermediate outcomes, or direct results, that OPS seeks to achieve through its enforcement program. Intermediate outcomes show progress toward achieving end outcomes. For example, enforcement actions can result in improvements in pipeline operators’ safety performance—an intermediate outcome that can then result in the end outcome of fewer pipeline accidents and spills. OPS is considering establishing a goal to reduce the time it takes the agency to issue final enforcement actions. While such a goal could help OPS improve the management of the enforcement program, it does not reflect the various intermediate outcomes the agency hopes to achieve through enforcement. Without clear goals for the enforcement program that specify intended intermediate outcomes, agency staff and external stakeholders may not be aware of what direct results OPS is seeking to achieve or how enforcement efforts contribute to pipeline safety. OPS has not fully defined its strategy for using enforcement to achieve its overall performance goals. According to OPS officials, the agency’s increased use of civil penalties and corrective action orders reflects a major change in its enforcement strategy. However, although OPS began to implement these changes in 2000, it has not yet developed a policy that defines this new, more aggressive enforcement strategy or describes how it will contribute to the achievement of its performance goals. In addition, OPS does not have up-to-date, detailed internal guidelines on the use of its enforcement tools that reflect its current strategy. 
Furthermore, although OPS began enforcing its integrity management standards in 2002 and received greater enforcement authority under the 2002 pipeline safety act, it does not yet have guidelines in place for enforcing these standards or implementing the new authority provided by the act. According to agency officials, OPS management communicates enforcement priorities and ensures consistency in enforcement decisions through frequent internal meetings and detailed inspection protocols and guidance. Agency officials recognize the need to develop an enforcement policy and up-to-date detailed enforcement guidelines and have been working to do so. To date, the agency has completed an initial set of enforcement guidelines for its operator qualification standards and has developed other draft guidelines. However, because of the complexity of the task, agency officials do not expect that the new enforcement policy and remaining guidelines will be finalized until sometime in 2005. The development of an enforcement policy and guidelines should help define OPS’s enforcement strategy; however, it is not clear whether this effort will link OPS’s enforcement strategy with intermediate outcomes, since agency officials have not established performance goals specifically for their enforcement efforts. We have reported that such a link is important. According to OPS officials, the agency currently uses three performance measures and is considering three additional measures to determine the effectiveness of its enforcement activities and other oversight efforts. (See table 1.) The three current measures provide useful information about the agency’s overall efforts to improve pipeline safety, but do not clearly indicate the effectiveness of OPS’s enforcement strategy because they do not measure the intermediate outcomes of enforcement actions that can contribute to pipeline safety, such as improved compliance. The three measures that OPS is considering could provide more information on the intermediate outcomes of the agency’s enforcement strategy, such as the frequency of repeat violations and the number of repairs made in response to corrective action orders, as well as other aspects of program performance, such as the timeliness of enforcement actions. We have found that agencies that are successful in measuring performance strive to establish measures that demonstrate results, address important aspects of program performance, and provide useful information for decision-making. While OPS’s new measures may produce better information on the performance of its enforcement program than is currently available, OPS has not adopted key practices for achieving these characteristics of successful performance measurement systems: Measures should demonstrate results (outcomes) that are directly linked to program goals. Measures of program results can be used to hold agencies accountable for the performance of their programs and can facilitate congressional oversight. If OPS does not set clear goals that identify the desired results (intermediate outcomes) of enforcement, it may not choose the most appropriate performance measures. OPS officials acknowledge the importance of developing such goals and related measures but emphasize that the diversity of pipeline operations and the complexity of OPS’s regulations make this a challenging task. Measures should address important aspects of program performance and take priorities into account. 
An agency official told us that a key factor in choosing final measures would be the availability of supporting data. However, the most essential measures may require the development of new data. For example, OPS has developed databases that will track the status of safety issues identified in integrity management and operator qualification inspections, but it cannot centrally track the status of safety issues identified in enforcing its minimum safety standards. Agency officials told us that they are considering how to add this capability as part of an effort to modernize and integrate their inspection and enforcement databases. Measures should provide useful information for decision-making, including adjusting policies and priorities. OPS uses its current measures of enforcement performance in a number of ways, including monitoring pipeline operators’ safety performance and planning inspections. While these uses are important, they are of limited help to OPS in making decisions about its enforcement strategy. OPS has acknowledged that it has not used performance measurement information in making decisions about its enforcement strategy. OPS has made progress in this area by identifying possible new measures of enforcement results (outcomes) and other aspects of program performance, such as indicators of the timeliness of enforcement actions, that may prove more useful for managing the enforcement program. In 2000, in response to criticism that its enforcement activities were weak and ineffective, OPS increased both the number and the size of the civil monetary penalties it assessed. Pipeline safety stakeholders expressed differing opinions about whether OPS’s civil penalties are effective in deterring noncompliance with pipeline safety regulations. OPS assessed more civil penalties during the past 4 years under its current “tough but fair” enforcement approach than it did in the previous 5 years, when it took a more lenient enforcement approach. (See fig. 2.) From 2000 through 2003, OPS assessed 88 civil penalties (22 per year on average) compared with 70 civil penalties from 1995 through 1999 (about 14 per year on average). For the first 5 months of 2004, OPS proposed 38 civil penalties. While the recent increase in the number and the size of civil penalties may reflect OPS’s new “tough but fair” enforcement approach, other factors, such as more severe violations, may be contributing to the increase as well. Overall, OPS does not use civil penalties extensively. Civil penalties represent about 14 percent (216 out of 1,530) of all enforcement actions taken over the past 10 years. OPS makes more extensive use of other types of enforcement actions that require pipeline operators to fix unsafe conditions and improve inadequate procedures, among other things. In contrast, civil penalties represent monetary sanctions for violating safety regulations but do not require safety improvements. OPS may increase its use of civil penalties as it begins to use them to a greater degree for violations of its integrity management standards. The average size of the civil penalties has increased. For example, from 1995 through 1999, the average assessed civil penalty was about $18,000. From 2000 through 2003, the average assessed civil penalty increased by 62 percent to about $29,000. Assessed penalty amounts ranged from $500 to $400,000. In some instances, OPS reduces proposed civil penalties when it issues its final order. 
We found that penalties were reduced 31 percent of the time during the 10-year period covered by our work (66 of 216 instances). These penalties were reduced by about 37 percent (from a total of $2.8 million to $1.7 million). The dollar difference between the proposed and the assessed penalties would be over three times as large had our analysis included the extraordinarily large penalty for the Bellingham, Washington, incident. For this case, OPS proposed a $3.05 million penalty and had assessed $250,000 as of May 2004. If we include this penalty, then over this period OPS reduced total proposed penalties by about two-thirds, from about $5.8 million to about $2 million. OPS’s database does not provide summary information on why penalties are reduced. According to an OPS official, the agency reduces penalties when an operator presents evidence that the OPS inspector’s finding is weak or wrong or when the pipeline’s ownership changes during the period between the proposed and assessed penalty. It was not practical for us to gather information on a large number of penalties that were reduced, but we did review several to determine the reasons for the reductions. OPS reduced one of the civil penalties we reviewed because the operator provided evidence that OPS inspectors had miscounted the number of pipeline valves that OPS said the operator had not inspected. Since the violation was not as severe as OPS had stated, OPS reduced the proposed penalty from $177,000 to $67,000. Of the 216 penalties that OPS assessed from 1994 through 2003, pipeline operators paid the full amount 93 percent of the time (200 instances) and reduced amounts 1 percent of the time (2 instances). (See fig. 3.) Fourteen penalties (6 percent) remain unpaid, totaling about $837,000 (or 18 percent of penalty amounts). In two instances, operators paid reduced amounts. We followed up on one of these assessed penalties. In this case, the operator requested that OPS reconsider the assessed civil penalty and OPS reduced it from $5,000 to $3,000 because the operator had a history of cooperation and OPS wanted to encourage future cooperation. For the 14 unpaid penalties, neither FAA’s nor OPS’s data show why the penalties have not been collected. We expect to present a fuller discussion of the reasons for these unpaid penalties and OPS’s and FAA’s management controls over the collection of penalties when we report to this and other committees next month. Although OPS has increased both the number and the size of the civil penalties it has imposed, the effect of this change on deterring noncompliance with safety regulations, if any, is not clear. The stakeholders we spoke with expressed differing views on whether the civil penalties deter noncompliance. The pipeline industry officials we contacted believed that, to a certain extent, OPS’s civil penalties encourage pipeline operators to comply with pipeline safety regulations because they view all of OPS’s enforcement actions as deterrents to noncompliance. However, some industry officials said that OPS’s enforcement actions are not their primary motivation for safety. Instead, they said that pipeline operators are motivated to operate safely because they need to avoid any type of accident, incident, or OPS enforcement action that impedes the flow of products through the pipeline and hinders their ability to provide good service to their customers. 
Pipeline industry officials also said that they want to operate safely and avoid pipeline accidents because accidents generate negative publicity and may result in costly private litigation against the operator. Most of the interstate agents, representatives of their associations, and insurance company officials expressed views similar to those of the pipeline industry officials, saying that they believe civil penalties deter operators' noncompliance with regulations to a certain extent. However, a few disagreed with this point of view. For example, the state agency representatives and a local government official said that OPS's civil penalties are too small to be deterrents. Pipeline safety advocacy groups that we talked to also said that the civil penalty amounts OPS imposes are too small to have any deterrent effect on pipeline operators. As discussed earlier, for 2000 through 2003, the average assessed penalty was about $29,000. According to economic literature on deterrence, pipeline operators may be deterred if they expect a sanction, such as a civil penalty, to exceed any benefits of noncompliance. Such benefits could, in some cases, be lower operating costs. The literature also recognizes that the negative consequences of noncompliance—such as those stemming from lawsuits, bad publicity, and the value of the product lost from accidents—can deter noncompliance along with regulatory agency oversight. Thus, for example, the expected costs of a legal settlement could overshadow the lower operating costs expected from noncompliance, and noncompliance might be deterred. Mr. Chairman, this concludes my prepared statement. We expect to report more fully on these and other issues when we complete our work next month. We also anticipate making recommendations to improve OPS's ability to demonstrate the effectiveness of its enforcement strategy and to improve OPS's and FAA's management controls over the collection of civil penalties. I would be pleased to respond to any questions that you or Members of the Committee might have. For information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or siggerudk@gao.gov. Individuals making key contributions to this testimony are Jennifer Clayborne, Judy Guilliams-Tapia, Bonnie Pignatiello Leer, Gail Marnik, James Ratzenberger, and Gregory Wilmoth. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Interstate pipelines carrying natural gas and hazardous liquids (such as petroleum products) are safer to the public than other modes of freight transportation. The Office of Pipeline Safety (OPS), the federal agency that administers the national regulatory program to ensure safe pipeline transportation, has been undertaking a broad range of activities to make pipeline transportation safer. However, the number of serious accidents--those involving deaths, injuries, and property damage of $50,000 or more--has not fallen. Among other things, OPS takes enforcement action against pipeline operators when safety problems are found. OPS has several enforcement tools to require the correction of safety violations. It can also assess monetary sanctions (civil penalties). This testimony is based on ongoing work for the Senate Committee on Commerce, Science and Transportation and for other committees, as required by the Pipeline Safety Improvement Act of 2002. The testimony provides preliminary results on (1) the effectiveness of OPS's enforcement strategy and (2) OPS's assessment of civil penalties. The effectiveness of OPS's enforcement strategy cannot be determined because the agency has not incorporated three key elements of effective program management--clear program goals, a well-defined strategy for achieving goals, and performance measures that are linked to program goals. Without these key elements, the agency cannot determine whether recent and planned changes in its strategy will have the desired effects on pipeline safety. Over the past several years, OPS has focused on other efforts--such as developing a new risk-based regulatory approach--that it believes will change the safety culture of the industry. While OPS has become more aggressive in enforcing its regulations, it now intends to further strengthen the management of its enforcement program. In particular, OPS is developing an enforcement policy that will help define its enforcement strategy and has taken initial steps toward identifying new performance measures. However, OPS does not plan to finalize the policy until 2005 and has not adopted key practices for achieving successful performance measurement systems, such as linking measures to goals. OPS increased both the number and the size of the civil penalties it assessed against pipeline operators over the last 4 years (2000-2003) following its decision to be "tough but fair" in assessing penalties. OPS assessed an average of 22 penalties per year during this period, compared with an average of 14 per year for the previous 5 years (1995-1999), a period of more lenient "partnering" with industry. In addition, the average penalty increased from $18,000 to $29,000 over the two periods. About 94 percent of the 216 penalties levied from 1994 through 2003 have been paid. The civil penalty is one of several actions OPS can take when it finds a violation, and these penalties represent about 14 percent of all enforcement actions over the past 10 years. While OPS has increased the number and size of civil penalties, stakeholders--including industry, state, and insurance company officials and public advocacy groups--expressed differing views on whether these penalties deter noncompliance with safety regulations. Some, such as pipeline operators, thought that any penalty was a deterrent if it kept the pipeline operator in the public eye, while others, such as safety advocates, told us that the penalties were too small to be effective sanctions.
Investigations and arrests are core functions of all federal law enforcement agencies. According to BJS, as of June 2000 (the latest available information), 69 federal agencies employed 88,000 full-time law enforcement officers. Of these, Justice employed more than half (58 percent) and the Department of the Treasury employed 21 percent. During fiscal year 2001, according to BJS, these two agencies accounted for the highest percentage of arrests for federal offenses, with 71 percent made by Justice components and 12 percent by Treasury components. In addition, state and local agencies made 4 percent of the arrests, and 7 percent of the arrests were made after the suspects voluntarily reported to the Marshals Service following a summons. The remaining 6 percent of the arrests were made by other agencies or were undesignated in the statistics. Suspects arrested by federal agencies for federal crimes are transferred to the custody of the Marshals Service for processing, transportation, and detention. According to BJS, in 2001 the Marshals Service received 118,896 suspects from federal law enforcement agencies, including those the Marshals arrested themselves. Of these arrests, 28 percent were for drug offenses; 21 percent for immigration offenses; 16 percent for supervision violations; 14 percent for property offenses (such as embezzlement, forgery, burglary, and motor vehicle theft); 8 percent for public-order offenses; 5 percent for weapons offenses; 4 percent for violent offenses; and 3 percent to secure and safeguard a material witness. Figure 1 shows the number of suspects arrested only for federal offenses for the agencies we reviewed. Some suspects arrested by federal agents are transferred to state and local jurisdictions for prosecution for nonfederal crimes. For example, according to BJS, in fiscal year 2001 DEA arrested 11,778 suspects for federal offenses who were booked by the Marshals Service. However, DEA's statistical reporting system recorded over 35,000 arrests that same year; the additional suspects were turned over to state and local authorities and were not booked through the Marshals Service, according to BJS. Similarly, BJS reported that USPIS arrested 1,226 suspects for federal offenses who were booked by the Marshals Service in fiscal year 2001, while USPIS told us that it had made 4,698 federal arrests that same year. The guidance and procedures for counting investigations, or "cases" as they are sometimes referred to, are generally consistent among the six agencies we reviewed. Agencies pursue investigations into crimes that have a nexus to their respective missions, such as drug trafficking for DEA, mail theft for USPIS, and illegal aliens for INS. Generally, according to their guidance and procedures, agencies open cases that result from tips or leads received from confidential informants or other sources; they may also be invited to help in other agencies' cases or participate in task force investigations. Once the agents have made the decision to open a case, the cases are to be reviewed and approved by a supervisor, and details of the case are then entered into the agencies' case management tracking systems and counted. We also found agency guidance and procedures for counting arrests to be generally consistent among all six agencies. That is, agents must be directly involved in the arrest, assist other law enforcement personnel in making the arrest, or provide information that leads to an arrest.
For example, according to DEA's Agents Manual, agents are to count drug-related arrests only when DEA is directly involved in the arrest. Similarly, USPIS inspectors are to count arrests when physically present or providing assistance. According to USPIS's Inspection Service Manual, inspectors are to count an arrest when an inspector participated personally in making an arrest or contributed significantly to an investigation resulting in an arrest made by another law enforcement agency; an inspector's investigative efforts with another law enforcement agency motivate and materially contribute to the identification and arrest of a person for a postal crime even though the inspector was not present at the time; or an inspector's investigation of a postal offense develops additional, significant evidence that is brought to the prosecutor's attention and leads to prosecution for an additional offense. The determination of material contribution, as used in the Inspection Service Manual, is left to the judgment of a supervisor. For example, according to USPIS officials, if a postal inspector alerted the highway patrol to an individual suspected of mail theft, or if the inspector was conducting an ongoing investigation on the suspect and the highway patrol made the arrest, the arrest would be claimed by USPIS even though the postal inspectors did not make the physical arrest. The other agencies in our review used similar criteria for counting arrests. In addition, the agencies required supervisory review of the justifications for the arrests before they were entered into the agencies' data tracking systems and officially counted. In addition to their guidance and procedures for counting investigations and arrests, three of the agencies in our review—DEA, FBI, and USPIS—have an inspection process to, among other things, review the appropriateness of investigations and arrests that are made. DEA, for example, told us that its Inspections Division periodically validates a sample of arrests and screens them for any type of questionable activity, such as "piggy backing" arrests. According to a DEA official, piggy backing is when state and local law enforcement agencies perform the investigative work and a DEA agent goes along for the arrest, writes it up, and claims credit for the arrest. The official said that the Inspections Division has consistently found a very low percentage of questionable arrests but that it has not compiled a database of questionable arrests. The official gave one example from DEA's New York Field Office, where 2 or 3 questionable arrests out of over 8,000 were found. The official indicated that questionable arrests are mostly isolated incidents and are not part of any systemic problems. The official concluded that if questionable arrests were found, those arrest statistics would be removed from the agency database. FBI and USPIS officials said that no questionable arrests were found during their reviews. Agency officials with whom we spoke told us that investigation and arrest statistics are used for many purposes, depending on the circumstances. In general, the officials said that statistics serve as indicators of agency work and as output measures in performance plans, budget justifications, and testimonies and, for some agencies, are considered in making promotion, bonus, and award determinations.
Officials at the agencies we reviewed said that investigation and arrest statistics are not emphasized in any of these activities but are one of many factors that are considered when reporting agency results or when making personnel decisions. We found that agencies generally reported investigation and arrest statistics in their budget justifications, congressional testimonies, and/or other public documents. These statistics, however, were not the only criteria used as indicators of agency workload and productivity. We reviewed FBI budget requests for fiscal years 2003 and 2004, for example, and found numbers of investigations and arrests listed in the documents, as well as numbers of indictments and convictions. DEA's budget requests to Congress also included investigation and arrest statistics. For example, in its fiscal year 2003 budget request, DEA reported on an operation that resulted in 38 arrests, and a table in DEA's fiscal year 2004 budget request entitled "Domestic Enforcement" showed the number of national/local investigations, investigations completed, and total investigations. Conversely, INS did not cite investigation and arrest statistics in its budget justification documents. Investigation and arrest activities were discussed, however, in the very broadest terms. For example, in its 2003 budget justification documentation, INS indicated that it would initiate high-priority investigations, conduct asset seizures, and present individuals for prosecution for alien smuggling-related violations to disrupt the means and methods that facilitate alien smuggling. And, in its 2002 documentation, INS noted that, as a result of its efforts, many alien smugglers, fraud organizations, and facilitators were arrested and presented for prosecution; assets were seized; and aliens who had a nexus to organized crime, violent gangs, or drug trafficking gangs, or who had terrorist-related affiliations, were apprehended. Concerning inclusion of arrest statistics in congressional testimony, DEA's Administrator's testimonies to Congress on the agency's budget requests often contained references to successful cases that resulted in arrests. For example, in the fiscal year 2003 budget request, the Administrator said that an operation resulted in 14 arrests. The Administrator's testimony for fiscal year 2004 also included similar examples, such as DEA disrupting 30 drug trafficking organizations and dismantling 15 others. In addition, in congressional testimony on the fiscal year 2002 budget request, the FBI Director said that overall, during fiscal year 2000, FBI investigations contributed to the indictment of over 19,000 individuals, the conviction of over 21,000 individuals, and the arrest of more than 36,000 individuals. As an example of using investigation statistics in public documents, on September 2, 2003, DEA listed on its Web site 37 major operations that it had been involved in from 1992 to 2003. Many of these listings detailed major investigations involving joint operations with other federal, state, and local law enforcement agencies that resulted in disruptions and dismantlement of narcotics trafficking operations and in numerous arrests. In addition, many of the listings gave credit to the other participating agencies for their work on the same cases. Agency officials said, however, that investigation and arrest statistics are only one of many factors used as indicators of agency workload and productivity and are not emphasized in reporting results of agencies' workload performance.
For example, DEA officials told us that instead of pursuing numbers of investigations and arrests, their focus is on targeting, disrupting, and dismantling major drug trafficking organizations; working cooperatively and closely with other federal, state, and local law enforcement agencies; and making an impact on reducing the flow of narcotics and dangerous drugs into the United States. Agency officials with whom we spoke told us that investigation and arrest statistics are used as measures of productivity and indicators of workload activity, but only to a limited extent in personnel management activities such as promotions, bonuses, and awards. USPIS officials, for example, said that investigation and arrest statistics are only one of many indicators of an individual's performance and are not required for making promotion and other personnel decisions. The knowledge, skills, and abilities for promotion do not list "number of arrests" as a competency. For example, criteria for promotion to the manager level, GS-15, are based on competencies including customer focus, interpersonal skills, problem identification and analysis/decision making, strategic leadership, and oral and written communication. USPIS officials said that awards and bonuses are usually given for performances above and beyond normal expectations, not just for making arrests. An inspector or team, for example, that makes a large number of arrests as the culmination of an investigation could receive an award, according to the officials. The other agencies we reviewed generally followed criteria for the use of investigation and arrest statistics in performance management decisions similar to those described earlier by the USPIS officials. For example, INS's promotion criteria considered such factors as job experience, decision-making, managerial writing, and job simulation. All of the agencies we reviewed counted the same investigations and arrests when more than one of them participated in the investigative and arresting activities. In two of the three reviews of joint investigations that we performed, agencies reported that they each counted some of the same arrestees involved in the investigations. (See apps. VIII and IX.) Agency officials told us that they believe this practice is appropriate because, in their opinion, many investigations and arrests would not have occurred without the involvement and cooperation of all the agencies that participated. If agencies were not allowed to count investigations and arrests in which they participated, the officials said, agencies would be less likely to work together, cases would be much smaller, and the desired disruption of high-level criminal organizations would be hampered. In general, in their press releases and on their Web sites, the agencies we reviewed gave credit to one another when they jointly participated in major investigations that resulted in a number of arrests. We found several examples of this practice, but did not find any overall federal database that would identify joint investigations and arrests conducted by multiple federal law enforcement agencies. Several of the agencies' internal databases, however, are capable of identifying joint investigations and arrests, while others could possibly be so modified, according to agency officials. For example, DEA's statistical database was able to identify arrests made by DEA unilaterally, as well as those made jointly with other federal, state, and local law enforcement agencies, as shown in figure 3.
Federal law enforcement agencies are in business to enforce the nation's laws and regulations, investigate the activities of criminal organizations, and arrest individuals suspected of criminal activity. Increasingly, federal law enforcement agencies do not pursue these activities in a vacuum. All involved agencies do count the same investigations and arrests resulting from joint operations, present these statistics in their public documents and budget justifications, and consider their actions justified. There is no central federal repository of joint investigations and arrests conducted by the agencies we reviewed. Moreover, not all of the agencies currently distinguish between unilateral and joint arrests and investigations within their databases. Making this distinction may help guide Congress when making budget decisions about these agencies. Also, the agencies can provide this information or, if instructed, modify their databases to reflect more refined information. However, we did not evaluate what cost, if any, would be associated with requiring agencies to do so. We provided a draft copy of this report to Justice, DHS, and USPIS. Justice and DHS indicated that they had no further comments on our draft; however, technical clarifications were provided during our exit meetings. USPIS agreed with our report's overall finding that federal law enforcement agencies are generally consistent in the way they report and make use of investigation and arrest statistics. However, USPIS provided technical comments, which we have incorporated as appropriate. USPIS's written comments are reproduced in appendix XI. We are providing copies of this report to the Attorney General, the Secretary of Homeland Security, and the Postmaster General. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact Darryl W. Dutton at (213) 830-1086 or me at (202) 512-8777. Key contributors to this report are listed in appendix XII. Overall, to address our objectives, we selected six federal agencies that perform investigations and arrest suspects. The agencies selected were the Drug Enforcement Administration (DEA), the Federal Bureau of Investigation (FBI), and the U.S. Marshals Service from the Department of Justice (Justice); the former U.S. Customs Service (Customs) and Immigration and Naturalization Service (INS), now part of the Department of Homeland Security (DHS); and the U.S. Postal Inspection Service (USPIS). We selected DEA, FBI, Customs, INS, and the Marshals Service because these agencies reported the highest number of federal arrests, according to the Bureau of Justice Statistics (BJS) Compendium of Federal Justice Statistics, 2000, the most recent data available at the time of our selection. We selected USPIS because it reported the highest number of federal arrests among non-Justice and non-Treasury agencies. In addition, our review focused on fiscal years 1998 through 2001, as mandated by the 21st Century Department of Justice Appropriations Act. Our review focused on agencies' policies and procedures used to count investigations and arrests, not on the number of investigations conducted and arrests made. Therefore, we did not perform reliability assessments of data systems at the selected agencies.
To identify the guidance and procedures followed by federal law enforcement agencies regarding counting and reporting investigation and arrest statistics, we reviewed agency mission statements, policies, and applicable manuals concerning investigations and arrests. We also obtained information about agency investigation and arrest statistical tracking systems, for example, DEA's Case Status Subsystem, FBI's Integrated Statistical Reporting and Analysis database, and USPIS's Inspection Service Database Information System. We also obtained (1) overall statistics of investigations and arrests by federal law enforcement agencies compiled by BJS and used in its Compendium of Federal Justice Statistics for fiscal years 1998, 1999, 2000, and 2001, the latest Compendium available at the time of our review, and (2) selected Office of Inspector General and internal agency inspection reports concerning the use of investigation and arrest statistics. We also interviewed officials from each agency who were responsible for reporting, compiling, analyzing, and disseminating investigation and arrest statistics. To determine how investigation and arrest statistics are used, we reviewed selected agency budget justifications that were submitted to the Congress, congressional testimonies used to justify congressional appropriations, and internal agency manuals and policies for the use of investigation and arrest statistics. We also reviewed guidance on issues such as promotion, bonus, and award criteria for agents and interviewed officials who used investigation and arrest statistics in their administrative and management systems. Our review was performed primarily at the agencies' headquarters offices in Washington, D.C. However, to obtain the perspective of field staff regarding the use of investigation and arrest statistics for administrative and management purposes, we spoke with key DEA, USPIS, Customs, and Marshals Service staff at their Los Angeles offices. To determine whether multiple agencies were reporting the same investigations and arrests, we obtained, when available, information from agency statistical systems, such as DEA's Defendant Statistical System. We wanted to know whether (1) other law enforcement agencies were involved in investigations or arrests as part of joint investigations and (2) the individual agencies could be distinguished from each other. We searched for and obtained from congressional testimony and agencies' Web sites examples of major investigations involving more than one agency and analyzed selected agency budget justifications (e.g., FBI and INS) and performance reports to determine how investigation and arrest statistics were reported to the Congress. We also interviewed agency officials and obtained documents to explain the reasons for either counting or not counting investigations and arrests when other federal, state, or local law enforcement agencies were involved in the investigations. We also conducted assessments of three joint investigations to determine the extent to which agencies were or were not counting the same arrests: a drug trafficking investigation, a child pornography investigation, and a counterterrorism investigation. For the drug trafficking investigation, we searched our selected agencies' Web sites, where available, for joint operations and found Operation Marquis on DEA's Web site. Operation Marquis was a DEA-led investigation that involved the FBI and several other federal law enforcement agencies.
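The arrestee-list crosschecks used in these assessments can be illustrated with a minimal sketch; the file names and field names below are hypothetical and are not drawn from any agency system.

```python
import csv

def load_arrestees(path):
    """Read a hypothetical arrestee list and return a set of normalized match keys."""
    with open(path, newline="") as f:
        return {
            (row["last_name"].strip().upper(),
             row["first_name"].strip().upper(),
             row["date_of_birth"].strip())
            for row in csv.DictReader(f)
        }

# Hypothetical file names; the agencies' actual extracts were not in this format.
dea_list = load_arrestees("dea_marquis_arrestees.csv")
fbi_list = load_arrestees("fbi_marquis_arrestees.csv")

counted_by_both = dea_list & fbi_list
print(f"{len(counted_by_both)} arrestees appear on both agencies' lists")
```

Matching on name and date of birth alone can miss records with data-entry differences or produce coincidental matches, so any such automated comparison would need to be checked against the underlying records.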
We subsequently asked DEA and FBI for lists of arrestees from Operation Marquis and matched them to determine whether both agencies were counting the same arrestees. (See app. VIII for additional information on Operation Marquis.) For the child pornography case, we asked Customs whether it had a joint investigation that included one of the other federal agencies among our selected agencies. Customs recommended that we use Operation Bayou Blaster, which also involved USPIS. Again, we asked both Customs and USPIS to provide us with lists of arrestees associated with Operation Bayou Blaster. Customs provided us with a list, but USPIS was unable to generate a list. Consequently, we asked USPIS to crosscheck its database against the list of arrestees provided by Customs. (See app. IX for additional information on the child pornography investigation.) For the counterterrorism joint investigation, we discussed a Joint Terrorism Task Force operation with FBI and INS officials. The FBI and INS provided us with names of arrestees associated with the operation and included in agency arrest statistics for comparison purposes. (See app. X for additional information on the Counterterrorism Joint Task Force investigation.) We conducted our work at selected agency headquarters in Washington, D.C., and at DEA, USPIS, Customs, and Marshals Service offices in Los Angeles, California. The Drug Enforcement Administration's (DEA) mission is to, among other things, enforce the controlled substances laws and regulations of the United States and investigate and prepare for the prosecution of major violators of controlled substance laws operating at interstate and international levels. To perform its mission, in fiscal year 2003, DEA had a total of 9,629 employees, including 4,680 special agents operating in 225 offices in the United States and in 80 other offices throughout the world. DEA's budget was $1.5 billion in fiscal year 2003. A DEA investigation is referred to as a "case" and involves targeting organizations or businesses suspected of illegal narcotics trafficking. Any given case can include one or multiple organizations or individuals, but it is counted only as one case. DEA's Case Status Subsystem (CAST) is the system used to track cases. CAST identifies, among other things, the target (e.g., criminal organization) of the case and whether other agencies are involved. When an agent has sufficient cause to open a case, he or she enters general information about the case into CAST, such as the file number, agent's name, entity under investigation, date opened, and identification number. DEA pursues investigations into drug trafficking organizations in several ways. DEA special agents may open cases that result from tips or leads received from confidential informants or other sources, may be invited to help in other agencies' cases, and may participate in interagency task force investigations. When DEA agents initiate their own cases, they may also elicit help from other federal, state, and local law enforcement agencies. Agents' cases are approved by a supervisor and are entered into CAST. DEA officials told us that their emphasis, instead of pursuing numbers of investigations and arrests, is on targeting, disrupting, and dismantling major drug trafficking organizations; working cooperatively and closely with other federal, state, and local law enforcement agencies; and making an impact on reducing the flow of narcotics and dangerous drugs into the United States.
On September 2, 2003, DEA listed on its Web site 37 major operations that it had been involved in from 1992 to 2003. Many of these listings detailed major investigations involving joint operations with other federal, state, and local law enforcement agencies that resulted in disruptions and dismantlement of narcotics trafficking operations and in numerous arrests. In addition, many of the listings gave credit to the other participating agencies for their work on the cases. DEA provided us with the numbers of cases closed between fiscal years 1998 and 2001, including DEA unilateral cases and those performed jointly with other law enforcement agencies as shown in figure 2. According to DEA’s Agents Manual, agents are to claim (i.e., count) drug- related arrests only when DEA is directly involved in the arrest. DEA’s process for counting and recording arrests also includes having a supervisory agent review and sign each Form 202, Personal History Report, used to document personal information on each person arrested. By signing the form, the DEA supervisor attests that DEA directly participated in the arrest or, in the case of a foreign arrest, provided substantial assistance. An Associate Deputy Chief, Office of Inspections, said that DEA’s decision about whether to count an arrest is contingent on several factors, including a clear nexus to a drug offense, involvement of a DEA informant or DEA monies, or the physical presence of DEA agents at the time of the arrest, and/or a significant role by DEA agents. DEA’s Defendant Statistical System (DSS) tracks the number of arrests counted by DEA. Each Form 202—there is one Form 202 for each person arrested—is completed and the information entered into the DSS. An arrest can only be entered in DSS once. When the Form 202 information is entered into DSS, duplicate arrests, if any, will be identified and appropriate divisions will be notified to fix the problems. In addition, DEA has a manual system, the Drug Enforcement Arrest Log, which is used as a check against arrests entered into DSS. As with investigation statistics, DEA was able to provide us with numbers of unilateral and joint arrests by fiscal year from 1998 to 2001. Figure 3 shows numbers of arrests and agencies that were involved in the arrests but does not identify the agency(s) making the actual physical arrest. According to an Associate Deputy Chief, Office of Inspections, DEA conducts on-site inspections about every 3 years in its domestic offices. As part of the inspection, the division or office is assessed to determine if it is successfully achieving DEA’s objectives and priorities, including a validation of claimed arrests. Field Management Plans (FMP) are used in the inspections process. FMPs describe the priorities set by the Special Agent in Charge (SAC) of a field division to counter the field office’s unique drug threats and delineate the methods for accomplishing the agency’s mission and priorities. The FMP outlines major operations in the field division that have been completed, the number of major drug trafficking organizations disrupted or dismantled, and other field division highlights. The Office of Inspections assesses the field division’s adherence to the FMP and its success in achieving the goals in the FMP. Case files and the “appropriateness” of arrests are also reviewed. 
A DEA Office of Inspections official told us that periodically a sample of arrests are validated and screened for appropriateness and for any type of questionable activity, such as “piggy backing” arrests. Piggy backing is when a state or local law enforcement agency does the investigative work and DEA goes along for the arrest, writes it up, and claims credit for the arrest. The official said that Inspections has consistently found a very low percentage of questionable arrests at the offices reviewed, but they do not accumulate or maintain a database of questionable arrests. The official gave one example of the New York office, where 2 or 3 questionable arrests out of over 8,000 were found. The official indicated that questionable arrests are mostly isolated incidents, and are not part of any systemic problems. If questionable arrests are found, the officials said that the arrest statistics would be removed from the agency database. In another example, out of approximately 2,200 arrests made by the Los Angeles office in 1999, the Office of Inspections found 16 that were questionable. The Los Angeles SAC said the arrests were questionable because the agents had not adequately documented DEA’s participation in the arrests to qualify for counting them. The Office of Inspections finally judged only 3 of the 16 as not being justified. These 3 involved cases where the Highway Patrol found drugs in suspects’ vehicles and called DEA out to establish probable cause to make the arrest. This is a gray area, according to the SAC, and DEA must show significant participation in order to claim the arrest. In the 3 cases, significant participation was not shown, and the arrests were not allowed, according to the SAC. The SAC also said that in addition to the periodic Office of Inspection evaluations, the Los Angeles office conducts its own yearly self- inspections, including reviews of the appropriateness of claimed arrests. DEA officials also told us that it is appropriate for each participating agency to claim the investigations and arrests that result from joint operations. The officials said that if only one agency could claim the investigations and arrests, agencies would not work together, cases would be much smaller, and the desired outcome of disrupting major drug trafficking organizations would not happen. Overall, DEA officials said that investigation and arrest statistics serve as indicators of agency work, to help determine whether or not something is being accomplished. DEA Domestic Operations officials said that from a managerial standpoint, DEA’s focus is more on who and what priority target organizations have been disrupted or dismantled. Statistical data are useful, the Domestic Operations officials said, because they provide a picture of activity; they are the evidence that validates the work performed. DEA officials from the Evaluation and Planning Section said investigations are DEA’s top output measure and provide basic information on workload. For example, if a group of 10 DEA agents and a supervisor had only 7 arrests for a year, management would want to look at the group to question its level of activity. It could be that the group was engaged in a very long, complicated wiretap case, which would not lend itself to many arrests. Arrests, however, would only be used as an indicator of activity, according to a SAC and Assistant Special Agents in Charge (ASACs). Investigation and arrest statistics are not used as performance indicators in various DEA-related materials. 
For example, in Department of Justice Performance Reports, we were unable to find any investigation and arrest statistics related to DEA. DEA, however, did use numbers of investigations and arrests in its budget justifications and congressional testimonies. For example, in the fiscal year 2003 budget request, DEA reported on an operation that resulted in 38 arrests. For the fiscal year 2004 budget request, DEA also provided tables that included numbers of national and local investigations, investigations completed, and total investigations. According to a SAC and ASACs, investigation and arrest statistics are not used for making agent promotion, award, and performance determinations. The officials said that promotions, awards, and performance ratings are based on many factors, such as levels of violator disruption; coordination efforts with interagency task forces; cooperative activities with other federal, state, and local law enforcement agencies; and furthering DEA's mission. For example, two GS-14 field office group supervisors told us that statistics, particularly arrest statistics, do not play an important part in promotions, bonuses, or other awards. Promotion criteria used to evaluate agents for promotion to GS-14 and GS-15, for example, are based on the following competencies, which do not include investigation and arrest statistics: acting as a model, gathering information and making judgments/decisions, interacting with others, monitoring and guiding, oral communication, and planning and coordinating. A DEA Career Board official told us that when looking for the best-qualified applicant for a GS-14 or GS-15 position, a SAC could consider investigation and/or arrest statistics. For example, if a reference for the applicant is contacted, the reference may highly recommend the applicant because of the applicant's work on a certain case. A DEA official in Employee Relations said that awards and/or bonuses are distributed based on performance or a special act. A special act could include an agent's involvement in a significant arrest or drug seizure. An agent's supervisor writes the justification for an award and/or bonus. The division head and the Chief of Operations at DEA headquarters review the justification. DEA Domestic Operations officials said that performance evaluations that are used as a basis for promotions may well indicate that an agent "maintained a high level of cases" or "participated in several significant cases," but performance decisions are not justified based on sheer numbers alone. The two field office group supervisors also said that if DEA has successful investigations, arrests will naturally follow but are not emphasized in promotion, bonus, and other award decisions. Domestic Operations officials also said that management's "tone at the top," which is emphasized throughout DEA, is not on how many people were arrested, but on what drug trafficking organizations were disrupted and dismantled. The Federal Bureau of Investigation's (FBI) mission is to uphold the law through the investigation of violations of federal criminal law; protect the United States from foreign intelligence and terrorist activities; and provide leadership and law enforcement assistance to federal, state, local, and international agencies.
As of January 31, 2002, approximately 11,000 special agents and 16,000 professional support personnel were located at the FBI's Washington, D.C., headquarters and in 56 field offices, approximately 400 satellite offices, 4 specialized field installations, and over 40 foreign liaison posts. The FBI's budget was $4.6 billion in fiscal year 2003. An FBI investigation is referred to as either preliminary or full field. Officials of the Inspection Division told us that facts and circumstances have to rise to a certain level to justify opening either type of investigation and that the determination is somewhat judgmental. Full field investigations are initiated when there is information that raises a reasonable suspicion that a crime has been committed. If the information received is not deemed sufficient to predicate the opening of a full field investigation, but is determined to warrant further inquiry on a limited basis in order to determine the credibility of an allegation of criminal activity and the need for a more in-depth investigative effort, a preliminary investigation can be opened. The more sensitive investigations, such as foreign counterintelligence, are usually opened as preliminary investigations. Preliminary investigations can proceed to full field investigations. The officials told us that the amount of information needed to initiate an investigation is the same whether the FBI is working alone or is involved in a joint investigation. The Automated Case Support database is the FBI's overall case management system that is used to capture information and data pertaining to each investigation. An agent initiates an investigation, either an FBI investigation or a joint investigation, by opening a hard copy investigative file, using an Investigative Summary Form 302. A supervisor must then approve the initiation. Information pertaining to the investigation is subsequently entered into the database. We asked the FBI to provide us with information on investigations closed in fiscal years 1998-2001. The data provided included investigations pertaining to drugs, violent crime, white-collar crime, counterterrorism, counterintelligence, and cyber crime. These data are displayed in figure 4. Finance Division officials were unable to distinguish between investigations performed solely by the FBI and those performed jointly with other agencies because this information is not captured in the database. The FBI counts an arrest when the subject is taken into custody with a warrant, complaint, or indictment or, if arrested for probable cause, after the judicial paperwork is obtained. The FBI reports the arrest as federal, local, or international. Federal arrests are those in which FBI agents acting alone or with other law enforcement officers arrest the subject. The FBI does not count the arrest if the subject of an FBI investigation is arrested by another law enforcement agency without any assistance from the FBI. However, if the arrest is part of an FBI-led task force, the FBI does count the arrest even if no FBI agent is present. Local arrests are those where the FBI supplied information or other assistance to a local agency that significantly contributed to the probable cause supporting an arrest warrant for an individual who was not the subject of an FBI investigation, and FBI agents were not involved in making the arrests.
International arrests are those where the FBI supplied information or other assistance to another country that significantly contributed to the probable cause supporting an arrest warrant for an individual who either was or was not the subject of an FBI investigation, and FBI agents were not involved in making the arrests. Arrests are reported on the Form FD-515, “Accomplishment Report.” This form captures such information as arrests, convictions, and the various investigative techniques used in an investigation. The agent prepares the FD-515; the supervisor reviews and approves the form, thereby attesting to the arrest as a valid accomplishment. The field office then enters the information into the Integrated Statistical Reporting and Analysis Application (ISRAA) database. Arrests, as well as other accomplishments, are to be entered into the system within 30 days. A variety of edit checks are performed to help ensure the reliability of the data input into ISRAA, and each field office completes an annual audit of the data. We asked the FBI to provide us with information on arrests in fiscal years 1998-2001. These data are displayed in figure 5. While the participation of other agencies is noted on the FD-515, this information is not entered into ISRAA and, as a result, the FBI could not distinguish between arrests made solely by the FBI and those made jointly with other agencies. In addition to the supervisory reviews of investigations and arrests already noted, supervisory agents are to perform periodic investigative file reviews on all investigations being worked by their agents. These reviews occur about every 90 days for investigations led by experienced agents and about every 30 days for investigations led by less experienced agents. During these file reviews, supervisors are to monitor the progress of investigations by reviewing investigative work completed; verifying compliance with any applicable policies and procedures, including those pertaining to any arrests that have been made; and assessing the validity of continuing with the investigation. FBI officials also told us that field offices’ Assistant Special Agents-in-Charge periodically are to check supervisory file reviews to ensure the adequacy of the review process. In addition, the FBI’s Inspection Division is responsible for reviewing FBI field offices and program divisions to ensure compliance with applicable laws and regulations and the efficient and economical management of resources. The Inspection Division is to inspect all FBI units at least once every 3 years. An Inspection Division official told us that the division’s reviews include information pertaining to arrests and that, if it were determined that an arrest had been inappropriately counted, action would be taken, including correction of the data in ISRAA. The officials also said that they had not found evidence of the inappropriate counting of arrests over the period of our review, fiscal years 1998-2001. Inspections Division officials told us that all agencies involved in a joint investigation count investigations and arrests and that this is a long- standing and accepted practice that is part of interagency cooperation. If each agency involved could not count the statistics, there would be more competition among agencies and less participation in joint investigations, according to the officials. Investigation and arrest statistics are used as indicators of the FBI’s work for a variety of purposes, including management of field offices. 
For example, a field office’s numbers for a particular type of crime might spike upward. Finance Division officials said that if a spike occurred, a determination would be made about what happened to account for the spike and changes might be made in how the office performs investigations or where it focuses its resources. The officials told us that field offices use numbers of investigations and arrests as part of justifications for resources, along with other factors such as the type of crimes investigated by the office and whether there has been continual growth in the numbers. Numbers alone are not determinative, however, according to Inspection Division officials. The officials said that FBI headquarters also looks at trends, where agents are placing their investigative emphasis, political factors such as what legislation might be on the horizon, and problems agents are encountering that might indicate a need for new technology. Investigation and arrest statistics are also used as performance indicators in various FBI-related materials. For example, in Department of Justice Performance Reports, the number of terrorism investigations is reported, along with the number of related convictions. The FBI has also used numbers of investigations and arrests, as well as indictments and convictions, in its budget requests and congressional testimonies. For example, in the fiscal year 2003 request, fiscal years 2000 and 2001 actual numbers of investigations pending, opened, and closed are given, along with numbers of arrests, indictments, and convictions. In addition, in congressional testimony on the fiscal year 2002 budget request, the FBI Director said that overall, during fiscal year 2000, FBI investigations contributed to the indictment of over 19,000 individuals, the conviction of over 21,000 individuals, and the arrest of more than 36,000 individuals. Concerning performance management measures, an official from the Executive Development and Selection Program told us that certain numbers of investigations and arrests are not required for promotions, bonuses, or awards. The official told us that the statistics are used only as indicators of an agent’s performance and that it is necessary to look at the work behind the statistics. For example, an agent may have zero arrests but be involved in a complex investigation that has not yet resulted in arrests. Or, an agent may have a high number of arrests resulting from relatively simple investigations. Special agent promotions are scheduled at regular intervals from GS-10 or GS-11 to GS-13 and are contingent upon the satisfactory work record of the individual. Promotions to GS-14 and GS-15 are competitive, but the official told us the vacancy announcements are not yet standardized. Agents applying for promotion must describe how they meet each of the qualifications listed on the announcement. A new system was implemented in January 2004 that will emphasize competencies for each vacancy. The first four competencies will be core competencies and the last three will be specialized. For example, if the vacancy were in the counterterrorism unit, experience with counterterrorism would be listed. When applying for a promotion, an agent will complete a form and address his or her education and training, pre-FBI experience, FBI background, and give two examples of how he or she meets each of the required qualifications. 
The official from the Executive Development and Selection Program said that the quality of work experience, rather than quantity, is emphasized in the promotion process. There is no baseline for the number of investigations or arrests that an agent must demonstrate. There is no place provided on the application for investigation or arrest statistics, though officials said that an agent could provide these in the narrative should he or she choose to do so. For example, an agent could indicate that he or she demonstrated leadership by being the agent on 25 investigations resulting in 52 convictions; the agent could also discuss a specific investigation as an example of leadership. Agent performance is evaluated annually on seven critical elements (which do not include investigation and arrest statistics) using a meets or does not meet expectation system. A narrative justification is required only if the agent does not meet expectations. The critical elements include the following: investigating, decision making, and analyzing; organizing, planning, and coordinating; relating with others and providing professional service; acquiring, applying, and sharing job knowledge; maintaining high professional standards; and communicating orally and in writing. An officer of the FBI Agents Association confirmed that investigation and arrest statistics are not used in the performance appraisal process. The official from the Executive Development and Selection Program told us that a noteworthy accomplishment might be used as the basis for a special award. The FBI uses its awards program to motivate employees to increase productivity and creativity. To receive an award, an agent must be shown to have significantly exceeded the requirements of his or her position. Field offices can give awards differently. One office might give an award for outstanding performance on one investigation. Another office might give an award based on sustained success—for example, continuous, outstanding performance with making arrests or obtaining convictions. The U.S. Postal Inspection Service’s (USPIS) mission is to protect the U.S. Postal Service, its employees, and customers from criminal attack and to protect the nation’s mail system from criminal misuse. In fiscal year 2003, USPIS had 1,955 Postal Inspectors operating in 18 field divisions in the United States with a budget of $521.7 million. USPIS enforces over 200 federal laws in investigations of crimes that may adversely affect or fraudulently use the U.S. mail, the postal system, or postal employees. USPIS cases involve crimes that have a nexus to the Postal Service. For example, postal-related violations including mail theft, identity fraud, child exploitation, illegal drugs, and money laundering are investigated by USPIS. A USPIS investigation is referred to as a “case.” To open a case, an inspector fills out a request for case, Form 623. A Form 623 must be approved and signed by a supervisor before a case is opened. The case is then entered into the Inspection Service Database Information System (ISDBIS). In the narrative section of Form 623, inspectors at their discretion may note whether the case is a joint investigation and whether any other agencies are involved; however, indicating whether a case is a task force operation or whether other agencies are involved is optional. 
A data entry operator will enter this information into the narrative section of the case in ISDBIS, which cannot currently be retrieved to identify USPIS cases alone or USPIS as part of a joint operation. A USPIS official in the Information Technology Division, however, told us that USPIS plans to launch a new database, which will be fully operational in Fall 2004 and will identify cases as joint task force operations when applicable and the other law enforcement agencies involved. However, it would still be optional for inspectors to indicate whether a case is a joint task force operation. The new system will be able to run reports listing how many cases are joint task force cases and identify the agencies that are participating. For example, a report could be generated naming all cases USPIS is working on with the FBI. USPIS provided us with numbers of cases closed from fiscal years 1998 to 2001, as shown in figure 6. According to USPIS’s Inspection Service Manual, inspectors are to claim (i.e., count) an arrest when an inspector participated personally in making an arrest or contributed significantly to an investigation resulting in an arrest made by another law enforcement agency; an inspector’s investigative efforts with another law enforcement agency motivate and materially contribute to the identity and arrest of a person for a postal crime even though the inspector was not present at the time; or an inspector’s investigation of a postal offense develops additional, significant evidence that is brought to the prosecutor’s attention. The determination of material contribution is left to the supervisor’s judgment. For example, if a postal inspector alerted the highway patrol to an individual suspected of mail theft, or if the inspector was conducting an ongoing investigation on the suspect and the highway patrol made the arrest, the arrest would be claimed by USPIS even though postal inspectors did not make the physical arrest. An inspector fills out a Case Activity Report (CAR) to report case statistics and to summarize significant developments in the case, including arrests. A supervisor must approve each CAR and the arrests before entering into ISDBIS. After a CAR is submitted and entered into ISDBIS by a data entry operator, a Case Summary Report is printed and sent to the originating inspector to be verified for accuracy. ISDBIS tracks the number of arrests counted by USPIS. Arrests made as a result of joint operations are counted the same as those that result from investigations involving USPIS only. ISDBIS currently cannot sort arrests made as a result of joint operations and those in which only USPIS was involved. According to an official in the Information Technology Division, the new database, scheduled to be implemented early next year, can separate arrests resulting from joint operations from those involving USPIS alone. USPIS provided us with the total number of arrests made from fiscal years 1998 to 2001 as shown in figure 7. Prior to fiscal year 2003, USPIS had an Office of Inspections Division that had overall responsibility for conducting quality assurance reviews of field divisions and headquarters groups/divisions. As part of a USPIS reorganization in fiscal year 2003, the quality assurance review responsibility was assigned to the Strategic Planning and Performance Management Group at USPIS headquarters. 
A USPIS official in the Strategic Planning and Management Group told us that the group is to review each field office every 3 years for compliance with USPIS policies and procedures. In addition, according to a Deputy Chief Inspector, USPIS field offices are to conduct annual comprehensive self-assessments, and the contents of case files are to be reviewed for accuracy during that process. For example, arrests counted and hours worked on an investigation are two items that are reviewed for accuracy. The official said that overall, USPIS has found minimal problems with the accuracy of its case files and has not found any problems with a specific USPIS field division counting incorrect case and arrest statistics. A Deputy Chief Inspector and an Assistant Chief Inspector told us that double counting investigation and arrest data is acceptable and vital in showing the results of USPIS efforts. For example, the officials said that if arrests generated by a joint operation could be claimed by only one of the involved agencies, turf battles would result. In addition, one agency may end up "looking better than another." This would result in law enforcement agencies refusing to work with one another, and there would be no more task forces, according to the officials. The officials also said that task forces are needed because cases are often complex in nature, and joining forces with other law enforcement agencies streamlines, economizes, and makes operations more efficient. A Deputy Chief Inspector and an Assistant Chief Inspector told us that Inspectors in Charge of field divisions determine where resources are needed by using several indicators, including case and arrest statistics. They told us that a "briefing book" is prepared monthly using case and arrest data from ISDBIS. The briefing book contains an overview of each functional area (i.e., fraud, dangerous mailings, and child exploitation) and an analysis of whether USPIS is meeting its goals. The data in the briefing book are shared with all field offices. The officials said that looking at indicators and determining where employees are assigned is a part of their management system. A Deputy Chief Inspector also told us that investigation and arrest statistics are used as indicators in performance measurement and planning. The performance plan provides the basis for performance agreements, or field division contracts, and an annual performance report. The performance plan is divided into operational objectives that support USPIS's strategic goals. Each operational objective has one or more indicators, which measure how closely USPIS met the objective, and a target to meet in the upcoming fiscal year. Each field division is evaluated on its results as measured against its objectives for each fiscal year; for example: Operational objective: Identify and resolve domestic and international in-transit mail theft. Indicators: Major domestic and international airport mail theft problems resolved (arrests count towards resolving mail theft cases). Target: Thirty major domestic and international airport mail theft problems resolved. (A simple illustration of comparing a division's results against such a target appears below.) USPIS also issues an annual report of investigations, which documents the fiscal year's activities, to the Postmaster General, the Postal Service Board of Governors, and postal managers and employees. The report details USPIS's investigative activities in criminal areas, such as mail theft and robbery. The numbers of arrests and convictions in each criminal area are also listed in the report.
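To illustrate how a field division's results might be compared against the operational objective, indicator, and target structure described above, the following minimal sketch uses hypothetical values; the year-end result shown is not an actual USPIS figure.

```python
# Hypothetical representation of one operational objective from a field division
# performance plan; the result value is illustrative only.
objective = {
    "objective": "Identify and resolve domestic and international in-transit mail theft",
    "indicator": "Major airport mail theft problems resolved",
    "target": 30,
    "result": 27,
}

met = objective["result"] >= objective["target"]
share = 100 * objective["result"] / objective["target"]
print(f"{objective['indicator']}: {objective['result']} of {objective['target']} "
      f"({share:.0f}% of target) - {'met' if met else 'not met'}")
```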
USPIS officials also told us that data on an inspector's numbers of investigations and arrests are not required for making promotion, bonus, and award determinations. The knowledge, skills, and abilities requirements for promotion to the GS-13 and GS-14 levels, for example, do not list "number of arrests" as a competency. The officials said that criteria for promotion to the manager level, GS-15, are based on competencies, including customer focus, interpersonal skills, problem identification and analysis/decision making, supervisory/management skills, strategic leadership, planning and organizing, project/program management, oral communication, written communication, ability to work autonomously, and ability to be flexible in diverse situations. USPIS officials said that awards and bonuses are usually given for performances above and beyond normal expectations, not just for making arrests. An inspector or team, for example, that makes a large number of arrests as the culmination of an investigation could receive an award for the arrests, according to the officials. Concerning inspector performance measurement, a Deputy Chief Inspector and an Assistant Chief Inspector said that investigation and arrest statistics are one of many indicators of an individual's performance. Performance management focuses on linking an inspector's goals to national goals, rather than on arrest quotas. Team leaders in the field are responsible for helping individual inspectors set goals for the year through creating a performance achievement plan. A Deputy Chief Inspector told us that case and arrest data are not used in USPIS's budget process. USPIS's budget is historically based—meaning it is based on the previous year's budget. It is dependent on U.S. Postal Service finances and is not subject to the congressional appropriations process. The U.S. Customs Service was responsible for ensuring that all goods and persons entering and exiting the United States did so legally and was to, among other things, assess and collect Customs duties, excise taxes, fees, and penalties due on imported merchandise; interdict and seize contraband, including narcotics and illegal drugs; and protect American business and labor and intellectual property rights by enforcing U.S. laws intended to prevent illegal trade practices. To accomplish its mission, in fiscal year 2002, Customs had a workforce of over 20,754 employees, 3,031 of whom were Special Agent Criminal Investigators. Customs' budget was $3.6 billion in fiscal year 2002. From fiscal years 1998 to 2001, the terms investigations and cases meant the same thing and were initiated by special agents working from information received from various sources, tips, or confidential informants. Cases could also have been initiated by other federal, state, and local law enforcement agencies requesting assistance or through joint task force operations. Customs could also have requested other agencies' assistance on its cases, and most cases did involve other federal, state, and local law enforcement agencies, according to Customs Office of Investigation officials. Customs officials also said that in fiscal years 1998 to 2001, cases were entered into the Case Management System within the Treasury Enforcement Communications System (TECS) when agents, with a first line supervisor's approval, originated them. A case number was created, which included the office identifier (e.g., Los Angeles), the type of case (e.g., money laundering, narcotics, etc.), and information on how the case got started and who originated it.
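The case-opening elements just described can be sketched as a simple record; the structure, field names, and case-number composition below are illustrative only and do not represent the actual TECS schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CustomsCase:
    """Illustrative sketch of the case-opening elements described above.

    The field names and the case-number composition are hypothetical.
    """
    office: str                     # office identifier, e.g., "Los Angeles"
    case_type: str                  # e.g., "money laundering", "narcotics"
    origin: str                     # how the case got started
    originating_agent: str          # who originated it
    date_opened: date = field(default_factory=date.today)
    other_agencies: list[str] = field(default_factory=list)  # noted in the case summary only

    @property
    def case_number(self) -> str:
        # Hypothetical composition combining office, case type, and opening date.
        return f"{self.office[:2].upper()}-{self.case_type[:3].upper()}-{self.date_opened:%Y%m%d}"

case = CustomsCase("Los Angeles", "narcotics", "confidential informant tip", "J. Doe",
                   date_opened=date(2001, 3, 15), other_agencies=["DEA"])
print(case.case_number)  # LO-NAR-20010315
```

A real case record would carry many more data elements than this sketch shows; the point is simply that other-agency participation sat in a narrative field rather than a field used for generating statistics.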
Other federal agency participation was usually mentioned in the case summary, but was not identified in a separate TECS field used for generating case statistics, according to the officials. As shown in figure 8, Customs provided us with the numbers of cases closed between fiscal years 1998 and 2001. According to Customs officials, Customs agents could have counted an arrest if they physically made the arrest; assisted in making the arrest; or discovered the violation, but the arrest was made by other law enforcement officers. During fiscal years 1998 to 2001, numbers of arrests were captured in TECS when agents filled out Reports of Investigation and entered the details into the system. Customs officials said that agents were to provide details of their participation in the arrests in the case file. The narrative was to be reviewed and approved by a first line supervisor before the arrests would be counted. TECS data fields required agents to record whether they were the arresting officers, and if a Customs agent was not the arresting officer, then the arresting officer's name and agency were to be input into the system. Customs agents would not count as an arrest the stopping or detaining of an individual for questioning. Numbers of arrests were not formally audited for questionable claims, and it was up to the first and second line supervisors to check the integrity of the investigations and subsequent arrests that were counted. Figure 9 shows numbers of arrests and agencies that were involved in the arrests. Customs officials said that TECS data entry systems were designed to preclude two or more Customs offices from claiming the same investigations or arrests, so there was no double counting within Customs. Whether other agencies counted investigations and arrests that they worked on with Customs was not clear to the officials, but they assumed that they did, and rightfully so. The officials said that cooperation and trust in working together would be destroyed if participating agencies were not allowed to count investigations and arrests in which they participated. Customs officials told us that from fiscal years 1998 to 2001, investigation and arrest statistics were used as one way to measure productivity. The officials said, however, that arrests were only one element in a number of performance management measures, including numbers of cases, seizures, indictments, prosecutions, and successful wiretaps, for example, that were used to gauge an office's or agent's performance throughout the year. Officials said that numbers of arrests were reported from the field to headquarters twice a year as one way of showing what had been accomplished. The statistics were readily available, according to the officials, but special agents were not told that they did not have enough arrests or would need to increase the number of their arrests. According to Customs Human Resources officials, investigation and arrest statistics were not used for special agent promotion purposes. The officials said that promotions up to the journeyman level were based primarily on the recommendation of first line supervisors. The journeyman level was the GS-12 level through 2000 but was raised to the GS-13 level beginning in 2001. Before 2000, promotions beyond the journeyman level were competitive and were based on applicants responding to a series of set questions regarding the type, complexity, and variety of investigations, not the quantity of investigations and arrests.
The questions were weighted, a score was generated, and a roster of applicants eligible for promotion was developed. Beginning in 2000, Customs initiated the SA14 Promotion Test System. This system was for promotion to the GS-14 level and included three tests—critical thinking skills, job knowledge, and an assessment of administrative and planning skills. Once the applicants had passed these tests, they were further assessed through a structured interview, which assessed additional leadership skills through situational questions about how the applicants would handle various hypothetical scenarios. Customs officials said that promotion to GS-15 was based on a merit promotion system, which used knowledge, skills, and abilities developed for the specific position. Also in fiscal years 1998 to 2001, Customs officials said that investigation and arrest statistics were used to some degree in award and bonus decisions, but so were other factors, such as successful court appearances and prosecutions. The Immigration and Naturalization Service's (INS) primary mission was to administer and enforce the nation's immigration laws. Among other things, INS activities included determining the admissibility of persons seeking to enter the United States through an inspections process, facilitating entry processing and granting immigration-related benefits, patrolling the borders, deterring and investigating illegal employment, providing information to employers and benefit providers to prevent illicit employment or benefit receipt, and disrupting and dismantling organizations engaging in document and benefit fraud and alien smuggling. In addition, INS apprehended, detained, and removed aliens present in the United States without lawful status and/or those who had violated U.S. criminal laws. As individual aliens engaging in criminal activity and organizations facilitating illegal immigration were often associated with other criminal activity, INS also played a role in enforcing U.S. criminal laws. To perform its mission, in fiscal year 2002, INS had a total of 36,117 employees with a budget of $6.2 billion. The mission was accomplished through INS's operational offices located on the border, in the interior, and overseas and through numerous special facilities (e.g., detention centers, applications processing centers, and a national records repository) throughout the United States. INS's Investigations Division was the enforcement arm of the INS charged with investigating violations of the criminal and administrative provisions of the Immigration and Nationality Act and other related provisions of the United States Code. For INS, the investigative case process began with the receipt of a complaint or other lead by the Investigations Division that provided a "reason to believe" that a violation of law may have occurred. An investigation could have been opened as either a preliminary or a full field investigation. In either case, supervisory approval was required to initiate an investigation. A preliminary investigation was opened when a lead or allegation was not sufficient to warrant a full investigation. In those instances, limited investigative activities would have been conducted solely for the purpose of providing enough additional information on which to make an informed judgment as to the appropriate disposition of the matter at hand. Preliminary inquiries were ordinarily assigned for a period not to exceed 30 days.
At the end of that period, a decision was to be made whether to close the investigation without further action, extend the inquiry for no more than an additional 30 days, or assign the matter for a full field investigation. A full field investigation may have been opened on the basis of sufficient, articulable facts that were in existence at the time of initial review, developed during the conduct of a preliminary inquiry, or assigned as a headquarters-designated case. Full field investigations consisted of all investigative or enforcement activities necessary to bring an investigation to its logical conclusion. Under INS’s Investigations Case Management System (ICMS), a Form G-600 was prepared when an investigation was opened. The G-600 was basically an index card used to track and document the progress or termination of investigations. Information about the investigation, such as the case number, date opened, agent assigned, etc., was initially recorded on the G-600. Additional information would have been added to the G-600 as the investigation progressed. First line supervisors maintained the G-600s for the investigations by agents in their units. Each investigative unit (e.g., field office, port of entry, or border patrol office) prepared a detailed monthly report, called an Investigations Activity Report of Field Operations (G-23 Report). The G-23 Report was a record of the number of cases opened or completed, the number of hours worked, and the results of the investigations. The report was a matrix of rows and columns, with the columns showing the number of cases received, opened, completed, etc., and the rows showing the category of cases, for example, trafficking, criminals, or employers. Hard copies of the G-23 Reports were maintained at the unit level and required supervisory signature. At the beginning of each month, the data from the G-23s for the previous month were keyed into the Performance Analysis System (PAS). Each office was to close out its monthly statistical reporting on the last working day of each month. They then had 8 working days to consolidate the unit workload counts into office level totals and key the data into PAS. After the eighth day, the PAS system was to be locked down and no further data entry would have been possible by the field offices. The data were strictly numbers of activities and did not identify individual investigations or agents. As shown in figure 10, INS provided us with the number of investigations opened and closed for fiscal years 1998 through 2001. According to INS’s manual, The Law of Arrest, Search, and Seizure for Immigration Officers, an arrest occurred when a reasonable person in the suspect’s position would conclude that he or she was under arrest. An arrest did not depend solely on whether the officer had announced that the suspect was under arrest. An arrest was to be supported by probable cause to believe that the person had committed an offense against the United States. Probable cause is knowledge or trustworthy information of facts and circumstances that would lead a reasonable, prudent person to believe that an offense had been committed or was being committed by the person to be arrested. An INS officer was authorized to make arrests for both administrative (civil) and criminal violations of the Immigration and Nationality Act. According to INS officials, a Form I-213, Record of Deportable Alien, was to be completed to record administrative arrests, which were the bulk of INS arrests. 
A Form G-166, Report of Investigation, was to be completed when a criminal arrest was made. Supervisory review and signature at the bottom of the forms verified that the arrest occurred. Arrests were counted and recorded in the PAS system in the same manner as the investigations. That is, at the end of each month, a manual count of the arrest forms would be made, and then support staff would enter the data into PAS. Officials told us that INS's Office of Internal Audits conducted reviews of investigation files. Each district office was to be reviewed about every 3 years. For each district, a representative number of investigations would be reviewed to determine whether INS policies and procedures were followed and whether all investigative documentation was complete. While the purpose of these reviews was to ensure that INS procedures were followed, an INS official said that the review could be considered verification that the documentation of the arrests was valid and proper. INS officials told us that, after the data had been entered into PAS, INS's Operational Analysis Branch (OAB) printed out a Monthly Statistics Report, which was distributed to INS upper level management. OAB also printed a Workload Summary Report on a quarterly basis. According to INS officials, the reports were an accounting of the work INS performed and could have been used to assist in making resource allocation and staffing decisions. The officials said that the data could also have been reviewed to see if there were any trend indicators about shifts in criminal activity. INS Human Resources officials said that INS employee performance evaluations were not based on investigation and arrest statistics. Rather, employees were evaluated on how well they performed their jobs. INS agents were evaluated annually, and their supervisor wrote up narratives about how well an agent was performing. There were no set job elements that had to be covered, and the supervisor determined what should be evaluated and how well the agent was performing. Concerning promotions, the Human Resources officials said that promotions up to the journeyman level were based primarily on the recommendation of first line supervisors. Promotions beyond the journeyman level were competitive and based on a scored assessment, which covered the following four critical factors: Job experience: This factor was worth 30 percent and described the assignments the individual had had and other collateral duties. Decision making: This factor was worth 30 percent and tested the individual's decision-making process and problem solving abilities by asking a series of questions about hypothetical situations. In-basket job simulation: This factor was worth 20 percent and tested the individual's administrative skills in organizing work, setting priorities, delegating work, etc. Individuals were given 45 minutes to review a series of documents and then given 45 minutes to answer 50 questions about how to deal with certain events on the basis of the documents they reviewed. Managerial writing: This factor was worth 20 percent and tested the individual's writing skills and knowledge of proper grammar, syntax, paragraph structure, and report organization. (A sketch illustrating this weighted scoring appears below.) INS provided, via the Department of Justice, budget requests to Congress each fiscal year. We reviewed INS's budget requests for fiscal years 2003 and 2002 to determine the extent, if any, to which investigation and arrest statistics were used as justification for increased resources.
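To illustrate the weighted scoring described above, the following minimal sketch combines the four factors using their stated weights; the raw scores and the 0-100 scale are hypothetical, since the report does not state how factor scores were expressed.

```python
# Weights for the four critical factors described above; the raw scores and the
# 0-100 scale are hypothetical.
weights = {
    "job_experience": 0.30,
    "decision_making": 0.30,
    "in_basket_simulation": 0.20,
    "managerial_writing": 0.20,
}

candidate_scores = {
    "job_experience": 85,
    "decision_making": 78,
    "in_basket_simulation": 90,
    "managerial_writing": 70,
}

composite = sum(weights[factor] * candidate_scores[factor] for factor in weights)
print(f"Weighted composite score: {composite:.1f}")  # 80.9 for these inputs
```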
Investigation and arrest statistics were not used, in either table or narrative form, as a basis for justifying an increase in resources. Investigation and arrest activities were discussed, but only in the very broadest terms, for example:

“Although an eventual reduction in arrests is a primary indicator of illegal entry attempts (and therefore deterrence), other critical indicators include decrease in border related crime, decrease in recidivism, shifting of illegal activity to non-traditional points of entry and through non-traditional methods, increase smuggling fees, increase in property values and commercial and public development along the border, etc.”

“INS will initiate high priority investigations, conduct asset seizures, and present individuals for prosecution for alien smuggling related violations to disrupt the means and methods that facilitate alien smuggling.”

“As a result of INS efforts, many alien smugglers, fraud organizations, and facilitators were arrested and presented for prosecution; assets were seized; and aliens with a nexus to organized crime, violent gangs, drug trafficking gangs, or who have terrorist-related affiliations, were apprehended.”

The U.S. Marshals Service’s mission is to protect the federal courts and the judicial system, apprehend federal fugitives, and manage seized assets. Regarding federal fugitives, the Marshals Service’s responsibilities are to locate and arrest federal fugitives, including prison escapees, bail jumpers, and parole and probation violators; enforce bench warrants issued by federal judges and warrants issued at the request of other federal agencies; and serve as the “booking agent” for suspects arrested for federal offenses. To perform its mission, in fiscal year 2002, the Marshals Service had a total of 4,134 employees, of which about 2,700 were U.S. Marshals and Deputy U.S. Marshals. The Marshals Service’s budget was $676.5 million in fiscal year 2002. The Director, Deputy Director, and 94 U.S. Marshals direct the activities of 95 district offices and personnel stationed at more than 350 locations throughout the 50 states, Guam, Northern Mariana Islands, Puerto Rico, and the Virgin Islands.

For the Marshals Service, an investigation consists of locating and arresting a federal fugitive. The Marshals Service initiates fugitive investigations in response to two basic scenarios. In the first, an individual has already been in the federal criminal justice system and has subsequently become a fugitive. The fugitive may have failed to make a court appearance, escaped from custody, or violated the terms of parole or supervised release. In each of these instances, the court issues a warrant and the Marshals Service is responsible for investigating, apprehending, and arresting the fugitive. In the second scenario, another law enforcement agency has investigated an individual, the individual has been indicted and a warrant issued, and the agency requests that the Marshals Service make the apprehension. Unlike the investigations of other law enforcement agencies, which focus on the commission of a crime, Marshals Service investigations primarily consist of locating (tracking down) and arresting federal fugitives. The Marshals Service uses its Warrant Information Network to track the number of fugitive warrants received and closed. The network is a computer-based automated system that manages records and information collected during investigations of fugitives.
The system can also provide data for analyses that are used to report information to Congress or for management purposes, for example, to provide a listing of active warrants for a specific offense or for a district or suboffice. Figure 11 shows the number of warrants closed for fiscal years 1998-2001. The Marshals Service takes custody of all federal prisoners arrested by all federal officials empowered to make arrests. The Marshals Service Prisoner Tracking System (PTS) maintains a record of all suspects arrested for federal offenses and booked by the Marshals Service. The Marshals Service claims arrests in its workload statistics if a deputy marshal actually makes the arrest on the basis of a federal fugitive warrant. If another law enforcement agency brings a prisoner to the Marshals Service for booking, that agency would be recorded as the arresting agency in the PTS.

When either a deputy marshal or an agent from another federal agency, such as the DEA, FBI, or Customs Service, presents a federal prisoner for booking by the Marshals Service, the following procedures apply: A Marshals Service Form 312 (Prisoner Personal History) is filled out, and the prisoner is fingerprinted and photographed. The form contains background information on the prisoner, the charges, the case number, and which agency brought the prisoner in for booking. The agent (e.g., a DEA agent) who fills out the Marshals Service Form 312 is called the lead agent, and that agency will be credited with the arrest in the PTS. Only one agency is listed as the arresting agency even though many agencies may have participated in a joint operation through a task force. There is a space for indicating whether the case was a joint operation, but completing it is not mandatory. The Form 312 information is entered into the PTS.

Marshals Service officials told us that investigation and arrest statistics are used as workload measures; for example, to show how many prisoners were produced for court appearances. With these statistics, they said that they could show workload projections to justify budget requests. The officials also said that investigation and arrest statistics are used to manage programs, set policies, and allocate funds and positions. For example, on the basis of an assessment of workload statistics, the number of positions at a particular courthouse was decreased in fiscal year 2002. Marshals Service officials also said that investigation and arrest statistics are not used for making promotion, bonus, or award decisions. Criteria that are considered for promotions, for example, include time in grade, technical knowledge, analytical/problem-solving ability, time management, and interpersonal relationships. For higher grades, management skills—including organization and planning, budget management, and human resource management—are also considered for promotions.

We found Operation Marquis on the Drug Enforcement Administration’s (DEA) Web site when we searched for drug trafficking investigations involving multiple federal law enforcement agencies. The Web site indicated that Operation Marquis was coordinated by DEA’s Special Operations Division (SOD)—a joint program of the Department of Justice (Justice), DEA, the Federal Bureau of Investigation (FBI), the U.S. Customs Service (Customs), and the Internal Revenue Service—and was conducted in 1999, 2000, and 2001. Attorneys from Justice’s Criminal Division, and agents and analysts from participating law enforcement agencies, staffed the investigation.
Operation Marquis targeted a Mexico-based drug trafficking organization responsible for putting tens of millions of dollars’ worth of cocaine and marijuana on the streets of at least a dozen U.S. cities. According to DEA’s Web site, over 300 individuals were arrested as a result of the operation. In addition to arrests, the investigation resulted in the seizure of 8,645 kilograms of cocaine, 23,096 pounds of marijuana, 50 pounds of methamphetamine, and $13 million in U.S. currency. On DEA’s Web site, several Justice and Customs officials commented on the success of Operation Marquis, including:

“These law enforcement activities will have a measurable impact on drug trafficking across our Southwest Border. The work completed in this case emphasizes the importance of interagency cooperation in targeting and investigating drug trafficking organizations.”—from an FBI assistant director.

“This investigation demonstrates what can be achieved when law enforcement efforts are coordinated and resources are pooled. Operation Marquis shut down a sprawling criminal network that plagued communities throughout the country.”—from the then-Acting Customs Commissioner.

We asked DEA and the FBI to provide us with the names of individuals that they had counted in their arrest statistics for Operation Marquis. Each agency independently generated a list of the individuals it counted as having been arrested as part of Operation Marquis. Specifically, DEA counted 331 arrests and the FBI counted 154 arrests. After comparing the lists, we were able to match eight names as having been counted as arrests by each agency. We also asked DEA and FBI officials whether they had been designated as the lead or assist agency, how many special agents they had assigned throughout the investigation and at various times during the progress of the investigation, and whether their agents were physically present during the arrests. In addition, we asked if the amount of time their agents spent on the investigation could be determined.

DEA officials told us that the investigation was initiated through its SOD and that DEA was the lead agency. DEA told us that there were about 46 lead criminal investigators assigned to Operation Marquis; however, because DEA’s statistical data systems do not record such information, DEA officials could not tell us whether DEA agents were present at the arrests counted by DEA. A DEA official, however, did tell us that agents logged 217,937 work hours on the investigations that made up Operation Marquis. FBI Criminal Investigation Division officials also told us that Operation Marquis was a DEA-initiated investigation and that DEA was the lead agency. Seven FBI field offices were involved: San Antonio, Houston, Dallas, Memphis, Charlotte, Kansas City, and Little Rock. Because their statistical data systems do not record such information, the FBI officials said that there is no way to determine exactly how many agents participated in some manner in Operation Marquis. However, they said SOD investigations typically have two FBI staff coordinators assigned to an investigation. In addition, each field office typically assigns one lead agent, who works an investigation full-time, and possibly a co-case agent, who would work the investigation part-time. There also could have been several agents in any given field office working on or assisting with parts of the investigation on a part-time basis, for example, helping to set up or monitor a wiretap.
Also, because their statistical data systems do not record information by operation, FBI officials said that there is no way to tell how many agents participated in the physical arrest of individuals for Operation Marquis. Usually, at least one or two FBI agents make or assist in an arrest, but the officials could not tell us the role played by their agents with regard to any of the individual arrests. Moreover, FBI officials could not tell us how many hours agents spent on the investigation because agents do not record their time by case number, but rather by the type of work performed, such as bank robbery. We asked DEA and FBI officials whether, as a result of the numbers of arrests, special agents were given awards, promotions, bonuses, etc. DEA officials said that they did not know how many, if any, agents received awards, promotions, or bonuses for their work on Operation Marquis. The FBI Criminal Investigation Division officials said they knew of no awards, promotions, or bonuses given as a result of Operation Marquis.

A U.S. Customs Service (Customs) Special Agent in Lake Charles, Louisiana, initiated Operation Bayou Blaster on October 1, 1994. The Customs agent developed a plan to set up an undercover operation that targeted individuals involved in the sexual exploitation of children via the Internet. Customs and U.S. Postal Inspection Service (USPIS) officials told us that a fake child pornography Web site was used to arrest individuals who ordered material from the Web site. USPIS was asked to participate by making some, but not all, of the deliveries of the pornographic material. All operational and undercover activity related to Operation Bayou Blaster ceased on February 23, 2001. Customs arrested 100 individuals through the efforts of more than 400 special agents who were involved at various times over the 6-year duration of the operation.

We asked Customs and USPIS to provide us with the names of individuals included in agency arrest statistics for Operation Bayou Blaster. Customs provided us with a list of the 100 individuals that were arrested, but USPIS was not able to generate a list of arrests that resulted from Operation Bayou Blaster. However, at our request, USPIS crosschecked its database against the 100 names provided by Customs. USPIS was able to match 30 arrestees in its database to the Customs list on the basis of identical dates of birth, year arrested, and location arrested. (An illustrative sketch of this kind of list matching follows this discussion.) USPIS officials noted that USPIS was asked to assist on only 30 of the deliveries.

We also asked Customs and USPIS officials whether they had been designated as the lead or assist agency, how many special agents they had assigned throughout the operation and at various times during the progress of the operation, whether their agents were physically present during the arrests, and what roles their agents played in the arrests. In addition, we asked if the amount of time their agents spent on the operation could be determined. Customs officials said that Customs was the lead agency for the operation. They told us that many of the over 400 agents involved over the more than 6 years of the operation participated in the arrests. However, because Customs’ statistical data systems do not capture such information, they were unable to tell us whether the agents were physically present at the arrests or exactly what their roles in the arrests were. The officials provided a list showing that Customs agents had charged over 78,000 hours to this operation.
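The report describes two of these list comparisons—DEA/FBI for Operation Marquis and Customs/USPIS for Operation Bayou Blaster—but not the mechanics of how the lists were matched. The sketch below is a minimal illustration only, assuming each agency’s list can be reduced to a few shared identifying fields; the record layout and field names are hypothetical and are not drawn from any agency’s actual data system.

# Illustrative only: cross-match two agencies' arrest lists on shared
# identifying fields. Record layout and field names are hypothetical.
from typing import NamedTuple

class ArrestRecord(NamedTuple):
    name: str
    date_of_birth: str      # e.g., "1960-05-14"
    year_arrested: int
    location_arrested: str

def match_key(record: ArrestRecord) -> tuple:
    # Match on date of birth, year arrested, and location arrested --
    # the fields cited in the Customs/USPIS comparison above.
    return (record.date_of_birth, record.year_arrested,
            record.location_arrested.strip().lower())

def shared_arrests(list_a, list_b):
    # Return records from list_a whose key also appears in list_b,
    # i.e., arrests counted by both agencies.
    keys_b = {match_key(r) for r in list_b}
    return [r for r in list_a if match_key(r) in keys_b]

# Example with made-up records: the names differ slightly, but the
# records still match on the three identifying fields.
customs = [ArrestRecord("J. Doe", "1960-05-14", 1999, "Lake Charles, LA")]
uspis = [ArrestRecord("John Doe", "1960-05-14", 1999, "Lake Charles, LA")]
print(len(shared_arrests(customs, uspis)))   # prints 1

Keying the match on date of birth, year, and location rather than on names alone mirrors the Customs/USPIS comparison described above, in which name spellings could differ from one agency’s records to another’s.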
USPIS officials told us that Customs was the lead agency for the operation, and USPIS had no lead special agent for Operation Bayou Blaster. The officials said that Customs routinely calls on USPIS to assist in the delivery of pornographic materials. A USPIS official said 20 inspectors were involved with the 30 arrests claimed by USPIS; however, because USPIS’s statistical data systems do not capture such information, the official was unable to say whether the inspectors were physically present at the arrests or what role the inspectors played in the arrests. The official told us that inspectors logged over 1,700 hours on Operation Bayou Blaster. We asked Customs and USPIS officials whether, as a result of the arrests, special agents were given awards, promotions, bonuses, etc. As far as Customs officials knew, there were no awards, promotions, or bonuses given to the agents as a result of the operation. The officials also said that with the large number of Customs agents involved in the operation, it would be difficult to determine whether any awards, bonuses, or promotions were a direct result of the operation. USPIS officials could not tell us if inspector participation in Operation Bayou Blaster resulted in any awards, bonuses, or promotions.

We discussed a closed counterterrorism operation with Federal Bureau of Investigation (FBI) and Immigration and Naturalization Service (INS) officials. Between October 11, 2001, and April 17, 2002, the Joint Terrorism Task Force (JTTF) in New Orleans conducted this operation based on information that a telephone number associated with one subject in the United States had been contacted by a pay phone known to be used by the Taliban/al Qaeda in Afghanistan. Shortly after the operation began, an INS agent assigned to the JTTF was asked to obtain INS files on six subjects to determine their immigration status in the United States. The files showed that the six subjects had been released from INS custody while seeking asylum. As a result of reports of suspicious activity, the INS district director decided to revoke their paroles, and on January 12, 2002, the six individuals were taken back into custody while their asylum applications or appeals were pending. At the conclusion of their cases, four of the individuals were removed from the country, and, as of February 6, 2004, two were being held for removal.

We asked the FBI and INS to provide us with the names of individuals included in agency arrest statistics for this investigation. The FBI provided us with a list of the six individuals whose arrests were included in its statistics. INS provided us with a list of the six individuals, but told us that even though these six individuals were taken into custody, INS did not include them in agency arrest statistics because INS had previously arrested these individuals in 1999; these arrests were counted for statistical purposes at that time. Comparison of the lists of names provided by the FBI and INS revealed that the six individuals arrested by INS in 1999 and taken back into custody in 2002, and the six individuals counted as arrests by the FBI in 2002, were the same six individuals. While apprehending these individuals, FBI and INS agents encountered and arrested two other aliens whose immigration documents were no longer valid. INS counted these two subjects as arrests for, respectively, a nonimmigrant overstaying his or her visa and illegal entry into the United States.
We asked FBI and INS officials whether they had been designated as the lead or assist agency, how many special agents they had assigned throughout the operation and at various times during the progress of the operation, and whether their agents were physically present during the arrests. In addition, we asked if the amount of time their agents spent on the operation could be determined. FBI Counterterrorism Division officials told us that both the FBI and INS initiated the operation, with the FBI taking the lead in the operation. FBI officials said two FBI agents were assigned full-time and numerous others helped with such matters as surveillance. The officials could not tell us how many hours agents spent on the operation, since agents do not record their time by operation, but rather by the type of work performed, such as bank robbery. INS officials also told us that the FBI initiated the operation and that INS became involved when the FBI asked INS agents on the JTTF for their assistance. INS officials told us that no INS agents were specifically assigned to the operation; one of the two INS agents in the JTTF assisted in this operation. After the apprehensions on January 12, 2002, the INS agent assisted the FBI in the continued operation—checking the status of other aliens whose names appeared in records, reviewing INS files, assisting with interviews, etc. INS officials could not tell us how many hours agents spent on the operation because agents do not record their time by operation, but rather by the type of work performed.

FBI Counterterrorism Division officials said that both the FBI and INS participated in the physical arrest of the six individuals; however, they could not tell us the exact number of FBI agents present at the arrests. INS officials likewise told us that both the FBI and INS participated in the apprehension of the six individuals. Specifically, seven INS agents, two INS supervisory special agents, and four INS deportation officers participated in the apprehension, according to the officials. We asked FBI and INS officials whether, as a result of the arrests, special agents were given awards, promotions, bonuses, etc. FBI Counterterrorism Division officials said two agents received $500 awards for their performance on this operation. INS officials told us no INS agents received awards for their participation in the operation.

In addition to those named above, the following individuals contributed to this report: Tim Outlaw, Doris Page, Carolyn Ikeda, Alison Heafitz, Christine Davis, and Amy Bernstein.
The 21st Century Department of Justice Appropriations Authorization Act (P.L. 107-273) requires GAO to report on how investigation and arrest statistics are reported and used by federal law enforcement agencies. This report provides information on (1) the guidance and procedures followed by federal law enforcement agencies regarding counting investigations and arrests, (2) how investigation and arrest statistics are used, and (3) whether multiple agencies are counting and reporting the same investigations and arrests. GAO selected six agencies for review: the Drug Enforcement Administration (DEA), the Federal Bureau of Investigation (FBI), the former Immigration and Naturalization Service (INS), the U.S. Marshals Service, the former U.S. Customs Service, and the U.S. Postal Inspection Service (USPIS). Guidance and procedures for counting investigations, or cases, are generally consistent among the six agencies GAO reviewed. Agencies pursue investigations into crimes that have a nexus to their missions, such as drug trafficking for the DEA, mail theft for USPIS, and immigration violations for the former INS. Once agents have made the decision to open a case, the cases are to be reviewed and approved by a supervisor, and details of the case are then entered into the agencies' case management tracking systems. GAO also found agency guidance and procedures for counting arrests to be generally consistent among all six agencies. In addition, the agencies required supervisory review of the justifications for the arrests before they were entered into the agencies' data tracking systems and officially counted. In general, agencies use investigation and arrest statistics as indicators of agency work and as output measures in performance plans, budget justifications, and testimonies. In some cases, these data are considered in making promotion, bonus, and award determinations. However, investigation and arrest statistics are not emphasized in any of these activities but are one of many factors that are considered. All of the agencies GAO reviewed counted the same investigations and arrests when more than one of them participated in the investigative and arresting activities. This practice seems appropriate because many investigations and arrests would not have occurred without the involvement and cooperation of all the agencies that participated. If agencies were not allowed to count investigations and arrests in which they participated, agencies would be less likely to work together, cases would be much smaller, and the desired disruption of high-level criminal organizations would be hampered. The Departments of Justice and Homeland Security, and USPIS reviewed a draft of this report and generally agreed with GAO's findings. Technical comments were incorporated as appropriate.
IRS has made noticeable progress in improving taxpayer service since passage of RRA 98. While progress has also been made in the tax law enforcement and BSM areas, serious ongoing issues have kept both on our high-risk list. IRS has made meaningful progress in four key taxpayer service areas: paper and electronic processing, telephone assistance, IRS’s Web site, and walk-in assistance. Table 1 shows IRS performance in these areas since 2002. While the progress is widespread, table 1 also shows that there are some areas of performance that merit attention, especially in light of current and proposed cuts to IRS’s taxpayer service budget. In fiscal year 2005 and in its proposed 2006 budget, IRS is shifting priorities by reducing taxpayer service and increasing resources for enforcement.

As shown in table 1, electronic filing has increased while paper filing has dropped. The increase in electronic filing has allowed IRS to reduce the resources devoted to processing. As shown in figure 1, IRS reduced the staff devoted to processing paper returns between 1999 and 2004 by just over 1,100 staff years. The figure also shows that as the number of e-filed returns has increased, the number of staff years used to process those returns has not increased. The decline in paper processing staff allowed IRS to close its Brookhaven processing center in 2003. In addition, IRS is in the process of closing its paper processing operation in Memphis. In addition to saving IRS resources, electronic filing offers benefits to taxpayers in that it allows them to receive refunds faster and is less error prone. IRS employees manually transcribe paper tax return information into IRS’s computer systems, which can introduce errors.

As shown in table 1, by several measures IRS’s telephone service has improved since 2002. One measure of access, the customer service representative (CSR) level of service (the percentage of taxpayers who attempted to reach CSRs and actually got through and received service), increased from 62 percent to 83 percent. Accuracy also showed some improvement; accounts accuracy (the accuracy of answers to taxpayer questions about their accounts) exceeded 90 percent in 2005. However, taxpayers are waiting somewhat longer in 2005 to get answers than in 2002, 2003, and 2004.

IRS’s Web site is performing well. A relatively recent addition to IRS’s menu of services, the Web site first became available during the 1996 filing season. We found it to be user friendly because it was readily accessible and easy to navigate. An independent weekly study ranked it in the top 4 of 40 federal government Web sites in terms of accessibility. The site is used extensively. In the early weeks of the 2005 filing season, the IRS Web site was visited about 83 million times by users who viewed about 628 million pages and downloaded about 70.3 million forms and publications. IRS’s Web site continues to provide two very important tax service features: (1) “Where’s My Refund,” which enables taxpayers to check on the status of their refund, and (2) Free File, which provides taxpayers the ability to file their tax return electronically for free. This filing season IRS provided new functionality for “Where’s My Refund” whereby taxpayers whose refunds could not be delivered by the Postal Service (i.e., returned as undeliverable mail) could change their addresses on the Web site. Taxpayer use of IRS’s walk-in sites has decreased while use of volunteer sites has increased.
As shown in figure 2, IRS projects it will see about 3.4 million visits to its 400 walk-in sites this year, down from over 3.5 million in 2004 and about 4.3 million in 2001. Over the same period, IRS expects taxpayer visits to volunteer sites to increase to just over 2 million visits in 2005, a substantial increase over about 1.6 million visits in 2004 and fewer than 1 million in 2001. IRS continues to encourage taxpayers to use volunteer sites for return preparation. This shift is important because it transfers time-consuming services, particularly return preparation, to volunteers and allows IRS to concentrate on services that only it can provide, such as account assistance or compliance work. While it reduces the demand on IRS resources, the shift from IRS to volunteer sites has raised concerns about the quality of service provided. We and the Treasury Inspector General for Tax Administration (TIGTA) have called attention to the quality of service at both IRS walk-in and volunteer sites. IRS has separate quality initiatives under way at both IRS walk-in and volunteer sites, although data remain limited and cannot be compared to prior years.

Another concern is post-filing service to taxpayers when IRS has undertaken compliance or collection actions. An example of this is the release of federal tax liens against taxpayers’ property. IRS is required to release a federal tax lien within 30 days after the date the tax liability is satisfied or has become legally unenforceable or the Secretary of the Treasury has accepted a bond for the assessed tax, but, as we have reported for several years as part of our financial audits, most recently in November 2004, IRS has not always met this standard.

We have long been concerned about tax noncompliance and IRS efforts to address it. Collection of unpaid taxes was included in our first high-risk series report in 1990, with a focus on the backlog of uncollected debts owed by taxpayers. In 1995, we added Filing Fraud as a separate high-risk area, narrowing the focus of that high-risk area in 2001 to Earned Income Credit Noncompliance because of the particularly high incidence of fraud and other forms of noncompliance in that program. We expanded our concern about the Collection of Unpaid Taxes in our 2001 high-risk report to include not only unpaid taxes (including tax evasion and unintentional noncompliance) known to IRS, but also the broader enforcement issue of unpaid taxes that IRS has not detected. In our high-risk update that we issued in January, we consolidated these areas into a single high-risk area—Enforcement of Tax Laws—because we believe the focus of concern on the enforcement of tax laws is not confined to any one segment of the taxpaying population or any single tax provision.

Tax law enforcement is a high-risk area in part because of the size of the tax gap. IRS’s recent estimate of the difference between what taxpayers timely and accurately paid in taxes and what they owed ranged from $312 billion to $353 billion for tax year 2001. IRS estimates it will eventually recover some of this tax gap, resulting in a net tax gap of $257 billion to $298 billion. The tax gap arises when taxpayers fail to comply with the tax laws by underreporting tax liabilities on tax returns; underpaying taxes due from filed returns; or “nonfiling,” which refers to the failure to file a required tax return altogether or in a timely manner.
Tax law enforcement is also high risk because past declines in IRS’s enforcement activities threatened to erode taxpayer compliance. In recent years, the resources IRS has been able to dedicate to enforcing the tax laws have declined. For example, the number of revenue agents (those who examine complex returns), revenue officers (those who perform field collection work), and special agents (those who perform criminal investigations) decreased over 21 percent from 1998 through 2003. However, IRS achieved some staffing gains in 2004 and expects modest gains in 2005. IRS’s proposal for fiscal year 2006, if funded and implemented as planned, would return enforcement staffing in these occupations to their highest levels since 1999. Concurrently, IRS’s enforcement workload—measured by the number of taxpayer returns filed—has continually increased. For example, from 1997 through 2003, the number of individual income tax returns filed increased by about 8 percent. Over the same period, returns for high-income individuals grew by about 81 percent. IRS believes that, because of their income levels, these individuals present a particular compliance risk.

In light of declines in enforcement staffing and the increasing number of returns filed, nearly every indicator of IRS’s coverage of its enforcement workload has declined in recent years. Although in some cases workload coverage has begun to increase, overall IRS’s coverage of known workload is considerably lower than it was just a few years ago. Figure 3 shows the trend in examination rates—the proportion of tax returns that IRS examines each year—for field, correspondence, and total examinations since 1995. Field examinations are conducted face to face, while correspondence examinations are typically less comprehensive and complex, involving communication through written notices. IRS experienced steep declines in examination rates from 1995 to 1999, but the examination rate has slowly increased since 2000. However, as the figure shows, the increase in total examination rates of individual filers has been driven mostly by correspondence examinations, while more complex field examinations continue to decline.

Further, IRS’s workload has grown ever more complex as the tax code has grown more complex. IRS is challenged to administer and explain each new provision, thus absorbing resources that otherwise might be used to enforce the tax laws. Concurrently, other areas of particularly serious noncompliance have gained the attention of IRS and the Congress, such as abusive tax shelters and schemes employed by businesses and wealthy individuals that often involve complex transactions that may span national boundaries. Given the broad declines in IRS’s enforcement workforce, IRS’s decreased ability to follow up on suspected noncompliance, and the emergence of sophisticated evasion concerns, IRS is challenged in attempting to ensure that taxpayers fulfill their obligations.

On the collection front, IRS’s use of enforcement sanctions, such as liens, levies, and seizures, dropped precipitously during the mid- and late 1990s. In fiscal year 2000, IRS’s use of these three sanctions was at 38 percent, 7 percent, and 1 percent, respectively, of fiscal year 1996 levels. However, beginning in fiscal year 2001, IRS’s use of liens and levies began to increase. By fiscal year 2004, IRS’s use of liens, levies, and seizures reached 71 percent, 65 percent, and 4 percent of 1996 levels, respectively. IRS is working to further improve its enforcement efforts.
In addition to recent favorable trends in enforcement staffing, correspondence examinations, and the use of some enforcement sanctions, IRS has recently made progress with respect to abusive tax shelters through a number of initiatives and settlement offers that have resulted in billions of dollars in collected taxes, interest, and penalties. In addition, IRS is developing a centralized cost accounting system, in part to obtain better cost and benefit information on compliance activities, and is modernizing the technology that underpins many core business processes. It has also redesigned some compliance and collections processes and plans additional redesigns as technology improves. Finally, the recently completed National Research Program (NRP) study of individual taxpayers not only gives us a benchmark of the status of taxpayers’ compliance but also gives IRS a better basis to target its enforcement efforts.

IRS has long relied on obsolete automated systems for key operational and financial management functions, and its attempts to modernize these aging computer systems span several decades. Modernization has encountered a long history of continuing delays and design difficulties, and the impact of these problems on IRS’s operations led GAO to designate IRS’s systems modernization as a high-risk area in 1995; it remains so today. IRS’s current modernization program, BSM, is a highly complex, multibillion-dollar program that is the agency’s latest attempt to modernize its systems. BSM is critical to supporting IRS’s taxpayer service and enforcement goals. For example, BSM includes projects to allow taxpayers to file and retrieve information electronically and to provide technology solutions to help reduce the backlog of collections cases. BSM is also important to allow IRS to provide the reliable and timely financial management information needed to account for the nation’s largest revenue stream and better enable the agency both to determine and to justify its resource allocation decisions and congressional budgetary requests.

Over the past year, IRS has deployed initial phases of several modernized systems under its BSM program. The following provides examples of the systems and functionality that IRS implemented in 2004 and the beginning of 2005.

Modernized e-File (MeF). This project is intended to provide electronic filing for large corporations, small businesses, and tax-exempt organizations. The initial releases of this project were implemented in June and December 2004 and allowed for the electronic filing of forms and schedules for the Form 1120 (corporate tax return) and Form 990 (tax-exempt organizations’ tax return). IRS reported that, during the 2004 filing season, it accepted over 53,000 of these forms and schedules using MeF.

e-Services. This project created a Web portal and provided other electronic services to promote the goal of conducting most IRS transactions with taxpayers and tax practitioners electronically. IRS implemented e-Services in May 2004. According to IRS, as of late March 2005, over 84,000 users had registered with this Web portal.

Customer Account Data Engine (CADE). CADE is intended to replace IRS’s antiquated system that contains the agency’s repository of taxpayer information and, therefore, is the BSM program’s linchpin and highest priority project.
In July 2004 and January 2005, IRS implemented the initial releases of CADE, which have been used to process filing year 2004 and 2005 Form 1040EZ returns, respectively, for single taxpayers with refund or even-balance returns. According to IRS, as of March 16, 2005, CADE had processed over 842,000 tax returns so far this filing season.

Integrated Financial System (IFS). This system replaced aspects of IRS’s core financial systems and is ultimately intended to operate as its new accounting system of record. The first release of this system became fully operational in January 2005.

In prior years, IRS deployed several systems, including (1) Customer Communications 2001, to improve telephone call management, call routing, and customer self-service applications; (2) Customer Relationship Management Examination, to provide off-the-shelf software to IRS revenue agents to allow them to accurately compute complex corporate transactions; and (3) Internet Refund/Fact of Filing, to improve taxpayer self-service by providing taxpayers, via the Internet, with instant refund status information and instructions for resolving refund problems.

Although IRS is to be applauded for delivering important BSM functionality, the BSM program is far from complete. Future deliveries of additional functionality of deployed systems and the implementation of other BSM projects are expected to have a significant impact on IRS’s taxpayer services and enforcement capability as well as its efforts to continue to improve its financial management. For example, IRS has projected that CADE will process about 2 million returns in the 2005 filing season. However, the returns being processed in CADE are the most basic and constitute less than 1 percent of the total tax returns expected to be processed during the current filing season. IRS expects the full implementation of CADE to take several more years. Another BSM project—the Filing and Payment Compliance (F&PC) project—is expected to increase (1) IRS’s capacity to treat and resolve the backlog of delinquent taxpayer cases, (2) the closure of collection cases by 10 million annually by 2014, and (3) voluntary taxpayer compliance. As part of this project, IRS plans to deliver an initial limited private debt collection capability in January 2006, with full implementation of this aspect of the F&PC project to be delivered by January 2008 and additional functionality to follow in later years. Finally, full implementation of CADE, as well as the successful implementation of future releases of IFS and efforts to address the impact of IRS’s decision to discontinue the Custodial Accounting Project (CAP), will be critical to addressing many of IRS’s remaining and long-standing financial management issues.

For IRS to build on the gains made since passage of RRA 98, the agency must address numerous challenges related to resource management. IRS faces budgetary constraints that may be addressed in part by developing goals for assessing performance and informing budget decisions, looking for opportunities to enhance its funding, and leveraging the resources of nonfederal partners. IRS also faces the challenges of improving efficiency in taxpayer service and tax law enforcement, developing useful cost accounting tools, and improving productivity. Finally, IRS faces information systems challenges, both in BSM and in addressing systems security shortfalls.
For IRS, the Congress, and IRS’s other stakeholders, long-term goals can be used to assess performance and progress and to determine whether budget decisions contribute to achieving those goals. Without long-term goals, the Congress and other stakeholders are hampered in evaluating whether IRS is making satisfactory long-term progress. Further, without such goals, the extent to which IRS’s 2006 budget request would help IRS achieve its mission over the long term is less clear. A recent Program Assessment Rating Tool (PART) review conducted by the Office of Management and Budget (OMB) reported that IRS lacks long-term goals. As a result, IRS has been working to identify and establish long-term goals for all aspects of its operations for over a year. IRS officials said these goals will be finalized and provided publicly as an update to the agency’s strategic plan in the near future.

Long-term goals and results measurement are a component of the statutory strategic planning and management framework that the Congress adopted in the Government Performance and Results Act of 1993. As a part of this comprehensive framework, long-term goals that are linked to annual performance measures can help guide agencies when considering organizational changes and making resource decisions. For example, long-term goals would provide IRS with a framework for assessing budgetary tradeoffs between taxpayer service and enforcement and for determining whether IRS is making satisfactory progress toward achieving those goals. Similarly, long-term goals could help identify priorities within the taxpayer service functions (e.g., if the budget for taxpayer service were to be cut and efficiency gains did not offset the cut, long-term goals could help guide decisions about whether to make service cuts across the board or to target selected services). Perhaps most important, long-term compliance goals coupled with periodic measurement of compliance levels would provide IRS with a better basis for determining to what extent its various day-to-day service and enforcement efforts contribute to compliance in the long run.

Furthermore, long-term, quantitative goals may help IRS consider new strategies to improve compliance, especially since these strategies could take several years to implement. For example, IRS’s progress toward the goal of having 80 percent of all individual tax returns electronically filed by 2007 has required enhancement of its technology, development of software to support electronic filing, education of taxpayers and practitioners, and other steps that could not be completed in a short time frame. Focusing on intended results can also promote strategic and disciplined management decisions that are more likely to be effective because managers who use fact-based performance analysis are better able to target areas most in need of improvement and select appropriate interventions.

Identifying potential new sources of funds could be an opportunity for helping to mitigate IRS’s budget constraints. Current examples of resource enhancers—user fees and private debt collection—may provide useful models for IRS and the Congress to consider. User fees are collected from identifiable recipients of special benefits beyond those accruing to the general public. In 2004, IRS collected over $137 million in user fees for a wide range of services, including installment agreements, offers in compromise, and Freedom of Information Act (FOIA) requests.
In fiscal year 2004, about 82 percent of all user fees collected by IRS were for installment agreements or Employee Plans and Exempt Organizations letter rulings and determination letters. The 1995 Treasury Appropriation Act specifies that IRS can keep a maximum of $119 million per year of the user fees it collects, with the rest of the user fees going into the Treasury general fund. In 2004, IRS retained about $90 million from the user fees collected (see table 2). In comparison, IRS’s total spending in 2004 was $10.7 billion. In setting certain user fees, IRS must follow Internal Revenue Code (IRC) Section 7528, which authorizes user fees for letter rulings, opinion letters, determination letters, and similar requests. IRC Section 7528 requires that user fees (1) vary according to categories or subcategories, (2) take into account the average time and difficulty of requests by categories or subcategories, (3) be payable in advance, and (4) be subject to appropriate exemptions and reduced fees within limits specified by Section 7528. IRS is precluded from expending any fees collected pursuant to IRC Section 7528 unless provided by an appropriations act. As mentioned earlier, the 1995 Treasury Appropriation Act specifies that IRS can keep a maximum of $119 million per year in user fee collections. OMB Circular A-25, User Charges, establishes general federal policy for user fees assessed for government services by executive branch agencies. A-25 requirements include (1) identifying services and activities that convey special benefits; (2) determining their full cost or market price, as appropriate; (3) reviewing user fees biennially for unanticipated cost or market price changes; and (4) reviewing biennially agency programs not subject to user fees to determine if such fees should be assessed.

Private debt collection provides another example of a revenue enhancement model that may be useful for IRS. The 2004 American Jobs Creation Act permitted IRS to contract with private collection agencies (PCA) to collect some federal tax debts and allows IRS to keep a portion of the funds collected by PCAs. PCAs will not replace IRS’s own collection resources but will handle cases that do not require enforcement action or discretion in resolving tax liabilities. According to IRS, the private debt collection program will help reduce the significant and growing number of cases that currently go uncollected and enable IRS to focus existing resources on more difficult cases. IRS will begin a limited implementation phase of the private debt collection program in 2005, and full implementation is planned for 2007. The law allows IRS to retain and use up to 25 percent of any amounts collected to pay for collection services and IRS collection enforcement activities. IRS expects to retain $10 million of PCA collections in fiscal year 2007 and more in later years.

IRS has leveraged nonfederal resources to make improvements to taxpayer service and tax law enforcement. The examples below highlight the variety of such leveraging and could provide a basis for exploring whether additional such opportunities exist. One example involving taxpayer service is the Free File Alliance. In 2003, IRS entered into a 3-year agreement with the Free File Alliance, a consortium of tax preparation companies that provides free electronic filing to taxpayers who access any of the companies via a link on IRS’s Web site. IRS has benefited from this partnership because it encourages electronic filing of tax returns.
For example, as of March 16, 2005, 3.6 million tax returns had been filed via Free File, which represents a 44 percent increase over the same period last year. IRS has also established partnerships with states and several cities to assist in combating abusive tax schemes. In September 2003, IRS announced the establishment of a nationwide partnership to combat abusive tax avoidance. Under agreements with individual states, IRS shares information on abusive tax avoidance transactions and those taxpayers who participate in them. The agreements creating this partnership were designed to enable states and IRS to move more aggressively in addressing this tax compliance problem. The partnership also includes joint public outreach activities to more effectively counter the claims of those marketing tax schemes.

Another example of IRS’s effort to leverage nonfederal resources is the more than 13,500 volunteer sites run by community-based coalitions. IRS awards grants, trains and certifies volunteers, and provides reference materials, computer software, and, in some cases, computers to these volunteer organizations to help primarily low-income and elderly taxpayers prepare their returns. Since 2001, the number of taxpayers seeking return preparation assistance at volunteer sites has increased an average of 19 percent per year. During the 2004 filing season, taxpayers had over five times more returns prepared at volunteer sites than at IRS walk-in sites. This trend reflects IRS’s strategy to shift return preparation to sites staffed by volunteer and community-based coalitions that are overseen by IRS. IRS has encouraged the shift by advertising the locations of these sites. As we noted earlier, the shift of taxpayers from walk-in to volunteer sites is important because it has transferred time-consuming services, particularly return preparation, from IRS to volunteer sites and allowed IRS to concentrate on services that only it can provide, such as account assistance or compliance work. However, as we also noted earlier, there have been concerns raised about the quality of service at both walk-in and volunteer sites. In addition, in her January 2005 report, the Taxpayer Advocate expressed concern about the reduction of face-to-face services, such as those offered at walk-in sites. She stated that IRS’s plan does not adequately provide for the segment of the population that continues to rely on the interaction provided by walk-in sites. Better data about the quality of service at volunteer sites would provide a baseline for making decisions about how to better manage quality.

For at least two reasons, this is an opportune time to review the menu of taxpayer services that IRS provides. First, IRS’s budget for taxpayer services was reduced in 2005, and an additional reduction is proposed for 2006. These reductions have forced IRS to propose scaling back some services, including the hours of telephone contact availability. Second, as we have reported, IRS has made significant progress in improving the quality of its taxpayer services. For example, IRS now provides many Internet services that did not exist a few years ago and has noticeably improved the quality of telephone services. This opens up the possibility of maintaining the overall level of taxpayer service but with a different menu of service choices. Cuts in selected services could be offset by the new and improved services.
Generally, as indicated in the budget, the menu of taxpayer services that IRS provides covers assistance, outreach, and processing. Assistance includes answering taxpayer questions by telephone, by correspondence, and face to face at IRS walk-in sites. Outreach includes educational programs and the development of partnerships. Processing includes issuing millions of tax refunds. When considering program reductions, we support a targeted approach rather than across-the-board cuts. A targeted approach helps reduce the risk that effective programs are reduced or eliminated while ineffective or lower priority programs are maintained. With the above reasons in mind for reconsidering IRS’s menu of services, we have compiled a list of options for targeted reductions in taxpayer service. The options on this list are not recommendations but are intended to contribute to a dialogue about the tradeoffs faced when setting IRS’s budget. The options presented meet at least one of the following criteria that we generally use to evaluate programs or budget requests. These criteria include that the activity duplicates other efforts that may be more effective and/or efficient; historically does not meet performance goals or provide intended results as reported by GAO, TIGTA, IRS, or others; experiences a continued decrease in demand; lacks adequate oversight, implementation and management plans, or structures and systems to be implemented effectively; has been the subject of actual or requested funding increases that cannot be adequately justified; or has the potential to make an agency more self-sustaining by charging user fees for services provided.

We recognize that the options listed below involve tradeoffs. In each case, some taxpayers would lose a service they use. However, the savings could be used to help maintain the quality of other services. We also want to give IRS credit for identifying savings, including some on this list. The options include the following:

Closing walk-in sites. As discussed previously, taxpayer demand for walk-in services has continued to decrease, and staff answer a more limited range of tax law questions in person than they answer by telephone.

Limiting the types of telephone questions answered by IRS assistors. IRS assistors still answer some refund status questions even though IRS provides automated answers via telephone and its Web site.

Mandating electronic filing for some filers, such as paid preparers or businesses. As noted, efficiency gains from electronic filing have enabled IRS to consolidate paper processing operations.

Charging for services. For example, IRS provides paid preparers with information on federal debts owed by taxpayers seeking refund anticipation loans.

Multiple enforcement strategies could help IRS reduce the tax gap. Given its size, even small or moderate reductions in the net tax gap could yield substantial returns. For example, based on IRS’s most recent estimate, a 1 percent reduction in the net tax gap would likely yield more than $2.5 billion annually (see the brief calculation following this discussion). Although reducing the tax gap may be an attractive means to improve the nation’s fiscal position, achieving this end will be a challenging task given persistent levels of noncompliance. IRS has made efforts to reduce the tax gap since the early 1980s, yet the tax gap is still large—although without these efforts it could be even larger.
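The $2.5 billion figure follows directly from the net tax gap range cited earlier ($257 billion to $298 billion for tax year 2001); the arithmetic below is our illustration of that sentence, not a separate IRS estimate:

\[ 0.01 \times \$257\ \text{billion} \approx \$2.6\ \text{billion} \qquad\text{and}\qquad 0.01 \times \$298\ \text{billion} \approx \$3.0\ \text{billion}. \]

That is, a 1 percent reduction in the net tax gap would recover roughly $2.6 billion to $3.0 billion per year, consistent with the "more than $2.5 billion" statement above.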
Also, IRS is challenged in reducing the tax gap because the tax gap is spread across the five different types of taxes that IRS administers, and a substantial portion of the tax gap is attributed to taxpayers who are not subject to withholding or information reporting requirements. Moreover, as we have reported in the past, closing the entire tax gap may not be feasible or desirable, as it could entail more intrusive recordkeeping or reporting than the public is willing to accept or more resources than IRS is able to commit.

Although much of the tax gap that IRS currently recovers is recovered through enforcement actions, a sole focus on enforcement will not likely be sufficient to further reduce the net tax gap. Rather, the tax gap must be attacked on multiple fronts and with multiple strategies on a sustained basis. For example, efforts to simplify the tax code and otherwise alter current tax policies may help reduce the tax gap by making it easier for individuals and businesses to understand and voluntarily comply with their tax obligations. For instance, reducing the multiple tax preferences for retirement savings or education assistance might ease taxpayers’ burden in understanding and complying with the rules associated with these options. Also, simplification may reduce opportunities for tax evasion through vehicles such as abusive tax shelters.

For any given set of tax policies, IRS’s efforts to reduce the tax gap and ensure appropriate levels of compliance will need to be based on a balanced approach of providing service to taxpayers and enforcing the tax laws. Furthermore, providing quality services to taxpayers is an important part of any overall strategy to improve compliance and thereby reduce the tax gap. As we have reported in the past, one method of improving compliance through service is to educate taxpayers about confusing or commonly misunderstood tax requirements. For example, if the forms and instructions taxpayers use to prepare their taxes are not clear, taxpayers may be confused and make unintentional errors. One method to ensure that forms and instructions are sufficiently clear is to test them before use. However, we reported in 2003 that IRS had tested revisions to only five individual forms and instructions from July 1997 through June 2002, although hundreds of forms and instructions had been revised in 2001 alone.

Finally, in terms of enforcement, IRS will need to use multiple strategies and techniques to find noncompliant taxpayers and bring them into compliance. One pair of tools has been shown to lead to high levels of compliance: withholding tax from payments to taxpayers and having third parties report to IRS and to taxpayers the income paid to those taxpayers. For example, banks and other financial institutions provide information returns (Forms 1099) to account holders and IRS showing the taxpayers’ annual income from some types of investments. Similarly, most wages, salaries, and tip compensation are reported by employers to employees and IRS through Form W-2. Preliminary findings from NRP indicate that more than 98.5 percent of these types of income are accurately reported on individual returns.
Regularly measuring compliance can offer many benefits, including helping IRS identify new or major types of noncompliance, identify changes in tax laws and regulations that may improve compliance, more effectively target examinations of tax returns or other enforcement programs, understand the effectiveness of its programs to promote and enforce compliance, and determine its resource needs and allocations. For example, by analyzing 1979 and 1982 compliance research data, IRS identified significant noncompliance with the number of dependents claimed on tax returns and justified a legislative change to address the noncompliance. As a result, for tax year 1987, taxpayers claimed about 5 million fewer dependents on their returns than would have been expected without the change in law. In addition, tax compliance data are useful outside of IRS for tax policy analysis, revenue estimating, and research. IRS research officials have proposed a compliance measurement study that will allow IRS to update underreporting estimates involving flow-through entities. This study, which IRS intends to begin in fiscal year 2006, would take 2 to 3 years to complete. Because either individual taxpayers or corporations may be recipients of income (or losses) from flow-through entities, this study could affect IRS’s estimates for the underreporting gap for individual and corporate income taxes. While these data and methodology updates could improve the tax gap estimates, IRS has no documented plans to periodically collect more or better compliance data over the long term. Other than the proposed study of flow-through entities, IRS does not have plans to collect compliance data for other segments of the tax gap. Also, IRS has indicated that given its current research priorities, it would not begin another NRP study of individual income tax returns before 2008, if at all, and would not complete such a study until at least 2010. When IRS initially proposed the NRP study, it had planned to study individual income tax underreporting on a 3-year cycle. According to IRS officials, IRS has not committed to regularly collecting compliance data because of the associated costs and burdens. Taxpayers whose returns are examined through compliance studies such as NRP bear costs in terms of time and money. Also, IRS incurs costs, including direct costs and opportunity costs—revenue that IRS potentially forgoes by using its resources to examine randomly selected returns, which may include returns from compliant taxpayers, as opposed to traditional examinations that focus on taxpayer returns that likely contain noncompliance and may more consistently produce additional tax assessments. Although the costs and burdens of compliance measurement are legitimate concerns, as we have reported in the past, we believe compliance studies to be good investments. Without current compliance data, IRS is less able to determine key areas of noncompliance to address and actions to take to maximize the use of its limited resources. The lack of firm plans to continually obtain fresh compliance data is troubling because the frequency of data collection can have a large impact on the quality and utility of compliance data. As we have reported in the past, the longer the time between compliance measurement surveys, the less useful they become given changes in the economy and tax law. In designing its recently completed NRP study, IRS balanced the costs, burdens, and compliance risk of studying that area of the tax gap. 
Any plans for obtaining and maintaining reasonably current information on compliance levels for all portions of the tax gap would similarly need to take into account costs, burdens, and compliance risks in determining which areas of compliance to measure and the scope and frequency of such measurement. The NRP survey had the added benefit of using casebuilding to aid examiners in determining whether IRS needed to contact taxpayers to verify the accuracy of information reported on their tax returns. The casebuilding tools consisted of data from both IRS and third-party sources. IRS's NRP casebuilding included return information from the prior 3 years, audit history, payment and filing history, information return data reported by third parties (banks, lending institutions, and others), and bank reports on large cash transactions. NRP casebuilding tools also included data from third-party sources, such as an external public database containing real estate and other asset ownership information (e.g., motor vehicle registrations and ownership of luxury items like watercraft and aircraft). Another third-party data source was the Dependent Data Base, which is a combination of Department of Health and Human Services and Social Security Administration data. These data provide custody information that can help determine the validity of dependent and Earned Income Tax Credit (EITC) claims. Use of these data helped IRS enforcement staff rule out compliance issues that could be verified without contacting taxpayers. As IRS moves to further strengthen enforcement and introduce enforcement initiatives, one management challenge will be coordinating across IRS programs and offices. An initiative that identifies noncompliance has resource implications for downstream activities such as collections, criminal investigations, and appeals. Without appropriate, coordinated follow-up, compliance initiatives run the risk of becoming toothless. IRS has experienced this sort of imbalance in the past. For example, in 2002 we reported on the growing backlog of collections cases generated by the upstream exam and assessment functions that the downstream collections function lacked the capacity to pursue. Managing a federal agency as large and complex as IRS requires managers to constantly weigh the relative costs and benefits of different approaches to achieving the goals mandated by the Congress. Management is constantly called upon to make important long-term strategic as well as daily operational decisions about how to make the most effective use of the limited resources at its disposal. As constraints on available resources increase, these decisions become correspondingly more challenging and important. In order to rise to this challenge, management needs to have at its disposal current and accurate information upon which to base its decisions, and to enable it to monitor the effectiveness of actions taken over time so that appropriate adjustments can be made as conditions change. However, in its ongoing effort to make such increasingly difficult resource allocation decisions and defend those decisions before the Congress, IRS management has long been hampered by a lack of current and accurate information concerning the costs of the various options being considered. This has impaired management's ability to properly decide which, if any, of the options at hand are worth the cost relative to the expected benefits.
For example, accurate and timely cost information may help IRS consider changes in the menu of taxpayer services that it provides by identifying and assessing the relative costs, benefits, and risks of specific projects. Without reliable cost information, IRS’s ability to make such difficult choices in an informed, reasoned manner is seriously impaired. Similarly, IRS should periodically reassess the prices it charges taxpayers in user fees for various services, such as entering into installment agreements and making determinations about the tax exemption status of certain organizations. The cost of providing such services is supposed to be a major factor in setting the related fees. However, without timely and reliable cost information, the basis for the fees becomes problematic. The lack of reliable cost information also means that IRS cannot prepare cost-based performance measures to assist in measuring the effectiveness of its programs over time. IRS lacks reliable and timely cost information because prior to fiscal year 2005, it did not have a cost accounting system to accumulate and report the reliable cost information that managers needed to support informed decision making. Instead, management often relied on a combination of the limited existing cost information; the results of special analysis initiated to establish the full cost of a specific, narrowly defined task or item; and estimates based on the best judgment of experienced staff. In fiscal year 2005, IRS implemented a cost accounting module as part of the first release of its IFS. However, while this module has much potential and has begun accumulating cost information, management has not yet determined what the full range of its cost information needs are or how best to tailor the capabilities of this module to serve those needs. IRS has also not yet implemented a related workload management system intended to provide the cost module with detailed personnel cost information. In addition, because it generally takes several years of historical cost information to support meaningful estimates and projections, IRS cannot yet rely on this system as a significant planning tool. It will likely require several years and implementation of additional components of IFS before the full potential of IRS’s cost accounting module will be realized. In the interim, IRS decision making will continue to be hampered by inadequate underlying cost information. IRS needs to make the most use of its available resources and a key to this is improved productivity. Productivity is defined as the efficiency with which inputs are used to produce outputs. It is measured as the ratio of outputs to inputs. Productivity and cost are inversely related—as productivity increases, average costs decrease. Consequently, information about productivity can inform budget debates as a factor that explains the level or changes in the cost of carrying out different types of activities. Improvements in productivity either allow more of an activity to be carried out at the same cost or the same level of activity to be carried out at a lower cost. Sound productivity data are an important element of meaningful productivity improvement efforts. As part of our review of IRS process improvement initiatives, private sector executives we met with stressed the benefits of productivity analysis. They said that an inadequate understanding of productivity makes it harder to distinguish processes with a potential for improvement from those without such potential. 
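As a minimal illustration of the productivity and cost relationship described above (hypothetical figures, not IRS data), the same inputs and outputs that define the productivity ratio also determine average cost, so an increase in one implies a decrease in the other:

```python
# Hypothetical illustration: productivity = outputs / inputs, and average cost
# per unit of output falls as productivity rises, holding input costs constant.
def productivity(outputs, inputs):
    return outputs / inputs

def average_cost(outputs, inputs, cost_per_input):
    return (inputs * cost_per_input) / outputs

# Two illustrative years: 10,000 and then 11,000 cases closed with 200 staff years.
for year, closed, staff in [(1, 10_000, 200), (2, 11_000, 200)]:
    print(f"Year {year}: {productivity(closed, staff):.1f} closures per staff year, "
          f"${average_cost(closed, staff, cost_per_input=100_000):,.0f} per closure")
```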
GAO's Business Process Reengineering Assessment Guide also highlighted the importance of being able to identify processes that are in greatest need of improvement. Opportunities exist to improve enforcement productivity data and give IRS managers a more informed basis for decisions on how to make improvements. Statistical methods that are widely used in both the public and private sectors can be used to adjust productivity measures for quality and complexity. In particular, by using these methods, managers can distinguish productivity changes that represent real efficiency gains or losses from those that are due to changes in quality standards. These methods could be implemented using data currently available at IRS. The cost of implementation would be chiefly the staff time required to adapt the statistical models. Although the computations are complex, the methods can be implemented using existing software. We currently have under way a separate study that illustrates how these methods can be used to create better productivity measures at IRS. The BSM program has a long history of significant cost increases and schedule delays, which, in part, has led us to report this program as high risk since 1995. In January 2005 letters to congressional appropriations committees, IRS stated that it had shown a marked improvement in reducing its cost variances. In particular, IRS claimed that it reduced the variance between estimated and actual costs from 33 percent in fiscal year 2002 to 4 percent in fiscal year 2004. However, we do not agree with the methodology used in the analysis supporting this claim. Specifically, (1) the analysis did not reflect actual costs, but instead reflected changes in cost estimates (i.e., budget allocations) for various BSM projects; (2) IRS aggregated all of the changes in the estimates associated with the major activities for some projects, such as CADE, which masked the fact that monies were shifted from future activities to cover increased costs of current activities; and (3) the calculations were based on a percentage of specific fiscal year appropriations, which does not reflect that these are multiyear projects. In February 2002 we expressed concern over IRS's cost and schedule estimating and made a recommendation for improvement. IRS and its prime systems integration support (PRIME) contractor have taken action to improve their estimating practices, such as developing a cost and schedule estimation guidebook and developing a risk-adjustment model to include an analysis of uncertainty. These actions may ultimately result in more realistic cost and schedule estimates, but our analysis of IRS's expenditure plans over the last few years shows continued increases in estimated project life-cycle costs (see fig. 4). The Assistant Chief Information Officer (CIO) for BSM stated that IRS's cost and schedule estimating has improved in the past year. Our comparison of IRS's reported project costs and milestone completion dates presented in the July 2004 and April 2005 expenditure plans shows that two BSM projects, CADE Releases 1.1 and 1.2, were delivered at the estimated cost and on or before the scheduled completion dates projected in the July 2004 expenditure plan. It is important to note that this recent success is based on project cost and schedule estimates that were re-baselined in the second quarter of fiscal year 2004 with delivery dates in late fiscal year 2004 and early fiscal year 2005.
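Why the choice of baseline matters can be shown with purely hypothetical numbers (ours, not IRS's): measuring a project against a re-baselined estimate, or against a budget into which funds have been shifted from future work, yields a much smaller variance than measuring the same project against its original estimate.

```python
# Hypothetical figures chosen only to illustrate the baseline effect.
original_estimate = 100.0  # $ millions, estimate when the project was approved
revised_estimate = 125.0   # $ millions, estimate after re-baselining and fund shifts
actual_cost = 130.0        # $ millions, what the work ultimately cost

def cost_variance(actual, baseline):
    return (actual - baseline) / baseline

print(f"Variance against the original estimate: {cost_variance(actual_cost, original_estimate):.0%}")
print(f"Variance against the revised estimate:  {cost_variance(actual_cost, revised_estimate):.0%}")
```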
It is too early to tell whether this recent success signals a fundamental improvement in IRS's ability to accurately forecast project costs and schedules. The reasons for IRS's cost increases and schedule delays vary. However, we have previously reported that they are due, in part, to weaknesses in management controls and capabilities. We have previously made recommendations to improve BSM management controls, and IRS has implemented or begun to implement these recommendations. For example, in February 2002, we reported that IRS had not yet defined or implemented an information technology human capital strategy, and recommended that IRS develop plans for obtaining, developing, and retaining requisite human capital resources. In August 2004, the current Associate CIO for BSM identified the completion of a human capital strategy as a high priority. Among the activities that IRS is in the process of implementing are prioritizing its BSM staffing needs and developing a recruiting plan. IRS has also identified, and is in the process of addressing, other major management challenges. For example, poorly defined requirements have been among the significant weaknesses that have been identified as contributing to project cost overruns and schedule delays. As part of addressing this problem, in March 2005, the IRS BSM office established a requirements management office, although a leader has not yet been hired. The BSM program is undergoing significant changes as it adjusts to reductions in its budget. Figure 5 illustrates the BSM program's requested and enacted budgets for fiscal years 2004 through 2006. For fiscal year 2005, IRS received about 29 percent less funding than it requested (from $285 million to $203.4 million). According to the Senate report for the fiscal year 2005 Transportation, Treasury, and General Government appropriations bill, in making its recommendation to reduce BSM funding, the Senate appropriations committee was concerned about the program's cost overruns and schedule delays. In addition, the committee emphasized that in providing fewer funds, it wanted IRS to focus on its highest priority projects, particularly CADE. IRS's fiscal year 2006 budget request reflects a further reduction of about 2 percent, or about $4.4 million, from the fiscal year 2005 appropriation. It is too early to tell what effect the budget reductions will ultimately have on the BSM program. However, the significant adjustments that IRS is making to the program to address these reductions are not without risk, could potentially impact future budget requests, and will delay the implementation of certain functionality that was intended to provide benefit to IRS operations and the taxpayer. For example: Reductions in management reserve/project risk adjustments. In response to the fiscal year 2005 budget reduction, IRS reduced the amount that it had allotted to program management reserve and project risk adjustments by about 62 percent (from about $49.1 million to about $18.6 million). If BSM projects have future cost overruns that cannot be covered by the depleted reserve, this reduction could result in (1) increased budget requests in future years or (2) delays in planned future activities (e.g., delays in delivering promised functionality) to use those allocated funds to cover the overruns. Shifts of BSM management responsibility from the PRIME contractor to IRS.
Due to budget reductions and IRS’s assessment of the PRIME contractor’s performance, IRS decided to shift significant BSM responsibilities for program management, systems engineering, and business integration from the PRIME contractor to IRS staff. For example, IRS staff are assuming responsibility for cost and schedule estimation and measurement, risk management, integration test and deployment, and transition management. There are risks associated with this decision. To successfully accomplish this transfer, IRS must have the management capability to perform this role. Although the BSM program office has been attempting to improve this capability through, for example, implementation of a new governance structure and hiring staff with specific technical and management expertise, IRS has had significant problems in the past managing this and other large development projects, and acknowledges that it has major challenges to overcome in this area. Suspension of the Custodial Accounting Project (CAP). Although the initial release of CAP went into production in September 2004, IRS has decided not to use this system and to stop work on planned improvements due to budget constraints. According to IRS, it made this decision after it evaluated the business benefits and costs to develop and maintain CAP versus the benefits expected to be provided by other projects, such as CADE. Among the functionalities that the initial releases of CAP were expected to provide were (1) critical control and reporting capabilities mandated by federal financial management laws; (2) a traceable audit trail to support financial reporting; and (3) a subsidiary ledger to accurately and promptly identify, classify, track, and report custodial revenue transactions and unpaid assessments. With the suspension of CAP, it is now unclear how IRS plans to replace the functionality this system was expected to provide, which was intended to allow the agency to make meaningful progress toward addressing long-standing financial management weaknesses. IRS is currently evaluating alternative approaches to addressing these weaknesses. Reductions in planned functionality. According to IRS, the fiscal year 2006 funding reduction will result in delays in planned functionality for some of its BSM projects. For example, IRS no longer plans to include form 1041 (the income tax return for estates and trusts) in the fourth release of Modernized e-File, which is expected to be implemented in fiscal year 2007. The BSM program is based on visions and strategies developed in 2000 and 2001. The age of these plans, in conjunction with the significant delays already experienced by the program and the substantive changes brought on by budget reductions, indicates that it is time for IRS to revisit its long- term goals, strategy, and plans for BSM. As we have previously reported, such an assessment would include an evaluation of when significant future BSM functionality would be delivered. IRS’s Associate CIO for BSM has recognized that it is time to recast the agency’s BSM strategy because of changes that have occurred subsequent to the development of the program’s initial plans. According to this official, IRS is in the process of redefining and refocusing the BSM program, and he expects this effort to be completed by the end of this fiscal year. However, clear milestones for completing these activities have not been defined and we plan to address this in our ongoing 2005 BSM expenditure plan review for the appropriations committees. 
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies where maintaining the public’s trust is essential. In December 2002, the Congress enacted the Federal Information Security Management Act of 2002 (FISMA) to strengthen security of information and systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program to provide information security for the information and systems that support the operations and assets of the agency. IRS relies extensively on interconnected information systems to perform vital functions, such as collecting and storing taxpayer data, calculating interest and penalties, and generating refunds. In addition to processing its own financial and tax information, IRS provides information processing support to the Financial Crimes Enforcement Network (FinCEN), a Treasury bureau responsible for administering and enforcing the Bank Secrecy Act (BSA) and its implementing provisions. While IRS has made progress in correcting or mitigating previously reported information security control weaknesses, serious control weaknesses continue to exist over key financial and tax processing information systems. For example, during our review of information security at IRS facilities in 2004, we determined that IRS corrected or mitigated 32 of the 53 weaknesses that we reported as unresolved at the time of our last review in 2002. In addition to the 21 previously reported weaknesses that remained uncorrected, we identified 39 new information security control weaknesses during this review that placed sensitive taxpayer and BSA data—including information related to financial crimes, terrorist financing, money laundering, and other illicit activities—at significant risk of unauthorized disclosure, modification, and destruction. These include the following: Access controls over the mainframe computing environment provided no logical separation between IRS’s taxpayer data and FinCEN’s BSA data, allowing all 7460 mainframe users—IRS employees, non-IRS employees, and contractors—regardless of their official duties, the ability to read and modify taxpayer and BSA data, including information about citizens, law enforcement personnel, and individuals subject to investigation. Thus, IRS users could read or copy BSA information, and law enforcement users could read or copy taxpayer information. User accounts and passwords were not adequately controlled to ensure that only authorized individuals had access to IRS’s servers and networks, thereby increasing the risk that unauthorized users could gain authorized user ID and password combinations to claim a user identity and then use that identity to gain access to sensitive taxpayer or BSA data. Audit and monitoring of security-related events on IRS’s servers suffered from insufficient retention of security logs, heightening the risk of unauthorized system activity going undetected. Security over access to sensitive areas was jeopardized due to the lack of accountability over the issuance of master keys at an IRS facility, thereby increasing the likelihood that an unauthorized person could gain possession of a master key and use it to unlock sensitive computing areas within the facility. 
These information security control weaknesses exist primarily because IRS has not fully implemented an agencywide information security program to effectively protect the information and information systems that support the operations and assets of the agency. Consequently, these identified weaknesses in information security controls impair IRS's ability to ensure the confidentiality, integrity, and availability of sensitive financial, taxpayer, and FinCEN's BSA data hosted at its facility. We made recommendations to the Secretary of the Treasury to direct the IRS Commissioner to take several actions to fully implement a comprehensive agencywide information security program and to determine whether taxpayer data have been disclosed to unauthorized individuals. In addition, we recommended that the Secretary of the Treasury direct the FinCEN Director to perform an assessment to determine whether BSA data have been disclosed to unauthorized individuals. The Acting Deputy Secretary of the Treasury generally agreed with the recommendations and identified specific completed and planned corrective actions, which we did not verify. IRS is operating in a difficult budget environment. On the one hand, its workload—represented by the number of returns and the complexity of those returns—is growing. On the other hand, IRS faces pressure to hold down spending. Addressing the resource challenges summarized in this statement can help policy makers as they assess IRS's budget. Long-term goals can help determine overall budgetary requirements. Revenue enhancements and the leveraging of nonfederal resources can help, to some extent, meet those requirements. Productivity gains and successful new investments in systems can help ensure that existing resources are used as efficiently as possible, helping minimize the need for additional funding. Addressing these resource challenges does not promise a painless way out of difficult budget decisions. However, it could provide a clearer picture of the tradeoffs involved. Mr. Chairman, this concludes my testimony. I would be happy to answer any questions you may have at this time. For further information on this testimony, please contact James White at (202) 512-9110 or whitej@gao.gov. Individuals making key contributions to this testimony include Perry Datwyler, George Guttman, Tonia Johnson, David Lewis, Neil Pinney, Jeffrey Schmerling, Henry Sutanto, and Jenniffer Wilson.
Since the passage of the IRS Restructuring and Reform Act of 1998 (RRA 98), the Internal Revenue Service (IRS) has faced the challenge of managing its resources to simultaneously improve service to taxpayers, assure taxpayers' compliance with the tax laws, and modernize its antiquated information systems. As requested, this statement provides our assessment of IRS's current performance in the areas of taxpayer service, tax law enforcement, and systems modernization. Looking ahead, this statement also describes the challenges that IRS faces in addressing resource constraints as well as realizing efficiency and information systems improvements. IRS's most noticeable progress has been in taxpayer service, which has been of special concern to the Congress. Since the passage of RRA 98, improvements in access to IRS by telephone, the accuracy of answers given to taxpayer inquiries, and the growth of IRS's Web site, which now provides a variety of services, have been noteworthy accomplishments. IRS experienced declines in enforcement staffing after 1998, but has recently stopped the declines and begun to show increases. Despite this, enforcement remains a high-risk area because of the continued need to improve enforcement and make progress towards reducing the tax gap. IRS has made significant progress in establishing management controls and acquiring infrastructure as part of the BSM program, as well as significant progress in addressing financial management issues. However, BSM remains at risk because of the scope and complexity of modernization activities and the need for better management capacity to avoid repeating the program's history of schedule delays and cost overruns. Looking ahead, continuing the progress described above depends on IRS addressing resource constraints and realizing efficiency and systems improvements. We highlight several such opportunities: (1) developing long-term goals would help IRS and Congress assess agency performance and make budget decisions, (2) considering additional funding enhancements, such as user fees and private debt collection, which may help mitigate budget constraints, (3) leveraging nonfederal partners such as states to assist with tax law enforcement and volunteers to help provide taxpayer service, (4) prioritizing taxpayer service activities could help IRS minimize the impact of budget cuts, (5) targeting enforcement resources could help IRS make more efficient use of available resources and help the agency make progress towards reducing the tax gap, (6) creating the necessary systems to enable IRS to develop accurate cost accounting information would help IRS make resource allocation decisions, (7) developing and using better productivity data would help IRS make productivity improvements and thereby make better use of available resources, (8) making needed management improvements would help IRS bring planned new information systems on-line in a timely and cost-effective manner, and (9) making needed improvements to assure information systems security would reduce vulnerabilities.
The federal and Commonwealth governments have had a long-term interest in policies to stimulate economic growth in Puerto Rico. Historically, the centerpiece of these policies has been the combination of the possessions tax credit in the U.S. Internal Revenue Code (IRC) and extensive tax incentives in the Puerto Rican tax code for U.S. and foreign businesses. In the early 1990s Congress became dissatisfied with the effectiveness of the credit and introduced restrictions to better target employment-generating activities. Then in 1996 Congress repealed the credit but allowed existing possessions corporations to earn either the possessions credit or a replacement credit during a 10-year phaseout period ending in 2006. Various proposals have been placed before Congress for some form of replacement assistance to the Puerto Rican economy. Congress could better assess the merits of the various proposals if it had more complete information relating to the recent performance of the Puerto Rican economy, the current treatment that Commonwealth residents receive under both federal tax policies and federal social programs, and information relating to the burden of taxes that residents of Puerto Rico pay, relative to those paid by residents of the states and the other U.S. insular areas. To provide a basis for future decisions regarding legislation on Puerto Rican economic issues, this report explains how the U.S. federal tax treatment of individuals and businesses in Puerto Rico and of the insular government differs relative to the treatment of governments, businesses, and individuals in the states and the other U.S. insular areas; compares trends in Puerto Rico’s principal economic indicators since the early 1980s with similar indicators at the national level for the United States and provides what is known about capital flows between Puerto Rico and the United States and between Puerto Rico and foreign countries; reports on changes in the activities and tax status of the corporations that have claimed the possessions tax credit since 1993; provides information on the distribution of private-sector economic activity in Puerto Rico by type of business entity; describes the total amount of tax paid by individuals and businesses in the states and the U.S. insular areas and shows percentage breakdowns by type of tax; and describes how the principal U.S. federal social programs apply to Puerto Rican residents, relative to residents of the states and the other U.S. insular areas. Puerto Rico is one of the two nonstate Commonwealths associated with the United States. The other is the Commonwealth of the Northern Mariana Islands (CNMI). The United States also has three major territories under the jurisdiction of the U.S. Department of Interior. The major territories are Guam, the U.S. Virgin Islands, and American Samoa. The three major territories plus the two nonstate Commonwealths are referred to in this report as “the insular areas.” These areas are often grouped together in this manner for the purpose of federal legislation. For this reason, and when necessary for the purpose of comparison to Puerto Rico, this report provides a limited discussion on the other insular areas. With the exception of American Samoa, those born in the insular areas are U.S. citizens; however, insular area residents are not afforded all of the rights of citizens residing in the states. More than four million U.S. citizens and nationals live in the insular areas. 
These areas vary in terms of how they came under the sovereignty of the United States and also in terms of their demographics, such as median age and education levels. Each of the insular areas has its own government and maintains a unique diplomatic relationship with the United States. General federal administrative responsibility for all insular areas but Puerto Rico is vested in the Department of the Interior. All departments, agencies, and officials of the executive branch treat Puerto Rico administratively "as if it were a state"; any matters concerning the fundamentals of the U.S.-Puerto Rican relationship are referred to the Office of the President. Residents of all the insular areas have many of the rights enjoyed by U.S. citizens in the states. But some rights that, under the Constitution, are reserved for citizens residing in the states have not been extended to residents of the insular areas. For example, residents of the insular areas cannot vote in national elections, nor do their representatives have full voting rights in Congress. Residents of all of the insular areas receive federally funded aid for a variety of social programs. Although residents of an insular area do not pay federal income taxes on income earned in that insular area, federal tax policy does play an important role in the economies of the insular areas. Historically, the federal government has used tax policy as a tool to encourage investment and increase employment in the insular areas. Puerto Rico's Constitution of 1952 defines Puerto Rico as a self-governing Commonwealth of the United States. Although fiscally autonomous, Puerto Rico is similar to the states in many aspects. For example, matters of currency, interstate commerce, and defense are all within the jurisdiction of the U.S. federal government. Puerto Rican residents are required to pay local income taxes on income earned from Puerto Rican sources, but not federal income taxes. Puerto Rican residents, however, do contribute to the U.S. national Medicare and Social Security systems. Generally, federal labor, safety, and minimum wage laws and standards also apply in Puerto Rico to the same extent they apply in the states. The federal government plays a pervasive role in Puerto Rico that stems not only from the applicability of the United States Constitution, laws, and regulations, but also from the transfer to the island of more than $13 billion in federal funds every year to fund social programs to aid Puerto Rican residents, including earned benefits such as Social Security and unemployment benefits. Chapters 2 and 7 of this report discuss in detail how the U.S. federal tax code applies to residents of Puerto Rico and how the principal U.S. federal social programs are applied in Puerto Rico, respectively. Puerto Rico occupies a central position in the West Indies. It comprises six main islands with a land area of 3,421 square miles and a population of almost four million people. Puerto Rico is thought to have one of the most dynamic economies in the Caribbean region, an economy in which manufacturing, driven by the pharmaceutical industry, has surpassed agriculture as the primary sector in terms of domestic income. Over 40 percent of Puerto Rico's domestic income since the mid-1980s has been derived from manufacturing. Pharmaceuticals accounted for almost 40 percent of total value added in manufacturing in 1987; that share rose to over 70 percent by 2002.
Table 2 describes some of the demographic characteristics of Puerto Rico and compares them to national averages in 2000. Income that U.S. corporations earn in the possessions has long been subject to special tax provisions. The Tax Reform Act of 1976 modified the form of the preferential tax treatment by establishing the possessions tax credit under Section 936 of the Internal Revenue Code. The stated purpose of this tax credit was to "assist the U.S. possessions in obtaining employment-producing investments by U.S. corporations." Prior to 1994, the possessions tax credit was equal to the full amount of the U.S. income tax liability on income from a possession. The credit effectively exempted two kinds of income from U.S. taxation: (1) income from the active conduct of a trade or business in a possession, or from the sale or exchange of substantially all of the assets used by the corporation in the active conduct of such trade or business, and (2) certain income earned from financial investments in U.S. possessions or certain foreign countries, generally referred to as qualified possession source investment income (QPSII). In order for the income from an investment to qualify as QPSII, the funds for the investment must have been generated from an active business in a possession, and they must be reinvested in the same possession. Dividends repatriated from a U.S. subsidiary to a mainland parent have qualified for a dividends-received deduction since 1976, thus allowing tax-free repatriation of possession income. The possessions tax credit was criticized on the grounds that the associated revenue cost was high compared to the employment it generated, that a large share of the benefits of the credit were not reaped by Puerto Rican residents, and that it distorted debate over Puerto Rico's political status. The Omnibus Budget Reconciliation Act of 1993 placed caps on the amounts of possessions credits that corporations could earn for tax years beginning in 1994 or later. The Small Business Job Protection Act of 1996 repealed the possessions tax credit for taxable years beginning after 1995. However, the act provided transition rules under which a corporation that was an existing credit claimant was eligible to claim credits with respect to possessions business income for a period lasting through taxable years beginning before 2006. Additional background on Section 936 of the U.S. Tax Code and the possessions credit is provided in chapters 2 and 4. Several of our previous studies, as well as work done by the Internal Revenue Service (IRS) and the U.S. Census Bureau (Census), address aspects of the Puerto Rican economy discussed in this report, including the business activity of possessions corporations and employment, payroll, value added, and capital expenditures by economic sector. Our previous work also addresses broader trends in the Puerto Rican economy, as does work underway by the Brookings Institution. A related study is also expected shortly by the Joint Committee on Taxation. Its work will evaluate legislative options concerning Puerto Rico. Table 3 highlights the scope of several recent reports on Puerto Rico, as well as the two studies that are in progress. The Chairman and Ranking Minority Member of the U.S. Senate Committee on Finance asked us to study fiscal relations between the federal government and Puerto Rico and trends in the Commonwealth's economy with a particular focus on the activities of possessions corporations operating there. To determine the U.S.
federal tax treatment of individuals and businesses in Puerto Rico, relative to the states and the other insular areas, we examined the IRC, Department of the Treasury regulations, relevant Treasury rulings and notices, and legislation. To compare trends in principal economic indicators for the United States and Puerto Rico, we obtained data from both U.S. and Puerto Rican sources. The trends we present are commonly used measures of overall economic activity and important components of economic activity, such as saving, investment, labor force participation, and unemployment. We reported on many of these indicators in our previous report on economic trends in Puerto Rico. The data shown are largely drawn from the National Income and Product Account series produced annually by economic statistics agencies in the United States and Puerto Rico. Most of the data we used for the U.S. economic series are produced by the Bureau of Economic Analysis and the Bureau of Labor Statistics and are publicly available from the Internet. When we compared U.S. data to Puerto Rican data that are based on the Puerto Rican July 1–June 30 fiscal year, we computed annual U.S. figures using monthly or quarterly data to match the Puerto Rican fiscal year. Most of the annual data we used for Puerto Rican economic trends are produced by the Planning Board of Puerto Rico and are also publicly available. In some instances, the methodologies used by the Planning Board to produce certain data series are outdated relative to the methodologies now used by the United States. For example, the methodology used in calculating certain price indices in Puerto Rico is outdated and the methods used to obtain unemployment data have been somewhat less rigorous than in the United States. In these cases, we reviewed literature concerning the limitations of various series and interviewed Puerto Rican officials about the methods they use to collect and develop their data. These limitations are noted in the report. Wherever possible, we used alternative assumptions and data sources to determine if any conclusions drawn from the data are sensitive to the particular data series used. For example, we applied both U.S. and Puerto Rican price indices to Puerto Rican gross domestic product (GDP) data to see if applying different measures of price changes would lead to different conclusions about whether the Puerto Rican economy has been growing faster or slower than the U.S. economy. Puerto Rico’s Planning Board has recently contracted with several consultants for a review of their entire set of methodologies for preparing the Commonwealth’s income and product accounts, including the deflators. The Board has also been negotiating a memorandum of agreement with the U.S. Bureau of Economic Analysis for the latter to provide advice on this effort. For some indicators of interest, annual data are not available for Puerto Rico. In some of these cases, we used decennial census data. The decennial census covers both the United States and Puerto Rico and produces comparable statistics on educational attainment and poverty levels. We also used data from the Economic Census of Puerto Rico and the Economic Census of the United States, also produced by Census. These data included detailed information on employment, investment, and value added broken down by sector of the economy. These data, produced by Census every 5th year, are of particular relevance to the possible effects of phaseout of the possessions tax credit. 
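A minimal sketch of the fiscal-year alignment step described above, using invented monthly figures rather than the actual BEA or BLS series, shows how monthly U.S. data can be re-aggregated to Puerto Rico's July 1-June 30 fiscal year before the two economies are compared:

```python
# Illustrative only: sum monthly values into Puerto Rico's July-June fiscal
# year (labeled by its ending year) so both series cover the same 12 months.
from collections import defaultdict

def to_pr_fiscal_year(monthly):  # monthly: {(calendar_year, month): value}
    totals = defaultdict(float)
    for (year, month), value in monthly.items():
        fiscal_year = year + 1 if month >= 7 else year
        totals[fiscal_year] += value
    return dict(totals)

monthly_us = {(2001, m): 100.0 for m in range(1, 13)}
monthly_us.update({(2002, m): 110.0 for m in range(1, 13)})
# Fiscal year 2002 (July 2001 through June 2002) = 6 * 100 + 6 * 110 = 1,260.
print(to_pr_fiscal_year(monthly_us))
```

The partial fiscal years at either end of the sample would be dropped in an actual comparison.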
To provide information on what is known regarding the flow of capital into and out of Puerto Rico, we interviewed Puerto Rican government officials and private sector experts to help us ascertain what data were available. We determined that the available data would not allow us to present a comprehensive picture of the trends in capital flows. The most significant gap in that picture is data relating to direct investment by corporations incorporated outside of Puerto Rico, which is financed from within their own affiliated groups, rather than through financial institutions. We can, however, report on changes over the years between 1995 and 2004 in the amount of funds that nonresidents hold in the Puerto Rican banking system and the amount of funds that the banking system invests within and outside of the Commonwealth. In order to identify where the assets held in the Puerto Rican banking system are invested and where the owners of the banks' liabilities reside, we analyzed institution-specific data that the Office of the Commissioner of Financial Institutions (OCFI) collects for oversight purposes. Banks and certain other financial institutions in Puerto Rico are required to report detailed information regarding their assets, liabilities, and capital to the OCFI through a computerized "CALL report" data system. Appendix I describes our analysis of the financial data. We also used data provided by Puerto Rico's Government Development Bank to show trends in Puerto Rican government borrowing in the U.S. and local capital markets. The consensus of the government and private sector financial experts whom we interviewed was that all Puerto Rican government bonds that qualify for tax exemption under Section 103 of the IRC, such as bonds that are issued for the purpose of capital improvement projects, are sold in the U.S. market. All other Puerto Rican government bonds that are taxable in the United States but tax exempt in Puerto Rico are sold in the local market. The Government Development Bank was able to provide us with a complete and detailed accounting of each of its debt issues and to identify which ones did or did not qualify for the U.S. tax exemption. In order to examine changes in the activities of possessions corporations operating in Puerto Rico since the early 1990s, we constructed several databases from an assortment of tax return data we obtained from IRS and Puerto Rico's Department of Treasury. Our principal source of data was IRS's Statistics of Income unit (SOI), which compiles comprehensive data on possessions corporations every other year. We obtained the complete set of these biennial databases from 1993 through 2003 and used information from SOI to identify those possessions corporations that operated in Puerto Rico. For the first stage of our analysis, we linked the biennial records for each individual corporation by its employer identification number (EIN) so that we could identify any data gaps for specific corporations in particular years and so we could complete a second, more complicated data analysis (described below). We filled in missing data for individual corporations to the extent possible from other IRS files and through imputations based on surrounding-year data. The extent of the imputations was minimal relative to the population totals we report.
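A minimal sketch of the record-linkage and gap-filling steps just described (the EIN, values, and midpoint imputation are illustrative; the actual work drew on additional IRS files and many more variables):

```python
# Illustrative sketch: link each corporation's biennial records by EIN, then
# fill a missing year by averaging the surrounding years' values.
def link_by_ein(yearly_files):  # yearly_files: {year: {ein: value}}
    linked = {}
    for year, records in yearly_files.items():
        for ein, value in records.items():
            linked.setdefault(ein, {})[year] = value
    return linked

def impute_gap(series, missing_year):
    before = max((y for y in series if y < missing_year), default=None)
    after = min((y for y in series if y > missing_year), default=None)
    if before is not None and after is not None:
        series[missing_year] = (series[before] + series[after]) / 2
    return series

files = {1999: {"12-3456789": 40.0}, 2001: {}, 2003: {"12-3456789": 60.0}}
linked = link_by_ein(files)
print(impute_gap(linked["12-3456789"], 2001))  # {1999: 40.0, 2003: 60.0, 2001: 50.0}
```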
We used the final database on 656 possessions corporations that operated in at least 1 year between 1993 and 2003 to report on changes over time in the aggregate income, tax credit, and total assets of this population of corporations and to show how these particular variables were distributed across different industries. We also used data from the past four Economic Censuses of Puerto Rico (1987, 1992, 1997, and 2002) compiled by Census to show how the importance of possessions corporations in Puerto Rico’s manufacturing sector has changed over time. For the second stage of our analysis, we focused on a subpopulation of the largest groups of affiliated possessions corporations operating in Puerto Rico. For each of these groups we compiled data on other affiliated corporations (i.e., those sharing the same ultimate parent corporations) that also operated in Puerto Rico, but were not possessions corporations. The objective of this analysis was to assess the extent to which the large corporate groups that accounted for most of the activity of possessions corporations remained active in Puerto Rico, even as the operations of their possessions corporations were being phased out. We started by identifying the 77 largest groups of possessions corporations in terms of the amount of credit they earned, their total income, and their total assets. These large groups gave us a subpopulation that accounted for over 90 percent of the tax credit and income earned and over 90 percent of the assets owned by possessions corporations between 1993 and 2003, and at the same time reduced the number of corporations we had to work with from 656 to 172. This reduction in the number of corporations we had to work with was important because data limitations caused some of the steps in our database development to be very labor intensive. We used two key data sources to identify and obtain data for the members of the large groups that operated in Puerto Rico but which were not possessions corporations. The first source was the database in which IRS maintained the records of all forms 5471 that had been filed between 1996 and 2002. (The owners of controlled foreign corporations must file a separate form 5471 every year for each CFC that they own.) The second source was a database that the Puerto Rican Department of Treasury (with the assistance of the Government Development Bank) had recently transcribed from all Puerto Rican tax returns for tax years 1998 through 2001 filed by all corporations or partnerships that received tax incentives from the Government of Puerto Rico. Officials from the Department of Treasury and from the Puerto Rico Industrial Development Company (PRIDCO) told us that almost all U.S.- or foreign-owned manufacturing corporations operating in Puerto Rico receive tax incentives, as do corporations in designated service industries that export products or services from Puerto Rico. A total of 1,758 different taxpayers appeared in the database for at least 1 of the tax years. We used a series of both automated and manual search and matching approaches to link the CFCs and other types of companies from these two databases to our 77 large corporate groups. We also used information from both databases to determine which of the CFCs had operations in Puerto Rico and, in the case of CFCs with operations in multiple countries, to make a range of estimates for the amount of income they earned in Puerto Rico. 
The data on income, assets, taxes paid, and place of incorporation that we extracted from the two databases for these linked corporations allow us to provide a more complete picture of the trends in activities of the corporate groups that have taken advantage of the possessions tax credit over the years. Through interviews with officials from the agencies providing the data and our own computer checks for internal consistency in the data, we determined that the quality of the data was sufficient for the purposes of our report when viewed with the cautions we raise at various points in the text. One problem that afflicted all of the databases to some degree was missing values arising from the fact that IRS and the Puerto Rican Department of Treasury could not always obtain every tax return that should have been in their databases in a particular year and the fact that taxpayers did not always accurately fill in every line of the return that they should have. Our access to multiple databases that overlapped to some extent enabled us to address this problem by filling in gaps with data from an alternative file, making reasonable imputations, or at a minimum assessing whether missing values would have made a significant difference to our results. In order to show how economic activity in Puerto Rico is distributed across different forms of businesses, we negotiated a special arrangement with IRS and Census that enabled us to disaggregate the data from Census’s recently completed 2002 Economic Census of Puerto Rico by categories of business entities that are more specifically relevant to tax policymakers than the categories Census uses for its own publications. The 2002 Economic Census collected data on employment, payroll, and other economic measures from all nonfarm, private sector employers in Puerto Rico, making it a comprehensive enumeration of Puerto Rican businesses. We used taxpayer data from IRS and Puerto Rico to determine, in as many cases as possible, the type of federal or Puerto Rican income tax return each of these employers filed and, in the case of corporations, where they were incorporated. We then used this information to place each employer into a business entity group, such as possessions corporation, CFC incorporated in Puerto Rico, CFC incorporated elsewhere, sole proprietor, and so forth. Census then provided us with tabulations of their data for each of these groups, disaggregated by industry to the extent that their disclosure rules would permit. We developed a coding system and a data- exchange procedure that enabled us to link tax and Census data for specific employers in such a way that Census did not have to view restricted IRS data and we did not have to view confidential Census data for specific survey respondents. (See app. III for details.) The data that we used to determine the tax filing status and place of incorporation for the employers in the Census database came from the IRS and Puerto Rico databases described above, plus a couple of additional sources. Another important new source of data was IRS’s National Accounts Profile (NAP) database, which contains selected information for all individuals and businesses that have an EIN. Each employer in Puerto Rico has a federal EIN because it must collect Federal Insurance Contributions Act (FICA) taxes on behalf of its employees. Consequently, we were able to access NAP data for a very high percentage of the employers included in the Census. 
For those employers we were able to determine what, if any, federal income tax form they were required to file, whether they were included in their parent corporation’s consolidated return, and whether or not IRS had identified them as being sole proprietors. The other data sources that we used for this particular analysis included sets of income tax returns for some of the businesses operating in Puerto Rico that IRS had provided to Census, and a list of CFCs operating in Puerto Rico that PRIDCO had compiled. None of the non-Census data sources that we used was comprehensive and some of the sources more closely met our needs than others. Appendix III describes how we used these data to place each employer into a business entity group. For those cases where we could not reliably place an employer into a group based on tax data or data from PRIDCO we asked Census to place them into certain groups based on their survey responses. To compare the overall tax burden borne by individuals and businesses in Puerto Rico with the burden borne by individuals and businesses in the states and in the other insular areas, we obtained and analyzed detailed data on state and local government revenues from the U.S. Census of Governments, data on Commonwealth government revenue from the Puerto Rican Department of Treasury, data on municipal tax revenue in Puerto Rico from Oficina del Comisionado de Asuntos Municipales, Centro de Estadisticas Municipales, and revenue data for the other insular areas reported in their 2002 Single Audit reports. We also obtained data on federal taxes collected in Puerto Rico and the states from IRS’s 2002 Data book. (No such data were available for the insular areas.) We compared taxes paid on a per capita basis and as a percent of personal income. We make our comparison for year 2002 because that is the year of the most recent Census of Governments. We also compared federal expenditures for the states, Puerto Rico, and the insular areas using data we obtained from the Consolidated Federal Funds Report for Fiscal Year 2002 and the Federal Aid to States for Fiscal Year 2002. In addition, we report specifically on transfers of excise tax and customs duty revenues that the federal government makes to Puerto Rico using data obtained from U.S. Customs and the Alcohol and Tobacco Tax and Trade Bureau. To assess the reliability of the data, for the Census and Puerto Rican Treasury data we interviewed knowledgeable officials and reviewed supporting documentation to understand the internal procedures in place to ensure data quality. For the insular areas we compared data reported in the Single Audit reports to other published data. We determined that the data we obtained from the Puerto Rican Department of Treasury is consistent with what was reported in the Commonwealth’s Comprehensive Annual Financial Report. Although we found the data reliable for the purpose of our engagement, we note certain limitations in the data. In particular, all the state and local data compiled by Census are as-reported by cognizant government officials responsible for financial matters in each of the political entities and may not have been subjected to any internal or external accuracy checks. Checks performed by Census on its data are for completeness and consistency with internal and external sources. The independent auditor’s statement in the Single Audit reports for the insular areas indicated that the auditors generally could not verify the accuracy of reported information. 
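The two burden measures used in the comparison are simple ratios; the sketch below uses placeholder figures rather than the actual 2002 data:

```python
# Placeholder figures only; the actual comparison uses 2002 Census of
# Governments, IRS Data Book, and Single Audit data.
def tax_burden(total_taxes, population, personal_income):
    return {
        "per_capita": total_taxes / population,
        "share_of_income": total_taxes / personal_income,
    }

example = tax_burden(total_taxes=10e9, population=3.8e6, personal_income=45e9)
print(f"${example['per_capita']:,.0f} per person, "
      f"{example['share_of_income']:.1%} of personal income")
```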
A further limitation is that federal, state, and insular area fiscal years differ, so the data do not cover exactly the same period of time. Interviews with federal agencies and prior GAO work provided the basis for our description of the application of the principal U.S. federal social programs to Puerto Rico residents relative to the states and the other insular areas. To select the social programs included in this report we consulted with GAO experts in the areas of health care policy; education, workforce, and income security policy; and financial markets and community investment policy. With the help of these experts, we arrived at a list of the principal federal social programs, which we then pared down based on program availability and expenditure levels in Puerto Rico. We relied on prior GAO work and interviews with federal agency officials to determine how each program is applied in Puerto Rico, relative to the other areas. We used program-level data, supplied by federal agencies, to report program expenditures for fiscal year 2002. We selected fiscal year 2002 because in chapter 6 of this report we provide a more complete analysis of the revenue and expenditures of Puerto Rico, the states, and the other insular areas using the year of the most recent Census of Governments, 2002. Our methodologies for each objective were discussed with experts, including those from the Office of the Comptroller General of Puerto Rico, Puerto Rico’s Government Development Bank, Puerto Rico’s Planning Board, Puerto Rico’s Office of the Commissioner of Insurance Institutions, and Puerto Rico’s Office of the Commissioner of Financial Institutions. Federal-level experts included those from Census and IRS. Our work was performed from February 2004 to April 2006 in accordance with generally accepted government auditing standards. Individuals who are residents of Puerto Rico or other U.S. insular areas and who earn income only from sources outside of the states generally pay no federal income tax; however, their wages are all subject to Social Security and Medicare taxes, and wages paid to residents of Puerto Rico and the U.S. Virgin Islands also are subject to federal unemployment tax. Corporations organized in Puerto Rico, like those organized in the other U.S. insular areas, are generally treated for U.S. tax purposes as if they were organized under the laws of a foreign country. Until this year, special rules enabled corporations organized in the United States that met certain conditions to reduce the federal tax payable on income earned in and repatriated from Puerto Rico and other insular areas. Individuals residing in an insular area who earn income only from sources there file one income tax return there and are required to pay income tax only to that area. The U.S. income tax treatment of U.S.-source income of residents of an insular area (which does not include income earned in the insular areas, other than that earned by U.S. government employees) depends on the area: Residents of American Samoa and Puerto Rico must pay U.S. income tax on all their income from sources outside American Samoa or Puerto Rico, respectively, if such income exceeds the federal filing threshold. The U.S. government retains the tax collected from residents of Puerto Rico, but is required to transfer the tax collected from residents of American Samoa to its government.
Residents of Guam and CNMI owe income tax to the territory and Commonwealth, respectively, on their U.S.-source income; the governments of this territory and Commonwealth are required to transfer a portion of this tax revenue to the U.S. government if the resident’s income exceeds certain thresholds. Generally, the U.S. government does not tax, or receive any tax revenue from, U.S. Virgin Islands residents who have U.S.-source income, so long as such residents report all of their income, identify the source of their income, and pay their income taxes to the U.S. Virgin Islands. The U.S. income tax treatment of U.S. residents with Commonwealth- or insular area–source income also depends on the insular area: U.S. residents with income from Puerto Rico or American Samoa are subject to U.S. federal tax on that income. They also pay tax on that income to Puerto Rico or American Samoa, respectively, and receive a foreign tax credit against their U.S. tax liability for this amount. U.S. residents with income from Guam or CNMI owe U.S. income tax on that income; the federal government is required to transfer a portion of the tax revenue received from Guam and CNMI residents back to the respective territory and Commonwealth. U.S. residents who earn income in the U.S. Virgin Islands must file identical tax returns with both the government there and the U.S. government; each government’s share of the revenues is based on where the income was earned. The Federal Insurance Contributions Act imposes wage-based taxes on employers and employees in the United States and the Commonwealths and territories to support Social Security and Medicare. The employment upon which taxes are collected includes services performed in the United States and the insular areas. Taxes collected under the act are not transferred to the treasuries of the insular areas. The Federal Unemployment Tax Act imposes a tax on employers based on the wages they pay to employees. Puerto Rico and the U.S. Virgin Islands are the only insular areas covered by the Act. The proceeds of the tax are used to support the federal-state unemployment compensation program and are not transferred to the treasuries of either area. The federal government taxes a U.S. corporation on its worldwide income (reduced by any applicable foreign income tax credit), regardless of where the income is earned. When the tax is due depends on several factors, including whether the income is U.S.- or foreign-source and, if it is foreign income, on the structure of the corporation’s business operations. However, since 1976, and through taxable years beginning prior to December 31, 2006, U.S. corporations with a domestic subsidiary conducting a trade or business in insular areas could qualify to receive significant tax benefits through the possessions tax credit. Prior to taxable years beginning in 1994, the credit effectively exempted from U.S. taxation all possession-source income of a qualified possessions corporation. Dividends repatriated from a wholly owned possessions corporation to the mainland parent qualified for a 100 percent deduction, thus allowing tax-free repatriation of possession income. The credit also exempted qualified possession-source investment income (QPSII), which is certain income the possessions corporation earned from financial investments in U.S. possessions or certain foreign countries. A credit for qualified research expenses was also allowed for research conducted by a possessions corporation.
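A simple worked example may help show how the pre-1994 credit operated as an effective exemption. The income amount and the 35 percent rate below are purely illustrative assumptions, not figures for any actual corporation.

```python
# Hypothetical illustration of the pre-1994 possessions tax credit:
# the credit offsets the entire tentative U.S. tax on possession-source
# income, so that income is effectively exempt. All figures are assumed.

US_CORPORATE_RATE = 0.35                 # assumed statutory rate for illustration

possession_source_income = 100_000_000   # qualified income earned in Puerto Rico
tentative_us_tax = US_CORPORATE_RATE * possession_source_income
possessions_tax_credit = tentative_us_tax          # credit equals the tax otherwise due
net_us_tax_on_possession_income = tentative_us_tax - possessions_tax_credit

print(net_us_tax_on_possession_income)   # 0.0 -- the income is effectively exempt
```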
Starting in taxable years beginning in 1994, the amounts of possessions tax credits that a possessions corporation could claim were capped. Under the cap, a possessions corporation had to choose between two alternatives—a “percentage limitation” option or an “economic activity limitation” option. In 1996, the possessions tax credit was fully repealed for taxable years beginning after 2005. Existing possessions corporations could continue to claim the possessions tax credit for tax years beginning prior to 2006. These existing credit claimants, however, were subject to an income cap based on the average business income that the corporation earned in a possession during a specified “base period.” A possessions corporation electing the percentage limitation was subject to the income cap beginning in 1998, and a possessions corporation electing the economic activity limitation was subject to the income cap beginning in 2002. Only QPSII earned before July 1, 1996, qualified for the credit for tax years beginning after December 31, 1995. Corporations organized outside the United States, including corporations organized in Puerto Rico and the other insular areas, are generally treated as foreign corporations for U.S. tax purposes. These corporations are taxed on their U.S.-source earnings—the tax paid generally depends on whether the income is “effectively connected” with the conduct of a trade or business within the United States, but income from insular areas is not subject to U.S. tax. Foreign corporations pay U.S. tax at two rates—a flat 30 percent rate is withheld on certain forms of nonbusiness gross income from U.S. sources, and a tax is imposed at progressive rates on net income from a U.S. trade or business. Corporations in Puerto Rico must pay the 30 percent withholding tax; corporations in the other insular areas do not pay the withholding tax if they meet certain tests that establish close connections with the insular area in which the corporation was created. U.S.-source dividends paid to corporations organized in Puerto Rico are subject to a 10 percent withholding tax provided that the same tests mentioned above are satisfied and the withholding tax on dividends paid to the U.S. corporations is not greater than 10 percent. A corporation organized under the laws of an insular area may be treated as a controlled foreign corporation (CFC) for U.S. income tax purposes. To qualify as a CFC, the corporation must be more than 50 percent U.S.-owned, taking into account only U.S. shareholders that meet a 10 percent stock ownership test. Gross income from the active conduct of business in Puerto Rico or elsewhere outside of the United States is not taxed until it is repatriated to the U.S. shareholders in the form of dividends. Subject to certain limitations, these shareholders are entitled to a credit for any foreign income taxes paid by the CFC with respect to the earnings distributed. Certain types of passive income, such as dividends and interest, earned by CFCs are currently includible in the income of the U.S. shareholders, under subpart F of the U.S. Tax Code, even though those amounts are not actually distributed to them. These shareholders are, subject to certain limitations, also entitled to a credit for foreign income taxes paid with respect to the amounts includible in income under subpart F.
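The ownership test described above can be expressed compactly. The sketch below only illustrates the more-than-50-percent and 10-percent thresholds mentioned in the text; the data structure and example holdings are our own assumptions.

```python
# Sketch of the CFC ownership test described above: count only U.S.
# shareholders that meet the 10 percent stock ownership test, then ask
# whether together they own more than 50 percent. Holdings are hypothetical.

def is_controlled_foreign_corporation(holdings):
    """holdings: list of (is_us_shareholder, ownership_fraction) pairs."""
    qualifying_us_ownership = sum(
        fraction for is_us, fraction in holdings
        if is_us and fraction >= 0.10          # 10 percent ownership test
    )
    return qualifying_us_ownership > 0.50       # more than 50 percent U.S.-owned

# Example: U.S. shareholders hold 40% and 15%; foreign shareholders hold the rest.
print(is_controlled_foreign_corporation([(True, 0.40), (True, 0.15), (False, 0.45)]))  # True
```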
Certain kinds of income received by a CFC organized under the laws of an insular area are not considered subpart F income: income received from the sale in the insular area of personal property manufactured by the CFC in that area, dividend or interest income received from a related corporation also organized under the laws of that insular area, and rents or royalties from a related corporation received by a CFC organized under the laws of an insular area for the use of property in the insular area where the CFC is organized. The allocation of gross income, deductions, and credits between related taxpayers, such as intercompany sales from a CFC to a U.S. domestic parent, is subject to transfer pricing rules that are designed to prevent manipulation of the overall tax liability. In 2004, in response to a long-running dispute with the European Union, Congress repealed the extraterritorial income (ETI) exclusion and enacted a deduction relating to income attributable to domestic production activities. For purposes of the ETI exclusion, the United States included Puerto Rico. Puerto Rico is not included, however, in the definition of U.S. for purposes of the deduction for domestic production. Merchandise imported into an insular area from the United States is exempt from U.S. excise taxes. The only U.S. excise taxes that apply to products imported into any of the insular areas from another country are those where specific language extends the tax beyond the “United States,” which is generally defined, for tax purposes, as only the states. This language exists for a tax on petroleum (an environmental tax), a tax on certain vaccines, a tax on certain chemicals, and a tax on certain imported substances. If any revenue from these excise taxes is collected in American Samoa, Puerto Rico, or the U.S. Virgin Islands, the U.S. government retains the revenue. The governments of Guam or CNMI receive any revenue from these taxes collected in their respective territory and Commonwealth. There is a special “equalization” U.S. excise tax on articles manufactured in Puerto Rico or the U.S. Virgin Islands and exported to the United States equal to the tax that would have been imposed had the articles been manufactured in the United States. Subject to the limitations described below for distilled spirits, the U.S. Treasury returns all the revenue from the tax on articles manufactured in Puerto Rico to the Treasury there except the amounts needed to pay refunds and drawbacks to manufacturers and the amount needed to cover its enforcement expenses. The return to the U.S. Virgin Islands also excludes amounts needed to pay refunds and drawbacks, plus one percent of the total tax collected. All U.S. excise taxes collected on articles manufactured from Guam and CNMI and exported to the United States must be transferred to their respective territory and Commonwealth governments. A special limitation applies for the U.S. excise tax on distilled spirits manufactured in Puerto Rico and the U.S. Virgin Islands and exported to the United States. The tax rate ordinarily applied to rum is $13.50 per proof gallon exported, of which $10.50 per proof gallon is returned to the appropriate insular area. Puerto Rico and the U.S. Virgin Islands also share revenue from the U.S. excise tax collected on all rum imported into the United States from a foreign country. Their respective shares are proportionate to the relative sizes of their rum exports to the United States during the prior fiscal year. 
Puerto Rico’s share, however, cannot exceed 87.626889 percent or be less than 51 percent while the U.S. Virgin Islands’ share cannot exceed 49 percent nor drop below 12.373111 percent. The U.S. government collects duties on goods imported into “U.S. customs territory,” which encompasses the states and Puerto Rico, unless they are exempt. U.S. customs duties collected in Puerto Rico are deposited in a special U.S. Treasury account. After deductions for refunds and the expenses of administering customs activities in Puerto Rico, the remaining amounts are transferred to the treasury there. Although the U.S. Virgin Islands are not in “U.S. customs territory,” the U.S. government helps collect local duties there. These collections are transferred to the government of the U.S. Virgin Islands after items such as operational expenses are deducted. The U.S. government has authority to administer and enforce collection of custom duties in American Samoa, upon request of the Governor. Guam and CNMI administer and enforce their own customs policies and procedures. Items imported into “U.S. customs territory” from American Samoa, Guam, CNMI, and the U.S. Virgin Islands are subject to U.S. customs duties unless the items are exempt. The economic well-being of Puerto Rican residents, measured in terms of either per capita or median income, remains well below that of residents of the states. The relative progress that the Puerto Rican economy has made since 1980 is difficult to measure with precision for a number of reasons, including tax-induced distortions in how U.S. corporations have reported income earned in the Commonwealth. The low rate of labor participation is a crucial issue in Puerto Rico’s economic performance, and the rate of investment appears insufficient to significantly reduce the disparity between mainland and Puerto Rican incomes. As shown in figure 9, Puerto Rico’s per capita GDP of about $21,000 in 2005 remained well below U.S. per capita GDP of about $41,000. GDP is a broad measure of overall income or economic activity occurring within a nation’s borders in a given year. According to the Puerto Rican and U.S. national income and product accounts, this measure has grown more rapidly in Puerto Rico than in the United States since 1980, when viewed on a per capita basis after adjustments for inflation. However, for a number of reasons, the growth rate of real (meaning inflation-adjusted) GDP likely does not represent a very accurate measure of changes in the economic well-being of Puerto Rican residents. First, as a result of U.S. tax provisions and a development strategy pursued by successive Puerto Rican governments to use local tax incentives to attract investment by U.S. and foreign firms, a significant amount of the investment income included in GDP is paid out to U.S. and foreign investors. In figure 9, the income earned by nonresidents is approximately represented by the gap between Puerto Rican GDP and Puerto Rican GNP. GNP is a measure of the total amount of income earned by residents in a given year from sources within and from outside of the country. In contrast to Puerto Rico, GDP has been consistently about the same as GNP in the United States, which indicates that the amount of income earned abroad by U.S. residents is close to the amount of income earned by foreign owners of assets located in the United States. As of 2005, Puerto Rico’s per capita GNP of about $14,000 remained well below the U.S. level of about $41,000. 
Second, using the possessions tax credit, U.S.-based groups of affiliated corporations (i.e., those owned by a common U.S. parent corporation) with certain types of operations in Puerto Rico have had incentives to attribute as much net income to those operations as is legally permissible, rather than to related operations in the United States. Moreover, the nature of these incentives has changed during the period covered by our review. Consequently, the income reported by these corporations to have been earned in Puerto Rico in a given year may overstate the actual economic importance of their Puerto Rican production, and changes in income over the years may reflect not only changes in the economic activity of these corporations, but also changes in how corporations have computed their Puerto Rican-source income. Some of the data reported later in this chapter suggest that this so-called “income shifting” has taken place. This particular issue affects data on GDP and income and possibly value added for corporations owned by U.S. parent corporations; it should not affect GNP or income and value added for Puerto Rican-owned corporations. Third, as is the case for any country, the scale of the informal, or underground, economy in Puerto Rico is difficult to measure. If the informal economy in Puerto Rico is large relative to the informal economy in the United States, as some analysts believe, a relatively large amount of economic activity in Puerto Rico may not be reflected in national income and labor market statistics. As discussed below, the presence of a large informal economy may be one explanation of low reported labor force participation rates in Puerto Rico. Analysts who have recently looked at this issue disagree on the size of the informal economy and on whether it has been growing as a share of the total economy. The size and any growth in the informal economy in Puerto Rico, relative to that in the United States, would affect comparisons between levels and growth in per capita income earned in the two jurisdictions. Lastly, as acknowledged by the Puerto Rico Planning Board, there are problems with some Puerto Rican price indices, which cause an unknown degree of inaccuracy in the inflation adjustments to the long-term trend data on the Puerto Rican economy and, therefore, some imprecision in the real growth rates of key economic indicators that are stated in terms of dollar values. Most concerns center on the Puerto Rican consumer price index (CPI, a measure of the prices of consumer goods) and the fact that the market basket of goods used to compute the index has not been updated since the 1970s. This means that the index will tend to overstate price changes. In the analysis in this chapter, we have used the Puerto Rican gross product deflator—a broad measure of how prices have changed on average for goods and services in the economy—for our inflation adjustments. Although analysts within and outside of Puerto Rico’s Planning Board, which produces the deflator, consider it to be less problematic than the CPI, they still have concerns relating to the fact that the CPI is one of the components used in estimating the deflator and the fact that methodologies for other components are also outdated. Given the concerns with the Puerto Rican deflator, there is a question as to whether that measure or the U.S. gross product deflator more accurately accounts for the changes in prices in Puerto Rico. The U.S.
deflator shows slower price increases over this period than does the Puerto Rican deflator. For this reason, we also report some results based on the use of the U.S. deflator in cases where they differ notably from those based on the Puerto Rican deflator. When comparing the trends in real per capita GNP in Puerto Rico and the United States from 1980 to 2005, the choice of deflators does make a difference. Over that period, inflation-adjusted per capita income increased at an average annual rate of 1.9 percent in the United States, while it rose at 1.5 percent in Puerto Rico if the Puerto Rican deflator is used. However, if the U.S. deflator is applied to Puerto Rican GNP, annual real per capita GNP rose by 2.5 percent annually, faster than the growth in the United States. Real per capita GDP rose more rapidly in Puerto Rico than in the United States, regardless of which deflators are used. U.S. GDP rose at an annual average rate of 1.9 percent from 1980 to 2005, while the average annual growth rate for Puerto Rico was 2.1 percent using the Puerto Rican deflator and 3.2 percent using the U.S. deflator. Figure 10 shows the composition of Puerto Rican GDP over time and the trend in net income payments abroad. GDP consists of expenditures on personal consumption, investment, government consumption of goods and services, and net exports (the value of exports minus the value of imports). The figure shows that net exports have risen substantially from 1980 to 2005 as a share of GDP, and consumption, which is largely determined by Puerto Rican income, has fallen as a share of GDP. Figure 10 also shows net income payments abroad, expressed as a share of GDP. This series represents the amount of income paid to foreign owners of capital located in Puerto Rico, minus income earned by Puerto Ricans from investments outside of Puerto Rico. GNP differs from GDP by this amount. For Puerto Rico, the net outflow of income has increased as a share of GDP over the period, increasing the gap between GDP and GNP. Figure 11 shows the relationship between savings and investment in Puerto Rico. The components of total national saving in Puerto Rico are personal saving, government saving, business saving through retained earnings, and depreciation. The figure shows that investment in Puerto Rico has been greater than national saving, highlighting again that investment in Puerto Rico has been significantly financed by foreign sources. Since 2001, government saving has fallen and undistributed corporate profits have risen significantly. The personal saving rate as measured in the Puerto Rican national accounts has been negative since 1980. If transfers from foreigners to residents of Puerto Rico are underreported, however, the official data for income and saving would also be understated. We cannot provide a comprehensive picture of the trends in various components of U.S. and foreign investment in Puerto Rico because data are not available for one of the most important components—direct foreign investment, for which corporations obtain financing from within their own affiliated groups, rather than through financial institutions. We can, however, report trends for foreign funds flowing through key types of financial institutions and the Puerto Rican government. In the next two chapters, we will also provide some information on investments by important subpopulations of corporations. Over the past decade, the amount of nonresident funds flowing into depository institutions in Puerto Rico has increased steadily. 
Figure 12 shows Puerto Rico’s depository institutions’ liabilities between 1995 and 2004, and figures 52 and 53 in appendix II show the shift in deposits and debt, respectively. The composition of deposits has changed significantly with “exempt investments” by possessions corporations (which in the past had been encouraged by a special component of the possessions tax credit) being replaced by deposits obtained through brokers that sell certificates of deposits for the banks in the U.S. capital market. (Fig. 54 in app. II shows those offsetting trends.) Figure 13 below shows that the share of assets held by depository institutions in the United States and foreign countries has also increased over the past decade. A large part of this growth can be attributed to the increase in U.S. and foreign securities investments. Loans made by Puerto Rico’s depository institutions, which we assume to be primarily local, have also increased steadily. Figures 55 and 56 in appendix II show these two trends. Puerto Rican government debt has increased steadily over the past decade. Between 1995 and 2005, Puerto Rico’s real total public debt outstanding increased from $25.6 billion to $36.4 billion (see fig. 14 below). Most of Puerto Rican public debt is sold in the U.S. market, but the amount sold within Puerto Rico has increased steadily since 1999. In 2005 an estimated $31.6 billion was sold in the United States, and $4.8 billion was sold locally in Puerto Rico. In appendix II we include both the breakdown of debt payable by the government and debt issued by the government but repaid by others (such as the federal government or the private sector) because there are differences of opinion about what should be termed “government debt” (see figs. 58 and 59). An example of this type of debt is the series of bond issues linked to The Children’s Trust Fund between 2001 and 2005, all of which are backed by assets from the United States Attorney General’s 1999 Master Tobacco Settlement Agreement. Between 1995 and 2005, total debt issued by the Puerto Rican government, but payable by others, increased from an estimated $6.6 billion to an estimated $7.1 billion in 2005. Figure 15 shows the level and composition of gross investment spending in Puerto Rico from 1980 to 2005. During the recession of the early 1980s, investment fell below 10 percent of GDP by 1983. Thereafter, investment recovered and remained around 15 percent of GDP for a number of years until a period of rapid growth in largely private-sector investment in the late 1990s pushed the share close to 20 percent of GDP by 2000. Investment rates have fallen back to about 15 percent of GDP most recently. If Puerto Rico’s investment rate remains at recent levels, the gap between U.S. and Puerto Rican per capita incomes is unlikely to diminish. The U.S. investment rate, including both private investment and a measure of government investment, has been about 19 percent of GDP in recent years. Continuation of these relative investment rates implies that the per capita income gap is unlikely to narrow significantly, unless capital formation is augmented by increases in employment, education, training, or other types of productivity improvements. Figure 16 shows a breakdown of Census data on capital spending in the manufacturing sector for 1987, 1992, 1997, and 2002. The data show that investment in manufacturing dipped significantly between 1992 and 1997, before rebounding by 2002. 
This slump in investment does not appear in the Planning Board investment data for private sector investment shown in figure 15. The Planning Board data cover more sectors than do the Census data; however, investment in manufacturing should represent a substantial portion of the investment in private structures and machinery. Although both Census data on value added and Puerto Rican government data on domestic income show that the pharmaceutical industry has significantly increased its already dominant position in the manufacturing sector since the early 1990s, evidence suggests that income shifting within U.S.-owned corporate groups likely has resulted in overstatements of the importance of the manufacturing sector, as a whole, and the pharmaceutical industry, in particular, when measured in terms of value added or income. Unfortunately, it is difficult to know the extent of any overstatement in these economic variables. Evidence is mixed as to whether the extent of the overstatement increased as the pharmaceutical operations of possessions corporations were shifted over to other types of businesses. Other measures of economic activity, such as employment and capital spending, should not be affected by income shifting and, therefore, can be used to either support or challenge conclusions based on measures of value added and income. Census data on value added and Puerto Rican Planning Board data on domestic income both show steady and significant growth in the pharmaceutical industry. Figure 17 shows that value added in the pharmaceutical industry more than doubled in real terms from 1992 to 2002, while value added in all other manufacturing industries, as a whole, declined. Figure 18 shows that the chemical industry, which consists mainly of pharmaceuticals, saw its share of net manufacturing domestic income increase from around 50 percent in 1992 to over 60 percent in 2005. The strong reported performance of the pharmaceutical sector is the reason that the manufacturing sector has been able to slightly increase its share of domestic income, while the share of income of most other manufacturing industries has declined. Manufacturing’s share of income, shown in figure 19, greatly exceeds its share of employment, as shown in figures 23 and 24. Some of the difference may be attributable to a higher level of labor productivity in manufacturing than in other sectors. Recent research suggests, however, that reported levels of value added in Puerto Rican manufacturing are implausible. For example, the official data imply that labor’s share of value added in manufacturing fell from an average of 50 percent from 1950 to 1970 to only 14 percent in 2004. Similar declines are not evident in data for other sectors or in U.S. manufacturing statistics. Over the years, several analysts have concluded that the incentives provided by the possessions tax credit have led U.S. corporate groups to shift income to Puerto Rican affiliates. Until the mid-1990s, the credit essentially allowed profits earned from qualified Puerto Rican operations to be returned to the mainland free of federal tax (even when largely exempted from Puerto Rican income taxes). In addition, one option under the credit allowed the U.S. corporate parent to apply a 50-50 split of their combined taxable income from the sale of products to third parties if the products were derived from an intangible asset, such as a patent, invention, formula, or trademark. 
Although a substantial portion of this income can be attributed to manufacturing intangibles developed and owned by the U.S. corporate parent, there is no requirement that the allocation of income from such manufacturing intangible assets reflect where costs were actually generated, or where value was actually added to the products. Consequently, corporate groups that produced pharmaceuticals, or other products whose final values are largely based on the value of intellectual property, were given flexibility under the law to shift net income to the possessions corporations operating in Puerto Rico or another insular area. This shifting of income and value added to the Puerto Rican operations of possessions corporations is ultimately reflected in economic data compiled by the Puerto Rican government, which are based heavily on data drawn from samples of corporate tax returns, and possibly in data that Census collects in its surveys of employers for the economic censuses, if the economic data the employers provide are based on their tax accounts. The nature of income shifting changed significantly after 1995, when the phaseout of the possessions tax credit began. Some of the corporate groups that owned possessions corporations in Puerto Rico began to close or reduce operations in those corporations and shift production to CFCs located on the island. Corporate groups still have some incentives to retain operations in Puerto Rico rather than shift that production to the United States. First, Puerto Rico responded to the phaseout of the credit by increasing the generosity of its own tax incentives. Second, manufacturing income earned from an active trade or business by the CFCs is not subject to federal tax unless it is repatriated to the United States. A change in income shifting has also occurred because the rule for arbitrarily splitting net income 50-50 between Puerto Rican and U.S. operations does not apply to CFCs. Nevertheless, corporate groups may be able to shift income to Puerto Rico through the manner in which they set prices on goods and services transferred among affiliated corporations. Data from the last four economic censuses of manufacturing in Puerto Rico, presented in figure 20, show that value added per employee in the pharmaceutical industry was already at least twice as high as the ratio for all other industries in 1987 and 1992. The difference between the pharmaceutical industry and the other industries grew larger in 1997 and then widened dramatically by 2002. The 2002 figure of $1.5 million for value added per employee in Puerto Rican pharmaceutical manufacturing was three times as high as the ratio for the U.S. pharmaceutical industry for the same year. Moreover, while the U.S. ratio grew only 8 percent in real terms between 1997 and 2002, the Puerto Rican ratio grew by 65 percent over that same period. The data on value added per employee by type of business in figure 21 suggest that the sharp increase in that measure between 1997 and 2002 may have been a direct result of the shift in pharmaceutical operations from possessions corporations to CFCs. (These data are derived from a special research effort in which we obtained assistance from Census and IRS to aggregate data from the 2002 Economic Census of Puerto Rico by particular types of business entities, including possessions corporations and CFCs.)
The value added per employee of $4.2 million for pharmaceuticals CFCs incorporated outside of Puerto Rico was dramatically higher than for any other type of business in Puerto Rico. The next highest ratio was $1.6 million for pharmaceuticals CFCs incorporated in Puerto Rico, which was still considerably higher than the ratio of $0.9 million for possessions corporations in the pharmaceutical industry. That data, combined with the data in figure 20, suggest a significant change in transfer pricing by large pharmaceuticals groups, which makes it difficult to say how much of the strong reported growth in output and income in the Puerto Rican pharmaceutical industry, and in the manufacturing sector as a whole, represents an increase in actual economic activity. Data on rates of return on assets for possessions corporations and CFCs in the chemical industry do not confirm the conclusion that a dramatic change in income-shifting practices occurred as CFCs replaced possessions corporations in the industry. We used data from federal tax returns to compare various rates of return for CFCs and possessions corporations in the Puerto Rican chemical industry. The comparisons we were able to make for 1997 through 2001 did not show a consistent difference between the two types of corporations. The ratios of gross profits (the closest tax-data equivalent to value added) to total assets for CFCs were significantly higher than those for possessions corporations in both 1997 and 1999, but the ratios were very close together in 2001. We also compared the gross and net operating rates of return of the two types of corporations and found that neither type dominated the other one consistently across the years. The results of our analysis are presented in appendix IV. International trade plays a much larger role in the Puerto Rican economy than it does in the U.S. economy. While the output of an economy (GDP) depends on the difference between exports and imports (net exports), the size of exports and imports relative to GDP are indicators of the importance of trade to the economy. For the United States, exports of goods and services averaged about 10 percent of GDP between 1980 and 2005. Imports increased from about 10 percent of GDP in the early 1980s to about 16 percent of GDP in 2005. While potential distortions in trade data should be kept in mind, the share of exports and imports has been substantially greater in Puerto Rico. For Puerto Rico, the value of exported goods and services as a percentage of GDP grew from about 70 percent of GDP in the 1980s to about 80 percent in 2005. Imports fell as a share of GDP from about 70 percent to about 63 percent in recent years. As reported in the Puerto Rican national accounts, the value of pharmaceutical imports and exports increased substantially from 1996 to 2005. The value of imported pharmaceuticals increased from about 9 percent of all merchandise imports to about 33 percent during that period. As a share of GDP, the value of imported pharmaceuticals increased from about 4 percent to about 15 percent. The value of pharmaceutical exports rose rapidly as a share of merchandise exports—from about 27 percent to about 61 percent. As a percentage of GDP, the value of pharmaceutical exports rose from about 14 percent to about 42 percent. 
However, as noted above, a significant portion of the recorded increase in Puerto Rico’s trade surplus may reflect changes in transfer pricing, with artificially low values for Puerto Rico’s imports and high values for Puerto Rico’s exports, rather than increased activity. While the United States is the largest trading partner for Puerto Rico for exports and is a large source of Puerto Rican imports, the foreign country share of imports to Puerto Rico has been growing since 1995. In 2005, slightly less than half of the value of imports to Puerto Rico came from foreign countries. About 80 percent of Puerto Rico’s exports go to the United States. Puerto Rico’s overall trade surplus reflects a trade surplus with the United States as Puerto Rico exports more to the United States than it imports from the United States, and a smaller trade deficit with the foreign countries. Figure 22 shows the unemployment rates and labor force participation rates for the United States and Puerto Rico from 1980 to 2005. The unemployment rate has been significantly higher in Puerto Rico than in the United States, and the labor force participation rate has been much lower. Academics and economists from research institutions have offered several possible explanations for the relatively low labor force participation rate in Puerto Rico and attempted to determine which of these factors might be important. While the low labor force participation rate is seen as a crucial issue for the economic performance of Puerto Rico, there is no consensus on its cause. Possible explanations for the low labor force participation rate include the migration of Puerto Rican citizens with the most interest in participating in the labor force to seek higher wage employment in the United States, leaving residents that have relatively less attachment to the labor force; the fact that government programs that are in place, such as the Nutrition Assistance Program (NAP, the Puerto Rican food stamp program) and disability insurance, can discourage work, while the U.S. program that encourages labor force participation—the Earned Income Tax Credit—is not a part of the tax system in Puerto Rico; the fact that the U.S. minimum wage applies in Puerto Rico may discourage business demand for lower-skilled workers, who are likely to make up a larger share of the potential work force in Puerto Rico than in the United States; and that a relatively large share of Puerto Ricans work in the informal economy and that this work is not reflected in economic statistics. Regarding this last issue, analysts have raised issues with the quality of the Puerto Rican labor force survey, which is the data source for the unemployment rate and the labor force participation rate. The survey is designed to be similar to the U.S. Current Population Survey (CPS), from which the U.S. data are derived, but the questions regarding labor market activity in the surveys differed and the question asked by the Puerto Rico household survey may not have captured work activity in the informal sector of the economy as well as the question asked in the CPS. On the other hand, labor force participation as measured in the decennial census—which uses the same question as the CPS—has also been low and the estimate for 2000 was lower than the household survey estimate for that year. The Bureau of Labor Statistics (BLS) has been working with the Puerto Rican government to improve the household survey in several areas. 
In addition, labor force data for 2005 are scheduled to be reported for Puerto Rico as a part of the Census Bureau’s American Community Survey effort. Educational attainment can play an important role in developing labor market skills. Data on educational attainment in Puerto Rico are collected in the decennial census and can be compared to data for the United States. These data show that the gap in educational attainment between Puerto Rico and the United States narrowed significantly during the 1990s. Nonetheless, in 2000, 40 percent of the population over 25 in Puerto Rico had not finished high school, which is nearly double the U.S. share. At the same time, about 38 percent of adults reported having at least some college education (see table 4). Recent research concluded that there is a substantial mismatch between Puerto Rico’s industry structure and the educational achievement of its population. While the mean years of schooling among Puerto Rican adults was substantially below that of any state in the last three censuses, the average years of schooling of people typically employed by the industries operating in Puerto Rico exceeds that of at least two-thirds of the states. The researchers suggest that the Puerto Rican economy has failed to generate jobs that fit the educational qualifications of the Commonwealth’s population. In some sense, therefore, Puerto Rico’s “missing jobs” can be found in labor-intensive industries heavily reliant on less-educated workers. The authors conclude that the possessions tax credit and other federal tax incentives contributed to an industry structure that is poorly aligned with the sort of job opportunities needed by Puerto Rico’s population. Annual data on employment in Puerto Rico come from two sources: the Puerto Rico household survey and the BLS establishment survey. The Puerto Rico household survey has consistent sector definitions across time and includes the self-employed. The establishment survey data are limited to employees and reflect the new North American Industry Classification System industry definitions. In the figures that follow, we aggregated some of the industry categories and show the distribution of employment by sector. Both surveys show employment in Puerto Rico generally increasing since 1991 and show manufacturing employment declining since 1995. As shown in figure 25, data from the Census of Manufacturing for Puerto Rico for 1997 and 2002 also indicate a decline in manufacturing employment. Manufacturing employment fell by about 27 percent from 1995 to 2005, according to establishment survey data. Both the household and establishment data sources show that the government sector employs a large percentage of workers—about 23 percent in the household survey and about 30 percent in the establishment survey. For the United States, manufacturing employment has been falling, both in absolute numbers of employees and as a percentage of all employees. Between 1980 and 2005, manufacturing employment fell by about 4.5 million employees (about 24 percent). From 1995 to 2005, manufacturing employment fell by about 3 million employees (about 17 percent). As of 2005, manufacturing employees represented about 10.7 percent of all employees. Government employees constituted about 16 percent of total employees in the United States, down from about 18 percent in 1980.
Although the likely imprecision of price deflators for Puerto Rico leaves the exact growth rate of real per capita personal income there difficult to determine, the rate has not been sufficient to substantially reduce the gap between U.S. and Puerto Rican living standards. Puerto Rican per capita personal income is well below that in the United States (see fig. 26). As we did in comparing U.S. and Puerto Rican GDP and GNP, we adjusted aggregate per capita personal income data using both U.S. and Puerto Rican price deflators. The growth rate in per capita personal income is somewhat higher in Puerto Rico than in the United States when the U.S. deflator is used to adjust Puerto Rican per capita personal income for inflation. In this case, the average annual percentage increase in Puerto Rican per capita personal income was 2.1 percent, while U.S. per capita personal income rose by 2.0 percent on average per year. When the Puerto Rican deflator is used to make adjustments for inflation, Puerto Rican per capita personal income grew at a slower rate (1.1 percent) than in the United States (2.0 percent). The difference arises because the U.S. price deflator increased less than the Puerto Rico price deflator. Using both price indices serves to illustrate the sensitivity of the calculation to the index used. In addition, private income transfers made to Puerto Rican residents by emigrants now living in the United States may be understated, which would lead to an understatement of Puerto Rican personal income. As U.S. citizens, Puerto Ricans are free to migrate to the mainland United States and return as they wish. According to Census estimates, net migration from Puerto Rico to the United States in the 1980s totaled about 126,000. During the 1990s, net migration was estimated to be about 111,000. Census data show the distribution of income in Puerto Rico and the United States and the percentages of individuals and families with incomes below official poverty lines. The median household income in 1999 was $41,994 in the United States and $14,412 in Puerto Rico. In 1999, 48.2 percent of households in Puerto Rico had incomes below the poverty level, which was nearly four times the U.S. share, as shown in table 5. As the disparity between average incomes in the United States and Puerto Rico suggests, a much higher percentage of Puerto Rican households is in the lower income categories. In 1999, only about 10 percent of U.S. households had annual incomes below $10,000, compared to 37 percent of Puerto Rican households (see table 6). The distribution of income is more unequal in Puerto Rico than in the United States. In most economies, a small share of households receives a disproportionately large share of income. As a result, the ratio of mean to median household income exceeds 1.0. As an indication of the greater degree of income inequality in Puerto Rico, the ratio of mean to median household income in 1999 was 1.69 in Puerto Rico compared to 1.35 in the United States. Possessions corporations have played an important role in the Puerto Rican economy, particularly in the manufacturing sector, where they accounted for well over half of value added throughout the 1990s. Most of the possessions tax credit and income earned by possessions corporations in Puerto Rico has been earned by corporations in the pharmaceutical industry.
Once the possessions tax credit was repealed, many of the large corporate groups that owned possessions corporations in Puerto Rico began to shift their operations to other types of business entities. Although the various tax and economic census data that we present in this chapter have significant limitations, we believe that, together, they form the basis for a reasonably accurate picture of the broad changes that have occurred in Puerto Rico’s manufacturing sector over the past two decades. Those data indicate that much of the decline in activity of possessions corporations in the manufacturing sector was offset by the growth in other corporations, so that some measures of aggregate activity remained close to their 1997 levels. For example, value added in manufacturing remained fairly constant between 1997 and 2002. Most of the offsetting growth was concentrated in the chemical industry, which is dominated by pharmaceuticals. Possessions corporations continued to dominate Puerto Rican manufacturing through the mid-1990s, despite the legislative changes that made the possessions tax credit significantly less generous after 1993. According to the 1992 Economic Census of Puerto Rico Manufacturing, these corporations accounted for 42.2 percent of employment and 64.3 percent of value added in the manufacturing sector (as seen in fig. 27). By the next economic census in 1997, possessions corporations’ share of value added had increased to 72 percent, while their share of employment remained little changed at 40.8 percent. This pattern of growth up to 1997 is also apparent in the data from the federal tax returns of possessions corporations shown in figure 28. The aggregate total income, gross profits, and net income of possessions corporations operating in Puerto Rico all increased slightly between 1993 and 1997 (after adjusting for inflation), although there was a small decline in the corporations’ total assets. The growth in possessions corporation activity occurred despite the limitations that Congress placed on the possessions tax credit after 1993 and a decline in the number of corporations claiming the credit. Figure 29 shows that those limitations significantly reduced the generosity of the credit. Possessions corporations earned about 20 cents of credit for each dollar of income they earned in 1993, but only half that amount by 1997. Over that period, the number of corporations claiming the credit for operations in Puerto Rico fell from 378 to 291, and the amount of credit claimed declined from $5.8 billion to $3.2 billion. The decline in possessions corporation income, value added, and employment began after the Small Business Job Protection Act of 1996, which placed additional limits on the amount of credit that corporations could earn and, more importantly, repealed the credit completely for tax years beginning after 1995, subject to a 10-year phaseout. The generosity of the credit reached a low of less than 7 cents per dollar of income by 1999. The number of corporations claiming the credit fell to 124 by 2003, and the amount of credit they claimed that year fell to $1.1 billion. Moreover, in contrast to the period leading up to 1997, the aggregate total income, gross profits, and net income earned by possessions corporations all declined by more than 50 percent between 1997 and 2003, while their total assets declined by almost 30 percent. The significantly decreased importance of possessions corporations is also apparent in the most recent economic census data (fig.
27), showing that these corporations accounted for only 26.7 percent of manufacturing value added and only 31.8 percent of manufacturing employment in 2002. Most of the possessions tax credit and income earned by possessions corporations in Puerto Rico has been earned by corporations in the pharmaceutical industry. Figure 31 shows that pharmaceuticals corporations earned over half of all the credit earned each year from 1995 through 2003. Figure 32 shows that these corporations earned an even larger share of the aggregate gross profit earned by possessions corporations in each of those years. Manufacturers of beverages and tobacco products, medical equipment, and computers, electronics, and electrical equipment were also heavy users of the credit during this period, though not nearly to the same extent as pharmaceuticals manufacturers. Both of these figures are based on data for possessions corporations in the 77 largest corporate groups operating in Puerto Rico. (See the following section.) Parent corporations have a number of options for conducting business in Puerto Rico if they wish to do so after termination of the possessions tax credit. Large corporate groups are believed to have used at least four different approaches to rearranging their overall corporate structure (including the possessions corporation and their Puerto Rican operations) in anticipation of termination of the possessions tax credit. The U.S. federal tax consequences of these approaches vary as follows: The possessions corporation loses its 936 status but remains a subsidiary incorporated in the United States and is consolidated into its parent’s federal tax return. The parent corporation includes the relevant income and expenses of the subsidiary when computing its own federal taxes. Tax attributes, such as carryovers of certain accumulated losses, of the former possessions corporation would be governed by applicable IRS regulations and guidance. The possessions corporation liquidates into its parent (i.e., it no longer remains a separate corporate entity). Generally, if the parent satisfies certain ownership requirements, no gain or loss would be recognized to either the parent or the subsidiary for U.S. federal income tax purposes. The domestic parent would inherit and take into account certain items of the former possessions corporation, such as earnings and profits, net operating and capital loss carryovers, and methods of accounting. No foreign tax credit is allowed for any foreign taxes paid in connection with the liquidation, and the deduction of certain losses and other tax attributes may be limited. The possessions corporation is converted into or replaced by a CFC. This change can occur if the possessions corporation reincorporates and conducts business as a CFC; if it sells or contributes most of its assets to a CFC; or if it winds down its operations as its parent corporation starts up a new CFC to operate in Puerto Rico. Any income that the replacement CFC earns from the active conduct of business in Puerto Rico or elsewhere outside of the United States generally is not taxed until it is repatriated to the U.S. shareholders in the form of dividends. A number of tax consequences arise in cases where the possessions corporation actually reincorporates as a CFC. There are also significant tax issues (discussed further below) relating to the transfer of assets (through either a contribution or a sale) from possessions corporations to CFCs. 
The possessions corporation is converted into or replaced by a limited liability company (LLC) or partnership. An LLC can elect to be treated as a corporation, as a partnership, or as a “disregarded entity.” If the LLC elects to be treated as a corporation, its net earnings would be reported either on its own return or, if it is required to file a consolidated return, on its parent’s return. If it chose partnership treatment, the LLC itself would generally not be subject to federal income tax but its income, deductions, gains, and losses would be distributed to its members, who would include such amounts in calculating their federal income tax. If the LLC is treated as a disregarded entity, its income, deductions, gains, and losses are included on the member’s federal tax return. Parent corporations could substantially change the manner in which income from their Puerto Rican business operations was treated for federal tax purposes even without making a formal change in the legal status of their possessions corporations. The parents could simply reduce production by their possessions corporations and start up or expand production in other forms of businesses operating in Puerto Rico. We used tax return data from both IRS and the Treasury of Puerto Rico to track changes in the activity of possessions corporations, as well as to assess the extent to which declines in that activity have been offset by increases in the activity of affiliated businesses operating in Puerto Rico. In order to make this assessment for a particular group of affiliated corporations, we needed to examine data for each member of the group that had operations in Puerto Rico. Given that considerable effort was required to identify the group members that operated in Puerto Rico, we limited our review to the 77 largest groups, each of which included at least one possessions corporation between 1993 and 2001. These 77 large groups accounted for over 92 percent of the credit and income earned by possessions corporations in every year from 1993 through 2001 and for over 91 percent of the assets owned by such corporations in each of those years. The large groups included a total of 172 possessions corporations that we tracked between 1993 and 2003. The number of possessions corporations that these 77 large groups owned and operated in Puerto Rico declined from a high of 146 in 1995 to 58 by 2003. As of 2001, these groups also conducted operations in Puerto Rico through 49 CFCs and at least 28 other businesses. Fourteen of the groups operated both possessions corporations and CFCs in Puerto Rico in 2001. In the following section we report on trends in the income and assets of these large corporate groups. The popular choice of replacing the operations of possessions corporations with CFCs offers long-term tax benefits but could entail high initial tax costs for some corporations. Many corporate groups have chosen to operate in Puerto Rico through CFCs, possibly to take advantage of the federal tax deferral on income earned there. Some may have rejected this choice because their possessions subsidiaries owned valuable intangible assets, such as drug patents or food recipes, and the transfer of these assets to a non-U.S. entity, such as a CFC, could have been treated as a taxable exchange, possibly resulting in a substantial, one-time tax liability. Affiliated groups can avoid this tax if they keep the intangible assets in their U.S. firms, rather than transferring them to their new CFCs.
However, in order for those CFCs to use those intangibles in their production processes, they must pay royalties to the U.S. owners, and those royalties are subject to federal income tax. IRS officials have expressed concern that the repeal of section 936 has not had its intended effect. Congress repealed section 936 because it was viewed as providing an overly generous tax benefit to taxpayers with operations in Puerto Rico. However, IRS officials believe that despite the repeal of section 936, many taxpayers with operations in Puerto Rico could be incurring approximately the same or even lower tax liabilities than they did under section 936 by restructuring their activities through CFCs. Taxpayers who converted into CFCs may have avoided the tax consequences typically associated with such a conversion, namely, tax liabilities arising from the transfer of intangibles from possessions corporations to CFCs or a significant increase in royalty payments from Puerto Rico. One private sector tax expert familiar with the practices of U.S. businesses operating in Puerto Rico could not recall any case in which a taxpayer reported a transfer of intangibles of any significant value from a possessions corporation to a CFC. The expert also told us that the reason the IRS has not seen a notable increase in royalty payments from CFCs to U.S. firms holding intangibles is that, well before the expiration of the possessions tax credit, corporate groups had their existing or newly formed CFCs enter into research cost-sharing arrangements with their possessions corporations so that they would be codevelopers of new intangibles and, thereby, would have certain ownership rights to use the technology without paying royalties. The groups also tried to involve their CFCs as much as possible in the development of new products through other arrangements, such as research partnerships with unrelated technology-developing firms. A combination of tax return and economic census data indicates that the decline in income and value added of possessions corporations between 1997 and 2002 has been largely offset by increases in the income and value added of affiliated corporations, leaving aggregate income and value added roughly constant. Although some evidence of a change in income-shifting behavior by these corporate groups makes it difficult to say how accurately trends in reported income and value-added data represent trends in actual economic activity in Puerto Rico, data on employment, capital expenditures, and total assets (which should not be distorted by income shifting) support the conclusion that a substantial amount of possessions corporation activity has been continued by other types of businesses. However, most of this continued activity is concentrated in the pharmaceutical industry, and the decline in possessions corporation activity in other industries has not been offset. None of the data we present address the question of what corporate activity would have taken place during this period if the possessions tax credit had not been repealed. Tax return data on the affiliated corporate groups that have claimed almost all of the possessions tax credit indicate that between 1997 and 2001 at least a large portion (and possibly all) of the decline in reported incomes of possessions corporations operating in Puerto Rico was offset by increases in the reported incomes and total assets of affiliated corporations operating in Puerto Rico, particularly those of CFCs. 
The offset left the income that these groups earned in Puerto Rico roughly the same in 2001 as in 1997. This finding is consistent with data on value added in manufacturing from recent economic censuses of Puerto Rico. Gross profit, which equals income from sales minus the cost of goods sold, is the income measure from tax returns that is closest in definition to the value-added measure from census data that we presented earlier. Both of these measures may be distorted by income shifting, as we explain in the next section; however, value added is considered to be the best measure of the economic importance of manufacturing activity. We examined data for both of these measures, as well as other measures not distorted by income shifting, to assess the extent to which possessions corporation activity has been replaced by the activity of other types of businesses. Figure 33 shows that the aggregate gross profit of the possessions corporations in our 77 large groups peaked at $28.8 billion in 1997 and then fell to $11.4 billion by 2003. The figure also presents our “lower-bound” estimates for the amount of gross profits from Puerto Rico that CFCs reported. These estimates include only the profits of those CFCs for which we had Puerto Rican tax returns or that appeared to have operations only in Puerto Rico because those are the cases where we can be the most confident that our figures represent profits attributable only to Puerto Rican operations. The gross profits of those CFCs grew from $2.4 billion to $7.1 billion between 1997 and 2001. These estimates are likely to represent a lower bound for the amount of CFC profits in Puerto Rico because they do not include any of the profits for CFCs whose income was difficult to allocate between Puerto Rico and other locations. We present alternative estimates, labeled “CFC total if allocated by tax ratio,” of the gross profits from Puerto Rico of all of the CFCs in our large groups. These more comprehensive estimates are not likely to be very precise, but they are consistent with some of the census data that we present on CFCs in chapter 5. The estimates show CFC gross profits growing from $3.0 billion to $11.5 billion between 1997 and 2001. Finally, figure 33 also shows the gross profits reported on Puerto Rican tax returns by members of the 77 large groups, other than possessions corporations and CFCs. The gross profits of these businesses increased from $3.0 billion to $7.0 billion between 1999 and 2001. The data in figure 33 indicate that much of the $10.7 billion decline in the gross profits of possessions corporations between 1997 and 2001 was offset by increases in the profits of affiliated corporations. The lower-bound estimates for CFCs grew by $4.7 billion over that period, while the profits of the other affiliates, including LLCs, grew by $3.9 billion between 1999 and 2001. The combined profits of these two sets of businesses, therefore, grew by about $8.7 billion. If we use the “tax ratio” estimate for all CFCs, the combined growth in profits grew by about $12.5 billion. The gross profit of the “other affiliated” businesses is likely to be understated relative to those of the possessions corporations because of differences in the income definitions used for federal and Puerto Rican tax purposes. For those possessions corporations for which we had both federal and Puerto Rican returns, the gross profit from the Puerto Rican return averaged about 70 percent of the gross profit on the federal return. 
For this reason figure 33 may understate the extent to which the decline in possessions corporations’ Puerto Rican operations has been offset by these other affiliates. Data from recent economic censuses on value added in Puerto Rican manufacturing lend additional support to the conclusion that we draw from figure 33—that much, if not all, of the decline in income of possessions corporations in Puerto Rico between 1997 and 2001 was offset by increases in the incomes of other types of businesses. Figure 34 shows that value added by possessions corporations in Puerto Rican manufacturing followed roughly the same pattern as the gross profits data presented in figure 33; it also shows that other types of businesses made up for approximately all of the possessions corporations’ decline between 1997 and 2002. The extent to which the decline in income and value added of possessions corporations was offset by the growth of their affiliates varied significantly by industry. Figure 35 decomposes the last two columns of figure 34 into the chemical industry (which includes pharmaceuticals) and all other manufacturing industries. It shows that a significant drop in the value added of possessions corporations in the chemical industry was more than offset by the substantial growth in value added by other types of businesses. In contrast, the value added of both possessions corporations and all other types of businesses declined between 1997 and 2002 in the remainder of the manufacturing sector, outside of chemicals. Our tax data for large corporate groups showed similar variation across industries. The corporate groups in the chemicals and medical equipment industry grouping offset a larger proportion of the decline in the income of their possessions corporations between 1997 and 2002 with income from other types of affiliates operating in Puerto Rico than was the case for large corporate groups as a whole. Trends in the income of possessions corporations in the other two industrial groupings that we are able to present with our tax data—computer, electronics, and electrical equipment; and food and kindred products—were somewhat erratic between 1993 and 2001 before declining by 2003. There was negligible or no growth in the incomes of CFCs and other types of businesses in these two industrial groupings during the period we could observe between 1997 and 2002. (See tables 17 and 18 in app. IV.) As we explained in chapter 3, the data on income and value added for members of large corporate groups operating in Puerto Rico may be distorted by changes in the income reporting practices of these groups during the late 1990s. For this reason it is difficult to know how accurately trends in reported income and value added represent trends in actual economic activity in Puerto Rico. Nevertheless, data on capital expenditures, total assets, and employment (which should not be distorted by income shifting) support the conclusion that a substantial amount of possessions corporation activity has been continued by other types of businesses. Much of this continued activity is concentrated in the chemical industry, which is dominated by pharmaceutical producers. The economic census data on capital expenditures on manufacturing plant and equipment in figure 36 show that this investment increased dramatically between 1997 and 2002 after having dropped from 1992 to 1997. 
We cannot divide this time series of capital spending data between possessions corporations and other forms of business; however, figure 36 shows that most of the spending increase was in the pharmaceutical industry, which was the source of about two-thirds of total possessions corporations profits in 1997. Consequently, it appears that any overall decline in possessions corporations’ capital spending that may have occurred since 1997 must have been more than offset by the investment of other businesses. The tax data for our 77 large corporate groups show that the $12.1 billion decline in the total assets of the possessions corporations in these groups between 1997 and 2001 was largely offset by an increase of at least $9.4 billion in the total assets of affiliated corporations operating in Puerto Rico (see table 15 in app. IV). The decline in assets may have been more than fully offset, depending on the growth in the Puerto Rican assets of the CFCs that we were not able to include in our estimates. However, as was the case with income and value added, there were significant differences across industries behind the trends for the manufacturing sector as a whole. The decline in assets of possessions corporations in the chemical and medical equipment industries between 1997 and 2001 was more than offset by the increased assets of their affiliates even if we use just our lower-bound estimates for CFCs. In comparison, a little over half of the decline in possessions corporations’ assets in the computer, electronics, and electrical equipment industries between 1997 and 2001 was offset by the growth in affiliated CFCs’ assets. (See tables 16 and 17 in app. IV.) The economic census data on employment in Puerto Rico’s manufacturing sector in figure 37 shows that the decline in employment by possessions corporations between 1997 and 2002 was not as drastic as the declines in their profits or value added over that period (shown previously in figs. 33 and 34); however, there was no offsetting increase in overall employment by other types of manufacturing firms. Figure 38, which decomposes the last two columns of figure 37 into the chemical industry and all other industries, shows that employment by possessions corporations in the chemical industry did, in fact, fall sharply between 1997 and 2002, but other types of businesses in the industry more than made up for that decline. In the remaining industries as a whole, there was a smaller percentage decrease in employment by possessions corporations but there was also a decrease, rather than an offsetting increase, in the employment by other types of businesses. The chemical industry is much less important in terms of overall employment in manufacturing than it is in terms of value added. For this reason the continued strength of that industry was not enough to prevent an overall decline in manufacturing employment. U.S.-owned businesses accounted for at least 71 percent of value added and at least 54 percent of employment in Puerto Rico’s manufacturing sector in 2002. CFCs produced most of this value added but possessions corporations still accounted for most of the employment by U.S. firms. The CFCs are particularly important in the pharmaceutical industry and much less so in other manufacturing industries. U.S. corporations appear to account for less than 25 percent of employment in Puerto Rico’s wholesale and retail trade sectors, where local corporations are the most important employers. 
Similarly, U.S.-owned corporations are not the majority employers in any of the large Puerto Rican service industries for which data are available. As of 2002, U.S. CFCs accounted for 42 percent of value added in Puerto Rico’s manufacturing sector—a larger share than that of any other type of business entity (see fig. 39). Possessions corporations had the next largest share of value added with 27 percent, and other U.S. corporations accounted for 2 percent of the total. Together, these three types of businesses produced at least 71 percent of total manufacturing value added. A small number of U.S.-owned or U.S.-incorporated businesses may be included in the category “corporations of type unknown,” but we believe that most of the data for that category (in all of the figures in this chapter) are attributable to corporations that are not incorporated in the United States and are not CFCs. Possessions corporations remained the largest single type of employer, with 31 percent of the sector’s total employment (see fig. 40). Despite their large share of manufacturing value added, CFCs had a relatively small share—14 percent—of the sector’s total employment, which resulted in the extraordinarily high ratios of value added per employee that we discussed earlier. In contrast, other U.S. corporations and corporations incorporated in Puerto Rico had significantly larger shares of total employment than they did of value added. A little less than two-thirds of the CFCs’ value added and half of their employment is attributable to CFCs incorporated outside of Puerto Rico. This distribution of value added is similar to the estimated distribution of gross profit between the two types of CFCs, based on the tax data for our 77 large corporate groups for 2001. The estimates presented in figure 41 are based on our tax ratio approach for attributing portions of the income of multilocation CFCs to Puerto Rico. The estimates indicate that 70 percent of the gross profit and 73 percent of net income that CFCs earned in Puerto Rico in 2001 were earned by CFCs incorporated outside of Puerto Rico. Using the tax data, we estimate that more than three-quarters of the total gross and net income earned by the CFCs incorporated outside of Puerto Rico in 2001 is attributable to CFCs incorporated in the Cayman Islands, Ireland, the Netherlands, and the U.S. Virgin Islands. A comparison of figures 42 and 43 shows that the value added of CFCs in 2002 was concentrated in the pharmaceutical industry. These firms accounted for over half of the value added in that industry, or almost three times as much as the value added of possessions corporations. In contrast, CFCs accounted for only 13 percent of the value added in all of the remaining manufacturing sectors, where possessions corporations still dominated with a 48 percent share. At this more specific industry level of data, Census nondisclosure rules prevent us from providing as much detail about other forms of businesses. We needed to add pass-through entities into the “all other and unknown” category. However, from table 20 in appendix V, we do know that between approximately 80 percent and 90 percent of the employees of these entities were concentrated in two industries—pharmaceuticals and medical equipment—and that between 25 percent and 63 percent of these employees were in each of these industries. 
If the value added of these entities was distributed across industries in approximately the same manner as their employment, then pass-through entities would have accounted for between 3 percent and 7 percent of value added in pharmaceuticals. Data in table 20 of appendix V show that possessions corporations and CFCs were approximately equal in importance in terms of employment in the pharmaceutical industry in 2002 and, together, they accounted for 61 percent of the industry’s employment. The data also show that possessions corporations accounted for a little over a quarter of total employment in all other manufacturing industries, while CFCs accounted for only 9 percent. U.S. CFCs and businesses incorporated in the United States together accounted for less than a quarter of total employment in the Puerto Rican wholesale trade sector and, as figure 44 shows, about half of their employment was in corporations other than CFCs or possessions corporations. Corporations in the unknown category, which we believe to be largely ones that are not incorporated in the United States or owned by U.S. parent corporations, were by far the largest employers in wholesale trade in 2002, as shown in figure 44. Figure 45 indicates that this employment distribution was similar for the retail trade sector. The primary difference between the two sectors is that possessions corporations played no role at all in retail trade and sole proprietors played a more important role in that sector than in wholesale trade. The distributions of payroll across entities in these two sectors largely mirror the distributions of employment (see table 17 in app. V). In general, possessions corporations and CFCs played minor roles as employers in Puerto Rico’s service sector. The 2002 Economic Census of Island Areas compiled data for 11 service industries, as well as the mining, utilities, and transportation and warehousing sectors in Puerto Rico. Table 7 shows the distribution of employment across types of businesses for the six largest services (in terms of employment) covered by the census. Appendix V tables 25–27 show the distribution of employment, sales, and payroll for all 11 service industries and the three other sectors. CFCs accounted for 32.7 percent of employment in the information services industry (which includes telecommunications, broadcasting, publishing, motion pictures, and Internet services), but for no more than 5.1 percent in any of the other five large services. Possessions corporations accounted for 10 percent of employment in the accommodations industry but for no more than 2.4 percent in any of the other large services. Other U.S. corporations accounted for between 10 percent and 20 percent of employment in each of the six services. Most of the remaining employment in these large service industries is attributable to local corporations (in the type unknown group) and sole proprietors. The category “all other employers,” which includes nonprofit entities, accounts for up to 22 percent of total employment in healthcare services, which is the largest service industry. The taxes paid to all levels of government (federal, Commonwealth, and local) in Puerto Rico in 2002 were $3,071 per capita—considerably less than the per capita taxes of $9,426 paid in the states. However, the combined taxes paid by Puerto Rico residents amounted to 28 percent of their personal income, which was close to the 30 percent figure in the states. 
Puerto Rico’s outstanding government debt in 2002 was much higher than that of state and local governments as a share of personal income, partly because the Commonwealth government has a wider range of responsibilities. The amount of taxes that Puerto Rico residents paid per capita in fiscal year 2002 ($3,071) was about one-third of the amount paid by residents of the states ($9,426) (see fig. 46). The mix of the taxes was also quite different. While nearly 60 percent ($5,619) of the taxes paid by residents of the states were federal taxes, only about 25 percent ($760) of the total taxes paid by Puerto Rico residents were federal taxes because those residents generally are not subject to federal income tax on the income they earn in Puerto Rico. Data on federal taxes paid in the other insular areas are not available. Taxes paid by residents of the other insular areas to their own governments in 2002 amounted to $2,451 per capita—slightly higher than the $2,310 per capita that residents of Puerto Rico paid to the Commonwealth and municipal governments. The location where a tax is paid is not necessarily the same location as where the economic burden of the tax falls. The data we present in this chapter pertain to the former. Comparing the taxes Puerto Rico residents paid to the average of the five states whose residents paid the least total taxes, we found that Puerto Rico residents paid about 54 percent of the amount paid by these state residents ($5,713). The average percentage of taxes paid in these same five states that were federal taxes was nearly 47 percent ($2,705), still nearly double the percentage for Puerto Rico. The average per capita amount of taxes paid in the five highest tax states was $15,491—five times the per capita tax in Puerto Rico. Taxes as a share of personal income are about the same in Puerto Rico and the states, which is not surprising because Puerto Rico’s income per capita is so much lower. Taxes paid in Puerto Rico amounted to 28 percent of the Commonwealth’s personal income, while those paid in the states amounted to 30 percent of aggregate state personal income. Taxes in the five lowest- tax states were an average of 23 percent of the states’ aggregate personal income, while those in the five highest-tax states averaged 39 percent. (See table 28 in app. VI for additional detail.) As shown in figure 48, about 75 percent of the taxes paid in Puerto Rico are levied by the Commonwealth and municipal governments. The property tax and gross receipts tax imposed by the municipal government accounted for a little over 17 percent of taxes paid with the remainder going to the Commonwealth government. Commonwealth income taxes accounted for 41 percent of total taxes with slightly more than half of that being paid by resident individuals. Sales and excise taxes represented 23 percent of the total. Data available from IRS for Puerto Rico and the states do not separate federal individual income tax payments from payments of federal employment taxes, such as those for Social Security, Medicare, and unemployment compensation; however, most of the tax shown for that combined category in figure 48 should be employment taxes because most residents of Puerto Rico pay little, if any federal income tax. Even less federal estate, gift, or excise tax is paid in Puerto Rico. Federal excise taxes on goods manufactured in Puerto Rico and sold in the states are transferred to the Commonwealth and more than offset any federal excise tax on products consumed there. 
(In figure 48, the shares shown for federal estate and gift taxes round to 0 percent.) In contrast to the case of Puerto Rico, more than half of the taxes paid in the states go to the federal government, which provides a larger range of services to the states than it does to the Commonwealth. Federal individual income and employment taxes accounted for 56 percent of the taxes paid, while federal estate, gift, and excise taxes amounted to an additional 3 percent, resulting in a combined federal share of 59 percent (see fig. 49). When the 10 percent of taxes paid in the form of state and local income taxes are added to the 56 percent that go to federal individual income and employment taxes, the resulting 66 percent share is almost equal to the 67 percent share in Puerto Rico for this same group of taxes. Of the remaining total, state and local property taxes and “other” revenues (including lotteries and licenses) account for greater shares of the total taxes paid in the states than they do in Puerto Rico, while sales and excise taxes represent a smaller share. The amount of Puerto Rican government-issued debt outstanding as of 2002 was slightly higher in per capita terms, but much higher as a share of personal income, than was state and local government-issued debt. As shown in figure 50, the outstanding amount of Puerto Rican government debt per capita in 2002 was about $7,580, compared to a national average of $5,820 for state and local government-issued debt. The per capita debt of the governments of the other insular areas in 2002 was about $5,690. Although all of this debt was issued by the respective governments, some of it is directed to private use and will be paid back by targeted beneficiaries. About 16 percent of Puerto Rico’s government debt fell into this “private use” category, compared to about 23 percent for state and local government debt. The states and insular areas receive funds from the federal government in the form of grants, direct aid, loans, and insurance and procurement payments (see table 8). Federal grants and payments to the Puerto Rican government in 2002 amounted to $1,242 per capita, about the same as the $1,264 per capita paid to all state and local governments in the states, but less than the $1,703 per capita paid to the other insular area governments. The $2,057 per capita of direct federal payments to individuals in Puerto Rico was well below the $3,648 per capita paid to state residents, but higher than the $1,418 per capita paid to residents of the other insular areas. The following chapter and appendix VII provide detailed information on the amount of spending for specific federal social programs in Puerto Rico, the states, and other insular areas and describe similarities and differences in the operation of these programs in the various locations. The per capita federal payments of $336 for salaries, wages, and procurement in Puerto Rico were about 20 percent of payments for those purposes in the states and the other insular areas. Some federal funds that Puerto Rico received as grants and direct payments were in the form of a rebate on customs duties and a cover over of excise taxes collected on rum. These funding sources are not available to the states, the District of Columbia, or most of the other insular areas; the U.S. Virgin Islands is the exception. On a per capita basis, the U.S. 
Virgin Islands received a larger rebate payment than Puerto Rico and a larger cover over payment than Puerto Rico (see table 9). Like the states, Puerto Rico and the other U.S. insular areas receive federal funds for a variety of social programs—including federal housing assistance, education, and health care financing programs—which provide assistance to elderly and needy families and individuals. Generally, the social programs we examined in these areas targeted similar populations and delivered similar services—although Puerto Rico and the other insular areas did not always do so through the program as it exists in the states (see table 10). For example, in lieu of the Food Stamp Program available in the states, which is an entitlement program based on the number of participants, Puerto Rico receives a capped block grant that has similar eligibility requirements. The major difference between some of the social programs we examined in the states versus those in Puerto Rico and the other insular areas is how they are funded. For example, where federal Medicaid spending is an open-ended entitlement to the states, it is subject to a statutory cap and a limited matching rate in Puerto Rico and the other insular areas. Some of the social programs and housing programs that we examined are available in the states, but are not available in some of the insular areas. More detailed information on how each of the programs is applied in the insular areas and the states can be found in appendix VII.
The federal possessions tax credit, which was designed to encourage U.S. corporate investment in Puerto Rico and other insular areas, expires this year. Proponents of continued federal economic assistance to Puerto Rico have presented a variety of proposals for congressional consideration. In response to a request from the U.S. Senate Committee on Finance, this study compares trends in Puerto Rico's principal economic indicators with those for the United States; reports on changes in the activities and tax status of the corporations that have claimed the possessions tax credit; explains how fiscal relations between the federal government and Puerto Rico differ from the federal government's relations with the states and other insular areas; and compares the taxes paid to all levels of government by residents of Puerto Rico, the states, and other insular areas. GAO used the latest data available from multiple federal and Puerto Rican government agencies. Data limitations are noted where relevant. Key findings are based on multiple measures from different sources. GAO is not making any recommendations in this report. In comments on this report, the Governor of Puerto Rico said the report will be useful for evaluating policy options. Puerto Rico's per capita gross domestic product (GDP, a broad measure of income earned within the Commonwealth) in 2005 was a little over half of that for the United States. Puerto Rico's per capita gross national product (GNP, which covers income earned only by residents of the Commonwealth) was even lower relative to the United States. Concerns about Puerto Rico's official price indexes make it difficult to say whether the per capita GNP of Puerto Rican residents has grown more rapidly than that of U.S. residents; however, the absolute gap between the two has increased. U.S. corporations claiming the possessions tax credit dominated Puerto Rico's manufacturing sector into the late 1990s. After the tax credit was repealed in 1996, beginning a 10-year phaseout period, the activity of these corporations decreased significantly. Between 1997 and 2002 (the latest data available), value added in these corporations decreased by about two-thirds. A variety of data indicates that much of this decline was offset by growth in other corporations, so that some measures of aggregate activity remained close to their 1997 levels. For example, value added in manufacturing remained fairly constant between 1997 and 2002. Most of the offsetting growth was in the pharmaceutical industry. Residents of Puerto Rico pay considerably less total tax per capita than U.S. residents. However, because of their lower incomes, they pay about the same percentage of their personal income in taxes. The composition of taxes differed between Puerto Rico and the states, with federal taxes representing a larger share of the total in the states. This difference reflects the facts that (1) residents of Puerto Rico generally do not pay federal income tax on income they earn in the Commonwealth and (2) the Commonwealth government has a wider range of responsibilities than do U.S. state and local governments.
The Small Business Jobs Act of 2010 defines qualified small business lending—as reported in an institution’s quarterly regulatory filings, also known as Call Reports—as one of the following: owner-occupied nonfarm, nonresidential real-estate loans; commercial and industrial loans; loans to finance agricultural production and other loans to farmers; and loans secured by farmland. In addition, qualifying small business loans cannot be for an original amount of more than $10 million, and the business may not have more than $50 million in revenue. The act specifically prohibits Treasury from accepting applications from institutions that are on FDIC’s problem bank list or have been removed from that list during the previous 90 days. The initial baseline small business lending amount for the SBLF program was the average amount of qualified small business lending that was outstanding for the four full quarters ending on June 30, 2010, and the dividend or interest rates paid by an institution are adjusted by comparing future lending against this baseline. Also, the institution is required to report any loans resulting from purchases, mergers, and acquisitions so that its qualified small business lending baseline is adjusted accordingly. Fewer institutions applied to SBLF than initially anticipated, in part because many banks did not expect demand for small business loans to increase. The institutions that applied to and were funded by SBLF were primarily institutions with total assets of less than $500 million. In addition, in our 2011 report, we found that Treasury’s lack of clarity in explaining the program’s requirements created confusion among applicants and that Treasury faced multiple delays in implementing the SBLF program and disbursing SBLF funds by the statutory deadline of September 27, 2011. We recommended Treasury apply lessons learned from the application review phase of SBLF to help improve its communications with SBLF participants and other interested stakeholders, such as Congress and bank regulators. In 2012, in response to our recommendation, Treasury officials said that they had enhanced their communication strategy with SBLF participants and stakeholders and developed written communication guidelines to provide for consistency, continuity, and validity. The amount of funding an institution received under the SBLF program depended on its asset size as of the end of the fourth quarter of calendar year 2009. Specifically, if the qualifying financial institution had total assets of $1 billion or less, it was eligible for SBLF funding that equaled up to 5 percent of its risk-weighted assets. If the qualifying institution had assets of more than $1 billion but less than $10 billion, it was eligible for funding that equaled up to 3 percent of its risk-weighted assets. In the case of bank or thrift holding companies, assets were to be measured based on the total combined assets of the insured depository institution subsidiaries, and risk-weighted assets were to be measured based on the combined risk-weighted assets of the insured depository institution subsidiaries. The SBLF program provided an option for eligible institutions to refinance preferred stock or subordinated debt issued to Treasury through CPP. 
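The asset-based funding limits and the four-quarter lending baseline described above amount to a simple calculation. The sketch below is illustrative only: the function and variable names are our own, and statutory details (such as the precise definition of risk-weighted assets) are simplified.

```python
# Illustrative sketch of the SBLF funding ceiling and lending baseline
# described above. Names are our own; statutory definitions are simplified.

def max_sblf_funding(total_assets: float, risk_weighted_assets: float) -> float:
    """Maximum SBLF investment for a qualifying institution, based on
    total assets as of the fourth quarter of calendar year 2009."""
    if total_assets <= 1_000_000_000:
        return 0.05 * risk_weighted_assets   # up to 5 percent of risk-weighted assets
    elif total_assets < 10_000_000_000:
        return 0.03 * risk_weighted_assets   # up to 3 percent of risk-weighted assets
    else:
        return 0.0                           # institutions of $10 billion or more were not eligible

def lending_baseline(quarterly_qsbl: list) -> float:
    """Baseline qualified small business lending: the average outstanding
    amount over the four full quarters ending June 30, 2010."""
    assert len(quarterly_qsbl) == 4
    return sum(quarterly_qsbl) / 4

# Example: a $900 million bank with $700 million in risk-weighted assets
ceiling = max_sblf_funding(900e6, 700e6)                     # $35 million
baseline = lending_baseline([180e6, 190e6, 200e6, 210e6])    # $195 million
```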
At the time of application, the institution was required to submit a small business lending plan to its regulator describing how the applicant’s business strategy and operating goals would allow it to address the needs of small businesses in the area it serves. Participating SBLF C-corporation banks and bank holding companies pay dividends of up to 5 percent per year initially to Treasury, with reduced rates available if they increase their small business lending. The initial dividend rate is based on the difference between the baseline level and the lending reported in the second calendar quarter preceding the SBLF closing date. Additionally, the dividend rate payable decreases quarterly as banks increase small business lending over their baselines. While the dividend rate was no more than 5 percent for the first 9 quarters (a little over 2 years), a bank could reduce the rate to 1 percent by generating a 10 percent increase in its lending to small businesses compared with its baseline. After 9 quarters, the dividend rate on the capital became fixed at the rate a participating bank was paying at that time if it had increased its small business lending; otherwise, the rate increased to 7 percent. After 4.5 years, the dividend rate on the capital increases to 9 percent for all banks regardless of a bank’s small business lending. For S-corporations and mutual institutions, the initial interest rate was at most 7.7 percent. The rate fell as low as 1.5 percent for the institutions that increased their small business lending by 10 percent or more from the previous quarter. For CDLFs, the initial dividend rate is 2 percent for the first 8 years. After 8 years, the rate increases to 9 percent if the CDLF has not repaid the SBLF funding. This structure is designed to encourage CDLFs to repay the capital investment by the end of the 8-year period. Treasury allows an SBLF participant to exit the program at any time, with the approval of its regulator, by repaying the funding provided along with dividends owed for that period. Under the act, Treasury has a number of reporting requirements to Congress related to SBLF: (1) monthly reports describing all of the transactions made under the program during the reporting period; (2) a semiannual report (for the periods ending each March and September) providing all projected costs and liabilities and all operating expenses; and (3) a quarterly report, known as the Lending Growth Report, detailing how participants have used the funds they have received under the program. SBLF participants had increased lending over baseline levels as of June 30, 2013, according to Treasury’s Lending Growth Report. Total qualified small business lending for SBLF participants—banks and CDLFs—increased by almost $10.4 billion over their aggregate baseline of about $36.5 billion. Bank participants increased their qualified small business lending by about $10.1 billion over a baseline of about $35.7 billion. CDLFs increased their qualified small business lending by about $256.3 million over a baseline of about $796.8 million. Of the 265 participating banks, 246 (93 percent) increased their qualified small business lending, and 45 of the 50 CDLFs (90 percent) did so as well. SBLF participants had made about $188 million in dividend or interest payments to Treasury as of June 30, 2013—$185 million from banks and $3 million from CDLFs. 
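The dividend rate rules for C-corporation banks described above can be summarized in a short sketch. This is an illustrative simplification: it encodes only the rate points stated in this report, omits the statute's graduated tiers between 5 percent and 1 percent during the first 9 quarters, and uses names of our own choosing.

```python
# Sketch of the SBLF dividend rate rules for C-corporation banks, using only
# the rate points described in this report. The graduated tiers between
# 5 percent and 1 percent are omitted; names and simplifications are ours.

def dividend_rate(quarters_since_closing: int,
                  pct_increase_over_baseline: float,
                  rate_at_quarter_9: float = 0.05) -> float:
    """Approximate annual dividend rate for a C-corporation SBLF bank."""
    if quarters_since_closing >= 18:          # after 4.5 years
        return 0.09                           # 9 percent for all banks
    if quarters_since_closing > 9:
        # Fixed at the quarter-9 rate if lending increased over baseline;
        # otherwise the rate rises to 7 percent.
        return rate_at_quarter_9 if pct_increase_over_baseline > 0 else 0.07
    # First 9 quarters: no more than 5 percent, and as low as 1 percent
    # for a 10 percent increase over baseline (intermediate tiers omitted).
    if pct_increase_over_baseline >= 0.10:
        return 0.01
    return 0.05

# Example: a bank 12 quarters after closing that raised lending 12 percent
# over baseline and was paying 1 percent at quarter 9 continues to pay 1 percent.
print(dividend_rate(12, 0.12, rate_at_quarter_9=0.01))  # 0.01
```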
As of June 30, 2013, SBLF participants had not missed any payments. Figure 1 shows the numbers of program participants in different dividend or interest rate categories. As of July 1, 2013, 16 participants with aggregate investments of $147.3 million had fully redeemed Treasury’s investment. The structure of the SBLF program is designed to encourage banks to repay the capital investment by the end of 4.5 years and CDLFs to repay at the end of 8 years. However, Treasury allows an SBLF participant to exit the program at any time. SBLF securities may be redeemed at any time subject to the submission of a formal request to Treasury. Treasury reviews the notice to determine whether it meets a number of conditions. The redemption amount must equal at least 25 percent of the original funding balance. The redemption date must be no more than 60 days and no fewer than 30 days from the date the notice is sent to Treasury. Payment of accrued dividends or interest for the current dividend period must be made in addition to the principal balance. Because exiting the SBLF program would affect a participant’s capital, banks that want to leave the program also need approval from their appropriate federal banking regulator. If the SBLF participant is a bank or thrift, upon receipt of a redemption notice, Treasury notifies the appropriate federal banking regulator to determine if it has any objection. The federal banking regulator then reviews the redemption request to analyze the impact of the redemption of SBLF securities on capital levels. The analysis bank regulators perform for SBLF redemptions is similar to the analysis they would undertake for other banks and thrifts engaging in activities that would reduce their capital. In general, federal banking regulators assess whether banks will have sufficient capital after redeeming their SBLF securities and whether the banks have the financial ability or capital strength to exit the SBLF program. The decision is based in part on the bank’s condition and the bank’s capital planning process, including performance and leverage ratios, classified assets, nonperforming loans, allowance for loan and lease losses, outstanding enforcement actions, and other factors. If the federal banking regulator does not object, Treasury proceeds with the redemption after verifying that the request meets the program’s redemption requirements. SBLF participants exited the program for a variety of reasons. Some participants shared with Treasury the reasons for leaving the program, though they were not required to do so. Four participants told Treasury that they wanted to redeem their SBLF securities prior to an acquisition by another bank. Another participant told Treasury it was redeeming SBLF securities due to a difficult economic environment and the potential impact of revised regulatory capital guidelines. Another participant said it wanted to eliminate ongoing dividend expenses and compliance costs associated with the SBLF program requirements. For those participants that did not provide a reason to Treasury, we contacted 10 former participants that had fully repaid their SBLF funds to ask why they left the program. We received responses from nine participants, and some gave multiple reasons for leaving. For example, eight participants told us that they exited the program to avoid dividend expenses and said that they could obtain less expensive funding elsewhere. 
Two participants told us that reporting and compliance were burdensome, and officials from two banks said that demand for small business loans was not as strong as they had expected. Factors that help to explain the variation in qualified small business lending across SBLF banks include the condition of banks’ loan portfolios, amount of net capital received from SBLF, and demand for credit, among others. In addition, our analysis of survey data indicates that credit is still difficult to obtain, although it has eased some compared with 2009. Further, while some banks confirmed to us that demand for credit has improved, some also said they face several challenges in increasing their qualified small business loan portfolios. To determine reasons for variation in small business lending among SBLF banks, we divided banks into four quartiles based on their level of qualified small business lending. We then analyzed financial conditions for each quartile to determine if differences in qualified small business lending could be explained in part by differences in financial conditions across the four quartiles. We also performed a regression analysis to simultaneously control for multiple aspects of banks’ financial condition, as well as to attempt to account for state-level economic conditions. As shown in figure 2, SBLF banks in the first quartile increased qualified small business lending 114.1 percent over their baseline, compared to 47.6 percent for banks in the second quartile, 25 percent for banks in the third quartile, and 9.1 percent for banks in the fourth quartile, as of June 30, 2013. Our analysis of the financial condition of SBLF banks found that banks in the quartiles with lower levels of qualified small business lending had more troubled loans compared to those in the quartiles with higher lending levels, which may have negatively affected their ability to lend. As illustrated in figure 3, as of June 30, 2013, the median Texas Ratios increased across the four quartiles, with the first quartile having the lowest median Texas Ratio. The Texas Ratio can indicate a bank’s likelihood of failure by comparing its troubled loans to its capital. The higher the ratio, the more likely the institution is to fail because more of its capital could be eroded by realized losses on these troubled loans. We also found the Texas Ratio was higher for all banks prior to SBLF funding, which is to be expected because capital received under SBLF would tend to decrease the bank’s Texas Ratio. In addition, we also reviewed CAMELS composite ratings for each quartile as another factor that may have contributed to variation in lending but found minimal differences across the quartiles. Institutions were generally satisfactorily rated when approved for the program in 2011 because institutions with a CAMELS rating of either “4” or “5” were not allowed to participate in the program. On average, banks across the four quartiles as of March 31, 2011, and June 30, 2013, had similar median and average CAMELS ratings, ranging from 1.81 to 2.0. See appendix I for other indicators of financial health we analyzed, including measures of equity, asset liquidity, reliance on wholesale funding, and troubled loans. Another factor associated with a bank’s ability to increase qualified small business lending in the program is the amount of net SBLF capital received. Our analysis revealed that banks in the third and fourth quartiles had the lowest amount of net SBLF capital received. 
This trend can be at least partly explained by the fact that 67 percent of banks in the third quartile and 60 percent of banks in the fourth quartile were CPP participants and may have received only enough SBLF capital to repay CPP funds (see fig. 4). We identified several cases where banks in the third and fourth quartiles that converted through CPP either did not receive enough SBLF capital to repay CPP or received just enough funding to break even. Hence, these banks did not receive any net SBLF capital to support increased qualified small business lending. For all banks, we found a significant difference among the quartiles in net capital received as a percentage of assets, as of March 31, 2011 (see fig. 5). Specifically, for the median bank in the first quartile, SBLF net capital represented 2.9 percent of total assets compared to 2.2 percent for the median bank in the second quartile, 1.3 percent for the median bank in the third quartile, and 0.87 percent for the median bank in the fourth quartile. These differences may help explain the lower lending of banks in the fourth quartile, which our analysis found to have the highest median asset size. These banks had less net capital to support increased qualified small business lending because they were more likely to have converted through CPP. Although SBLF banks may not have received a large amount of additional capital to increase lending, the capital these institutions received through CPP was also intended to increase lending. Our regression analysis indicates that lower growth in qualified small business lending was associated with more troubled loans, greater leverage, less net capital received from SBLF, and lower reported demand for small business loans. After controlling for net capital received, CPP status was no longer a statistically significant determinant of qualified small business lending growth. This result suggests that the lower lending at CPP institutions was driven in part by lower net capital rather than some other factor associated with prior CPP status that we did not observe. Our measure of state-level economic conditions—Gross Domestic Product growth—did not provide additional insight into any local economic conditions that might be driving differences in qualified small business lending. However, a participant-reported measure of demand for small business loans that was included in Treasury’s survey of SBLF participants was a significant predictor of growth. The magnitude of these various factors is summarized as follows: For every 14 percentage point increase in troubled loans (roughly one standard deviation) relative to capital and reserves, qualified small business lending growth decreased by 13 percentage points. For every 2 percentage points (roughly one standard deviation) more capital relative to assets (less leverage) prior to receiving SBLF capital, participants increased qualified small business lending by an additional 13 percentage points. A participant that received net SBLF capital of 1.9 percent (the mean) of assets relative to a participant that received no net capital (a potential outcome for participants that converted through CPP) increased qualified small business lending by an additional 18 percentage points. Participants that reported stronger demand for small business loans relative to those that reported weaker demand for small business loans increased qualified small business lending by an additional 24 percentage points. 
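The quartile comparisons described above can be sketched in a few lines of code. The sketch below is illustrative rather than a reproduction of our analysis: the data frame and column names are hypothetical, and the Texas Ratio here is simplified to troubled loans divided by capital plus loan-loss reserves.

```python
# Minimal sketch of the quartile comparison described above, assuming a
# pandas DataFrame with one row per SBLF bank. Column names are hypothetical.
import pandas as pd

def add_analysis_columns(banks: pd.DataFrame) -> pd.DataFrame:
    banks = banks.copy()
    # Growth in qualified small business lending relative to the bank's baseline
    banks["qsbl_growth_pct"] = (banks["qsbl_current"] - banks["qsbl_baseline"]) / banks["qsbl_baseline"] * 100
    # Simplified Texas Ratio: troubled loans relative to capital and reserves
    banks["texas_ratio"] = banks["troubled_loans"] / (banks["capital"] + banks["loan_loss_reserves"])
    # Net SBLF capital (after any CPP repayment) as a share of total assets
    banks["net_sblf_pct_assets"] = banks["net_sblf_capital"] / banks["total_assets"] * 100
    # Quartile 1 = highest lending growth, quartile 4 = lowest
    banks["quartile"] = pd.qcut(banks["qsbl_growth_pct"], 4, labels=[4, 3, 2, 1])
    return banks

def quartile_medians(banks: pd.DataFrame) -> pd.DataFrame:
    """Median indicators by lending-growth quartile, mirroring the comparison above."""
    banks = add_analysis_columns(banks)
    return banks.groupby("quartile")[["qsbl_growth_pct", "texas_ratio", "net_sblf_pct_assets"]].median()
```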
The SBLF program was designed to improve small businesses’ access to credit, which had become difficult to obtain after the onset of the 2007-2009 recession. Based on our review of survey data and interviews with officials from 10 SBLF banks, factors that continue to affect small business lending include credit conditions and demand for small business loans, among others. Our analysis indicates that credit is still difficult to obtain, confirming that the lending environment remains challenging, although it has eased somewhat compared with 2009. For example, according to the Wells Fargo/Gallup Small Business Index, as of the third quarter of 2013, 25 percent of businesses reported difficulty obtaining credit when they needed it compared to 33 percent in the third quarter of 2009. Similarly, the Federal Reserve Senior Loan Officer Opinion Survey on Bank Lending Practices in July 2013 showed that some large domestic banks had eased their credit standards. Specifically, 90 percent of respondents reported that credit standards to small firms remained basically unchanged, and 10 percent reported that credit standards to small firms had eased somewhat. Some banks surveyed cited more aggressive competition from other banks or nonbank lenders as an important reason for easing standards or terms on loans. The conditions reported in the July 2013 survey improved from those reported in July 2009, when banks indicated they had continued to tighten standards on all major loan types. Finally, the percentage of businesses whose borrowing needs were not satisfied declined from 10 percent in June 2009 to 5 percent in June 2013, while the percentage of businesses whose borrowing needs were satisfied was largely unchanged from June 2009 to June 2013, according to the Small Business Economic Trends survey from the National Federation of Independent Business (NFIB). Banks generally report stronger demand for credit from small businesses compared to 2009 conditions, which could help explain why SBLF banks were able to increase their lending, but weaknesses exist. For example, as of July 2013, 61.4 percent of respondents of the Senior Loan Officer Opinion Survey reported that demand for commercial and industrial loans for small firms remained unchanged, and 30 percent reported moderately stronger demand for these loans. Conversely, as of July 2009, 34 percent of respondents reported that demand for commercial and industrial loans for small firms remained unchanged, and 52.8 percent reported moderately weaker demand (5.7 percent reported moderately stronger demand). Based on our analysis of SBLF banks and the results of Treasury’s SBLF annual survey, banks with lower levels of qualified small business lending were more likely to report weaker demand (see fig. 6). Overall, 46 percent of respondents to Treasury’s survey reported stronger demand for credit compared to 14 percent reporting weaker demand. Respondents also reported a net increase in the number of inquiries from small business borrowers regarding the availability and terms of lending. A recent Federal Reserve study noted that many of the same factors have affected small businesses’ access to credit since the onset of the 2007-2009 recession. Specifically, the study reported that banks observed that small business owners are not expanding as a result of weak sales and earnings, among other factors. 
However, banks also viewed small businesses as less creditworthy because many small businesses lack the necessary collateral or cash flows, at the same time that banks have tightened their credit standards. We spoke with officials from 10 SBLF banks representing a range of success in increasing qualified small business lending; they cited several reasons for their success in increasing small business lending, as well as some challenges. Officials from five banks we interviewed across the quartiles attributed their lending growth to their ability to expand into new markets and hire additional loan officers. In addition, two banks cited strong loan growth in the agricultural, energy, and commercial and industrial sectors. Further, officials from some banks reported their customers are generally in better condition than they were a few years ago, but are uncomfortable taking on additional debt given the continued uncertainty around economic conditions and interest and tax rates. Officials from other banks cited difficulties competing with the interest rates offered by larger banks and their local counterparts. Specifically, officials from four banks told us that while there are more creditworthy borrowers, there are more banks competing for these same borrowers. We also heard from officials at one bank that some small businesses in the area are less creditworthy than they used to be as a result of highly leveraged owner-occupied commercial real estate. Officials from the banks we interviewed that had participated in CPP told us that they joined SBLF to obtain better dividend rates on the equity they received. One prior CPP participant told us that the bank’s shareholders had a negative opinion of their participation in CPP, but had not raised any concerns since the bank had redeemed its shares through SBLF. Treasury has taken steps to assess SBLF participants’ lending patterns, including collecting additional performance data and adopting additional analytical methods. Although actions taken to date do not represent an impact evaluation of SBLF, which would include analysis that isolates the impact of SBLF from other factors that affect small business lending, Treasury officials noted that they are exploring options to evaluate SBLF impact and are working with a contractor with expertise in statistical evaluation techniques. However, Treasury has not produced a written plan for completing the evaluation. By conducting an impact evaluation, Treasury could more effectively inform the public and Congress on how SBLF has affected the participants’ lending compared to other factors, such as economic conditions, and could better assist Congress in making future decisions about similar capital investment programs or alternatives to such programs that support small businesses’ access to capital. Treasury has taken steps to assess SBLF program performance (participants’ lending patterns) by including a peer-group analysis in the Lending Growth Reports and collecting additional performance information through annual surveys of program participants. In our December 2011 SBLF report, we recommended that Treasury finalize plans for assessing SBLF program performance. Treasury explored different comparison methods in its Lending Growth Reports and in its January 2013 report added a peer-group comparison that mirrored the characteristics of SBLF participants more closely than the comparison group Treasury had used in previous reports. 
Treasury officials stated that they have considered additional options to evaluate the performance of SBLF and have included some of these in the Lending Growth Report. For example, they have included information on increases in lending following investment, increases in lending by financial condition, and additional loans based on annual survey results. However, when they explored additional analyses using the peer-group comparison to review loan activity at the local level with census data, they found that this finer segmentation of loan data was not available. In addition, Treasury has collected additional performance information through an annual lending survey of SBLF participants. In 2012, Treasury initiated an annual survey of SBLF program participants and analyzed the survey results to obtain additional quantitative and qualitative data on how the program is performing. Treasury required that all active SBLF participants respond to the survey as part of the agreement to participate in the program. Treasury officials explained that while they collect information on the dollar amount of qualified small business loans at the end of each quarter, they intended to use the annual survey to supplement the quarterly data—such as by gathering additional information on the volume of the loans originated over time—and to provide additional information on how the participants are using SBLF funds. The survey covered several topics for the period from July 1, 2011, to June 30, 2012: changes in participants’ small business lending standards and demand; obstacles to increasing small business lending; actual increases in small business lending by industry sector and number of loans; actions associated with the use of SBLF funds, including the projected total increase in small business lending over 2 years; and small business outreach activities required by the Small Business Jobs Act of 2010. After receiving the survey responses from SBLF participants, Treasury analyzed the results of the SBLF annual survey and reported the aggregate results in June 2013. The survey results report (1) provides Treasury’s analysis of the aggregate survey results; (2) includes the results of each survey question; (3) discusses the methods Treasury followed to validate and review individual responses to each survey question for completeness and reasonableness; and (4) discusses the general methods Treasury used to review the aggregate survey results for reasonableness, including describing examples of the specific analyses for 9 of the 14 survey questions. For example, the report states that Treasury compared aggregate responses to six questions with similar questions related to credit standards and loan demand for commercial and industrial loans that are included in the Federal Reserve July 2012 Senior Loan Officer Opinion Survey and concluded that the SBLF aggregate results were reasonable. Treasury officials said they did not perform this type of analysis for some of the survey questions that were unique to SBLF, such as the question on participants’ use of SBLF funding and questions related to outreach activities, because there was no appropriate data source against which to compare these aggregate responses. Treasury officials explained that they have identified areas for improvement based on feedback obtained on the 2012 survey and made adjustments to the design and administration of the 2013 survey. Based on some questions from participants during the 2012 survey, Treasury provided additional guidance on responding to certain survey questions. 
In response to the questions and feedback, Treasury has refined some questions and is considering deleting and adding some questions for the 2013 survey. Treasury officials said they expect to continue to make adjustments to the survey in future years. Although Treasury has taken actions to improve its reporting and analysis of SBLF program performance, Treasury officials told us they recently started to review additional methods for evaluating the effectiveness of the program and its impact on participants' small business lending, but they have not yet provided us with a plan to evaluate the program's impact or selected an approach for conducting the evaluation. Treasury's peer-group analysis in the Lending Growth Reports and the annual survey of program participants are valuable steps that contribute to an assessment of SBLF program performance. However, these actions are not sufficient to isolate the impact of SBLF on participants' lending compared to other factors—a component of performance measurement that we specifically highlighted in our 2011 and 2012 reports—which an impact evaluation could help to achieve. While we did not specifically recommend that Treasury conduct an impact evaluation in 2011, our recommendation that Treasury finalize plans for assessing SBLF performance noted that such plans should include measures that isolate the impact of the program. During our review, Treasury officials told us that they have started considering options to conduct a one-time evaluation of SBLF. For example, in October 2013, Treasury officials told us that since August 2013 they have been working with an existing contractor with expertise in statistical evaluation techniques to help plan and conduct the analysis, and that the contractor has begun to test various techniques using available data. Treasury officials further explained that the evaluation will use data from September 30, 2013, when the dividend rate for most SBLF participants will become fixed, and that the data will be available in November 2013. Treasury officials stated that they intend to complete the analysis and publish the results in fiscal year 2014, but they had not yet provided us with a written plan for the evaluation committing to an approach. According to GAO guidance on program evaluation, when a program is influenced by outside factors, as is the SBLF program, impact evaluations are usually required to assess the net effect of a program (or its actual effectiveness) by comparing the observed outcomes to an estimate of what would have happened in the program's absence. The guidance further suggests that the design of an impact evaluation often includes using comparison groups and statistical tools and identifying the most important external influences on the desired program outcomes. GAO and other federal agencies have conducted impact evaluations that have employed techniques, such as statistical tools, to isolate the impact of programs, and these could be useful examples for Treasury in designing its impact evaluation of SBLF. All four evaluations included comparison groups in their design, and three of the four used a statistical tool known as propensity score matching, just one potential approach among several methods to consider in designing an impact evaluation of a program.
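To illustrate what a propensity-score-matching design might look like in practice, the following is a minimal, hypothetical sketch in Python. It assumes a bank-level data set with a participation flag (`sblf_participant`), pre-program financial covariates, and a lending-growth outcome; these column names and the choice of matching method are illustrative assumptions, not Treasury's or GAO's actual analysis.

```python
# Illustrative propensity-score-matching sketch (not Treasury's or GAO's actual analysis).
# Assumes a hypothetical bank-level data set with an SBLF participation flag,
# pre-program financial covariates, and a lending-growth outcome.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def estimate_sblf_effect(df: pd.DataFrame) -> float:
    covariates = ["leverage_ratio", "texas_ratio", "liquidity_ratio", "wholesale_funding"]

    # 1. Estimate each bank's propensity to participate in SBLF from pre-program covariates.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["sblf_participant"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["sblf_participant"] == 1]
    control = df[df["sblf_participant"] == 0]

    # 2. Match each participant to the nonparticipant with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_control = control.iloc[idx.ravel()]

    # 3. The difference in mean lending growth between participants and their matches
    #    is one estimate of the program's effect, under strong assumptions.
    return treated["lending_growth"].mean() - matched_control["lending_growth"].mean()
```

As with any matching approach, this sketch can adjust only for observed characteristics; as Treasury's comments on this report note, unmeasured factors could still account for differences in lending growth between participants and comparison groups.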
GAO has acknowledged that conducting program evaluations can be costly and time-consuming, particularly collecting and ensuring the quality of data, and GAO guidance states that evaluators should aim to select the least burdensome way to obtain the information necessary to address the evaluation question. Treasury already has performance information and has demonstrated that it can readily obtain financial data on SBLF participants and their peers that are regularly collected by third parties, as it does for the peer-group analysis in its Lending Growth Reports. An impact evaluation would help Treasury more effectively inform the public and Congress about how SBLF has affected the participants' lending compared to other factors, such as economic conditions. The impact evaluation would also provide useful information to Congress in designing future programs that use capital investments or in considering alternatives to SBLF that support small business access to capital. Treasury has continued to develop and refine its approach to assessing the performance of SBLF and measuring the extent to which SBLF participants have increased small business lending. However, these actions do not provide a clear picture of how SBLF has affected participants' lending compared to other factors that could explain the increases in lending. Our guidance on program evaluation notes that impact evaluations are usually required to assess the net impact of a program by comparing the program outcomes to an estimate of what would have happened in the program's absence, often using statistical tools. Treasury officials have told us that they are exploring various approaches to evaluating the program and have a firm under contract that is helping to identify different statistical analyses that could be used. These actions are important initial steps in conducting an impact evaluation. Nevertheless, Treasury has not yet developed a written evaluation plan committing to a specific approach that would show that its evaluation assesses the net impact of SBLF apart from other factors. Capital investment programs were used to address an immediate need during the recent financial crisis, and policymakers may face similar economic conditions in the future and need to make quick decisions. It is therefore important that Congress and other stakeholders have information about the performance of SBLF and the extent to which it had a meaningful impact on small business lending. An impact evaluation of SBLF would help provide critical information for decision makers, who will likely face another constrained credit environment for small businesses in the future and will weigh options for addressing it, such as a capital investment approach or other approaches to promote small business credit. To help ensure that Treasury can provide a useful assessment of SBLF that informs Congress and stakeholders of the effectiveness of this capital investment program in increasing lending, Treasury should follow through in conducting an impact evaluation of the program. In such an evaluation, Treasury should ensure that the analytical approaches identified by its contractor will isolate the role of SBLF from other factors that could affect small business lending to show the net impact of the program. We provided a draft of this report to Treasury, FDIC, the Federal Reserve, and OCC for review and comment. Treasury provided written comments, which are reprinted in appendix III.
FDIC, the Federal Reserve, and OCC did not provide written comments on the draft report. In its written comments, Treasury agreed with our recommendation and stated that it will complete its analysis using data from the conclusion of the program's two-year incentive period, which ended on September 30, 2013. It anticipates publishing the results of its evaluation in fiscal year 2014. Treasury stated that it will continue to explore analytical approaches that isolate SBLF's role from other factors affecting small business lending. Treasury also noted that the statistical technique of random assignment cannot be used to assess SBLF program impact and that, therefore, unmeasured factors could account for differences in lending growth between participants and comparison groups. We agree that even powerful statistical techniques cannot perfectly replicate random assignment. However, these techniques can improve confidence in whether or not observed results are attributable to SBLF. Treasury and FDIC also provided technical comments on the draft report, which we have incorporated into the final report as appropriate. We are sending copies of this report to the appropriate congressional committees, Treasury, FDIC, the Federal Reserve, and OCC. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Daniel Garcia-Diaz at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to examine (1) the status of the Small Business Lending Fund (SBLF), (2) the reasons for variation in growth of qualified small business lending at SBLF banks, and (3) the actions the Department of the Treasury (Treasury) has taken to evaluate SBLF participants' lending patterns. To examine program status, including participants' lending and dividend and interest payments, we reviewed Treasury data as of June 30, 2013, and Treasury's July and October 2013 Lending Growth Reports. We also reviewed Treasury documents related to SBLF redemptions as of July 1, 2013, and interviewed Treasury officials about the reasons participants had left the program. We also interviewed officials from the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Federal Reserve), and the Federal Deposit Insurance Corporation (FDIC) on their role in approving SBLF redemptions and on the reasons why banks have left the program. All three of these bank regulators provided us documents describing their procedures for capital redemptions. Sixteen participants had left the program as of July 1, 2013. Six of these participants told Treasury their reasons for leaving the program. We contacted the other 10 participants, which had left the program without sharing their reasons with Treasury, and we obtained responses from 9 of them. One participant did not respond to our contact attempts. To determine reasons for variation in growth of qualified small business lending at SBLF banks, we used the most current level of qualified small business lending as of June 30, 2013, reported in Treasury's October 2013 Lending Growth Report, to divide 265 banks into four quartiles based on their level of qualified small business lending, and we compared the four quartiles to one another using financial and regulatory data.
We did not assess banks' individual financial condition; rather, we looked at the medians of certain indicators to make comparisons between the four quartiles. We analyzed the financial condition of the SBLF banks in each of the four quartiles as of March 31, 2011, and June 30, 2013, to determine if any differences were associated with the initial financial condition of SBLF banks or changes in their financial condition that occurred during the program. To analyze the financial condition of the four quartiles, we accessed Call Report data using SNL Financial—a company that manages a financial database that contains publicly filed regulatory and financial reports. To assess the factors associated with different lending levels, we used data from SNL Financial to analyze asset size, Texas Ratios, liquidity ratios, leverage ratios, wholesale funding, participation in Treasury's Capital Purchase Program (CPP), and geographic distribution for each of the four quartiles. In addition, we calculated the net SBLF capital received using SNL Financial data and Treasury's Lending Growth Reports. We also obtained and analyzed CAMELS composite ratings from FDIC as of June 30, 2013, and March 31, 2011, to determine if there was variation across the quartiles. Table 1 shows how the factors we analyzed varied by quartile as of June 30, 2013, and March 31, 2011. We assessed the reliability of the data used for our analyses by, for example, inspecting the data for missing observations and outliers, reviewing prior GAO work, and updating information as appropriate. We determined that the data collected by Treasury, FDIC, and SNL Financial we reviewed were sufficiently reliable for our purpose of providing a high-level overview of variation in changes in SBLF banks' qualified small business lending. We also performed a regression analysis to assess the relationship between variation in qualified small business lending levels and indicators of banks' financial condition. Specifically, we analyzed the relationship between qualified small business lending levels and the Texas Ratio, reliance on wholesale funding, liquidity ratios, leverage ratios, net SBLF capital received, state Gross Domestic Product (GDP), and participant-reported demand for small business loans. We used financial measures for SBLF banks that we have identified in prior reports to demonstrate an institution's financial health as it relates to asset quality and capital adequacy. We relied on SNL Financial for the Texas Ratio, reliance on wholesale funding, liquidity ratios, leverage ratios, and net SBLF capital received. We measured GDP in a state using data from the U.S. Department of Commerce's Bureau of Economic Analysis. Finally, we measured participant-reported demand for small business loans by using SBLF respondents' answers to two questions in Treasury's first annual SBLF survey and incorporated these responses into our analysis through program identifiers. We assessed the reliability of data from each of these sources by, for example, inspecting data for missing observations and outliers, reviewing prior GAO work, reviewing any changes in survey methodologies, and updating information as appropriate, and found the data to be sufficiently reliable for our purposes. See appendix II for additional information on the regression model.
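As a rough illustration of the quartile comparison described above, the sketch below groups banks into quartiles by growth in qualified small business lending and compares medians of selected indicators across quartiles. The column names (for example, `qsbl_growth` and `texas_ratio`) are hypothetical placeholders rather than the actual SNL Financial or Call Report field names, and the code is a simplified sketch of the approach rather than the analysis performed for this report.

```python
# Illustrative sketch of a quartile comparison (hypothetical column names, not the
# actual SNL Financial or Call Report fields used in the report).
import pandas as pd

def compare_quartile_medians(banks: pd.DataFrame) -> pd.DataFrame:
    """Group banks into quartiles by growth in qualified small business lending
    and return the median of each financial indicator within each quartile."""
    # Quartile 1 = highest lending growth, quartile 4 = lowest, as in the report.
    banks = banks.assign(
        quartile=pd.qcut(-banks["qsbl_growth"], q=4, labels=[1, 2, 3, 4])
    )
    indicators = ["texas_ratio", "liquidity_ratio", "leverage_ratio",
                  "wholesale_funding", "net_sblf_capital"]
    return banks.groupby("quartile", observed=True)[indicators].median()
```

Comparing medians rather than means, as the report does, limits the influence of outlier banks within each quartile.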
To describe the reasons why some SBLF banks were more or less successful in increasing lending to small businesses, we selected and attempted to contact a nonprobability, judgmental sample of 12 banks based on geographic distribution and participation in CPP. Specifically, using the quartile breakdown we selected four banks from the first and fourth quartiles (banks with the highest and lowest levels of qualified small business lending) and two banks each from the second and third quartiles. We first selected banks from each of six U.S. regions (Midwest, West, Southeast, Southwest, Mid-Atlantic, and Northeast) and then narrowed the selection to obtain a mix of prior CPP participants. We obtained responses from 10 of them. The results of these interviews cannot be generalized to all SBLF banks but provide insights to reasons for variation in lending. To describe trends in small business credit markets and how these trends may have affected a bank’s ability to lend, we used a number of survey indicators to describe market conditions as of June 2013 and before the implementation of SBLF. These indicators included data from a survey conducted by the National Federation of Independent Business on whether members’ borrowing needs are being satisfied; a survey by Wells Fargo addressing banks’ ease or difficulty in obtaining credit; and data from the Federal Reserve Senior Loan Officer Opinion Survey on the demand for credit across small firms and whether lending standards have tightened or eased. We also relied on the results report of Treasury’s first SBLF annual survey. We also interviewed Treasury officials responsible for the program and contacted representatives of the National Federation of Independent Business, who regularly survey small businesses, and the American Bankers Association and Independent Community Bankers Association, who represent the interests of community banks. To determine the reliability of these data sources, we interviewed company representatives as appropriate to learn about their data collection methods and any changes to their controls. We also reviewed previous GAO work and survey methodologies to determine if there were any changes made that would affect the data’s reliability. Based on our analysis we determined that, while the survey indicators were not independently critical to our findings, they were sufficiently reliable together to document patterns in the small business credit markets and how these may affect a bank’s ability to lend. To determine the reliability of the Treasury survey data we used, we interviewed Treasury officials on their procedures for reviewing the survey responses, reviewed the nonresponse rate (none), and checked for consistency. Based on these steps, we determined that the data collected by Treasury were sufficiently reliable for the purpose of reporting on credit standards to SBLF banks. To determine actions Treasury has taken to evaluate SBLF participant lending patterns, we reviewed multiple Treasury documents, including the SBLF Lending Growth Reports issued in October 2012, and January, April, July, and October 2013, the first SBLF annual lending survey instrument, and the June 2013 results report for the first annual lending survey, as well as relevant supporting documentation of these reports. We also interviewed Treasury officials to understand the agency’s efforts to assess program performance of SBLF. We also reviewed our 2011 and 2012 reports on SBLF to document Treasury’s past efforts in assessing SBLF performance. 
In addition, we reviewed GAO guidance on program evaluation and past GAO work on federal government performance management and compared Treasury’s actions to this guidance and work. Further, we reviewed impact evaluations conducted by other federal agencies and GAO to enhance our understanding and provide examples of impact analysis and statistical techniques. We conducted this performance audit from March 2013 to December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To help determine factors that may have influenced Small Business Lending Fund (SBLF) banks’ ability to successfully translate SBLF equity into new lending we estimated a number of econometric models. We utilized information on SBLF banks’ initial financial condition, Capital Purchase Program (CPP) conversion status, net capital received (netting out any equity used to repay Treasury CPP capital), state-level economic growth, and participant-reported small business loan demand. We first estimated a model based solely on the initial financial conditions as of March 31, 2011, of SBLF banks, using measures of equity, asset liquidity, reliance on wholesale funding, profitability, and troubled loans. All models were estimated via linear least squares with White heteroskedasticity-consistent standard errors. We used two alternative measures of initial equity: the leverage ratio and the tier 1 risk-based capital ratio. We found that the leverage ratio was a somewhat stronger predictor (p-value =0.0009 vs. p-value = 0.0498) of growth in qualified small business lending, so we used that measure of equity in subsequent models. All other measures of financial conditions were statistically significant predictors of qualified small business lending growth with the exception of return on assets (a measure of profitability), which we excluded from subsequent models. Next we added CPP conversion status to our model with financial conditions. CPP conversion status was a statistically significant predictor of qualified small business lending growth; however, once we add net capital to the model, CPP status was no longer statistically significant, while net capital was statistically significant (p-value =0.02). Because CPP conversion status and net capital are strongly negatively correlated, we excluded CPP status from subsequent models to avoid multicollinearity and associated variance inflation. After excluding CPP status from the model, the statistical significance of the net capital coefficient increased dramatically (p-value < 0.0001). These results suggest that net capital is driving differences between SBLF banks that paid off CPP capital with SBLF capital and other SBLF banks that did not, rather than an unobserved factor associated with CPP status. To attempt to capture local economic conditions, we added state Gross Domestic Product (GDP) growth to the model with financial conditions and net capital. State GDP growth was not a statistically significant (p- value > 0.23) predictor of qualified small business lending growth. 
However, when we added a self-reported measure of demand for small business loans from Treasury's survey of SBLF participants to the model as an alternative, we found that this measure was a statistically significant predictor. These results suggest that state-level economic growth is not a precise measure of demand for loans at SBLF banks. We present the regression results of this final model in table 2 below. Because demand was self-reported, this variable could not be independently corroborated. Participants with low growth in qualified small business lending may have had an incentive to report low demand to avoid the perception that they had underperformed. This tendency would cause our model to overstate the impact of local demand. The economic significance (magnitude) of some of these factors is summarized below:
For every 14 percentage point increase in troubled loans (roughly one standard deviation) relative to capital and reserves, qualified small business lending growth decreased by 13 percentage points.
For every 2 percentage points (roughly one standard deviation) more capital relative to assets (less leverage) prior to receiving SBLF capital, participants increased qualified small business lending by an additional 13 percentage points.
A participant that received net SBLF capital of 1.9 percent (the mean) of assets relative to a participant that received no net capital (a potential outcome for participants that converted through CPP) increased qualified small business lending by an additional 18 percentage points.
Participants that reported stronger demand for small business loans relative to those that reported weaker demand for small business loans increased qualified small business lending by an additional 24 percentage points.
Daniel Garcia-Diaz, (202) 512-8678, garciadiazd@gao.gov. In addition to the individual named above, Kay Kuhlman (Assistant Director), Bethany Benitez, Anna Chung, Pamela Davidson, Patrick Dynes, Michael Hoffman, Lauren Nunnally, Jennifer Schwartz, and Jena Sinkfield made key contributions to this report.
The Small Business Jobs Act of 2010 aimed to stimulate job growth by, among other things, establishing the SBLF program within Treasury. SBLF was authorized to make up to $30 billion in capital investments to encourage banks and community development loan funds with assets of less than $10 billion to increase their small business lending. The act generally defined "small business lending" as loans with original amounts of not more than $10 million. Under the act, GAO is mandated to conduct an audit of SBLF annually. GAO's first and second reports examined the program's implementation and performance reporting and made recommendations on management oversight, program evaluation, and performance reporting. This third report examines (1) the growth in qualified small business lending and reasons for variations in the growth at SBLF banks and (2) actions Treasury has taken to evaluate participant lending patterns. GAO analyzed the most recent available performance and financial information on SBLF participants; reviewed government and private sector surveys on small business credit conditions; and interviewed officials from Treasury and representatives from SBLF participants. According to the U.S. Department of the Treasury (Treasury), as of June 30, 2013, Small Business Lending Fund (SBLF) participants had increased their qualified small business lending by $10.4 billion over their aggregate 2010 baseline of $36.5 billion. However, SBLF participants varied greatly in the extent to which they had increased small business lending. GAO analysis for the quarter ending June 30, 2013, showed that the median SBLF bank in the top (first) quartile had an increase of over 100 percent over the baseline, compared with a 9.1 percent increase for the median SBLF bank in the bottom (fourth) quartile. Several factors GAO analyzed could help explain this variation. For example, the median bank in the bottom quartile had more troubled loans than banks in the top quartile, which could affect a bank's lending capacity. Also, a majority of banks in the bottom lending quartile used much of their SBLF funds to repay investments received from Treasury's Capital Purchase Program, leaving them with a much smaller net increase in available capital. Several banks GAO interviewed said that they have seen increased demand for credit as the economy improved. In addition, publicly reported surveys indicate that credit conditions have improved, but some small businesses continue to face challenges securing credit. Treasury has taken steps to assess SBLF participants' lending patterns, including conducting a peer-group analysis and collecting performance data through an annual survey of SBLF participants. However, these steps are not sufficient to isolate the net impact of SBLF on participants' lending. Treasury officials said that they are exploring evaluation approaches and have a firm under contract to help identify statistical analyses that could be used, but Treasury has not provided documentation of an evaluation plan committing to an approach. GAO guidance on program evaluation suggests that impact evaluations are usually required to assess the net impact of a program by comparing the observed outcomes to an estimate of what would have happened in the program's absence.
By conducting an evaluation that includes methods to assess the net impact of SBLF, Treasury could more effectively inform the public and Congress about how SBLF has affected the participants' lending compared to other factors, such as economic conditions, and could better assist Congress in making future decisions about similar capital investment programs or alternative programs that support small business access to capital. Treasury should follow through in conducting an impact evaluation that includes methods to isolate the effect of SBLF on participants' small business lending from other factors. In written comments on a draft of this report, Treasury agreed to implement the recommendation.
Mercury is a naturally occurring toxic metallic substance that exists as a liquid or vapor in its elemental form and can be a solid or liquid in its compound form. Elemental mercury is used in producing chlorine liquid and caustic soda, in extracting gold from ore or materials that contain gold, and in thermometers, barometers, and electrical switches. Silver-colored dental fillings (known as dental amalgam) typically contain about 50 percent metallic mercury. Mercury forms inorganic compounds when combined with elements such as chlorine, sulfur, or oxygen. Inorganic mercury compounds are used in fungicides, skin-lightening creams, topical antiseptic or disinfectant agents, antibacterials, preservatives in some prescription and over-the-counter medicines, coloring paints, and tattoo dyes. In combination with carbon, mercury forms organic compounds, the most common of which is methylmercury, which can build up in certain edible freshwater and saltwater fish and marine mammals. In recent years, mercury use has declined as nonmercury-based materials have become available. For example, the large lamps that light parking lots used to be made with mercury, but are increasingly being made without it. Also, the Mercury-Containing and Rechargeable Battery Management Act of 1996 severely restricted the mercury content in batteries sold after the act's enactment date of May 13, 1996. Today, the predominant uses of mercury are for the production of chlorine-related products, the amalgam used in dental fillings, and wiring devices that carry electrical current. As figure 1 shows, mercury use in the United States generally declined between 1980 and 1997, according to the U.S. Geological Survey, which compiled those data until 1997. Debris contaminated with mercury can come from various sources—often from a cleanup effort (such as a mercury spill) or demolition of a mercury-contaminated building (such as a laboratory). It can also include structural steel, glass, wooden pallets, cloth, and ruptured containers and devices. When debris contains hazardous amounts of mercury or other hazardous wastes, the hazardous waste debris must be treated to address each of the hazardous wastes. The debris definition excludes the following materials: any material for which a specific treatment standard is provided in 40 C.F.R. Part 268, Subpart D (namely lead acid batteries, cadmium batteries, and radioactive lead solids); process residuals such as smelter slag and residues from the treatment of waste, wastewater, sludges, or air emission residues; and intact containers of hazardous waste that are not ruptured and that retain at least 75 percent of their original volume. A mixture of debris and other material (such as soil or sludge) is subject to the hazardous waste debris regulations if the mixture is comprised primarily of debris, by volume, based on visual inspection. Figure 2 provides a general description of categories of waste that EPA typically classifies as debris and that could be contaminated with mercury. Mercury-contaminated debris (such as bricks, pipes, ruptured metal drums, or large chunks of concrete) may be either (a) treated according to the mercury-specific standards described above (primarily retorting for high mercury-containing waste) or (b) encapsulated or stabilized, regardless of the mercury concentration level.
If managed using the mercury-specific standards, the waste or residue from the retorting process must have its toxicity reduced to specified numerical levels before it can be land disposed. Waste managed according to the alternative treatment standards for hazardous debris does not generally have to be tested before it is land disposed because, according to EPA, obtaining a representative sample is often impractical. In addition, the leach test, which requires grinding as part of the test procedure, may not be appropriate for certain debris treatment technologies, such as encapsulation, since the grinding step would defeat the protective mechanism of the treatment technology. According to EPA officials, the agency encourages businesses that generate mercury-contaminated debris to remove the mercury-contaminated material from the debris—a process referred to as source separation. Also, according to EPA and industry, there are some debris items (such as debris contaminated with mixtures of mercury and organic chemicals) that remain difficult to retort; as such, the debris regulations are needed to ensure that such debris is treated and disposed of properly. Table 1 summarizes EPA's debris regulations and definitions of debris. In 1999, EPA issued an advance notice of proposed rulemaking to conduct a comprehensive review of the RCRA hazardous waste treatment regulations for mercury-containing wastes. EPA had identified mercury as one of the more persistent toxic chemicals regulated under RCRA. EPA stated that potential revisions, if any, would be based on the comments that it received and data obtained from ongoing studies and other sources. Among other issues, EPA requested comments on whether to (1) allow alternative treatment options to retorting for high mercury-containing waste and (2) require retorting for high mercury-containing waste that meets the definition of debris. With respect to allowing alternative treatment options to retorting for high mercury-containing waste, EPA made data available to the public in 2003 from two studies that assessed the feasibility of land disposal for elemental mercury and for difficult-to-treat high mercury-containing waste that had been treated by stabilization. From these studies, EPA concluded that treatment by stabilization may not result in a waste that is stable under some hazardous waste landfill conditions. According to EPA officials, the agency was concerned about using stabilization for elemental mercury in certain landfill conditions where leaching was more likely to occur. EPA did not change the existing hazardous waste regulations for mercury-containing waste. With respect to requiring that high mercury-containing waste that meets the definition of debris be retorted, all of the comments that EPA received, except one, expressed the view that EPA should not modify the alternative treatment standards for debris to require the retorting of debris with high concentration levels of mercury because debris is not always amenable to retorting and because the alternative treatment standards for debris provide needed flexibility to manage difficult-to-treat wastes. EPA did not modify the debris regulations. In 2003, EPA collaborated with the Association of State and Territorial Solid Waste Management Officials and the Northeast Waste Management Officials' Association to discuss potential mismanagement of mercury-contaminated debris.
Based on those discussions, EPA issued a debris memorandum in October 2003 to state waste managers that provided guidance for managing mercury-contaminated debris. In that guidance memorandum, EPA sought to clarify the types of waste that are eligible for treatment under the alternative treatment standards for debris, provide information on the improved capabilities of mercury "retorters" to accept and recover mercury from debris-like waste, and describe how to meet the performance standards for several debris treatment technologies. In a May 2004 follow-up letter, the Administrator of EPA stated that EPA had not found any evidence that there is a significant environmental problem associated with the management of mercury-contaminated debris under EPA's current rules. Figure 3 shows that mercury-containing waste comes from industrial and nonindustrial sources. EPA requires the collection of data on hazardous waste activities from industrial sources, but not from nonindustrial sources. Nonindustrial sources generate mercury-containing waste, such as household thermometers and dental amalgam, which, if not recycled, is generally disposed of in municipal solid waste landfills. Under RCRA, hazardous waste landfills and businesses that retort mercury-contaminated debris must meet federal standards designed to protect public health and the environment. Among other standards, hazardous waste landfills must meet minimum technological requirements, including double composite liners, a leachate collection and removal system, and a leak detection system, as well as provide for groundwater monitoring. In addition, hazardous waste landfills may not operate without a RCRA permit. Landfills must also meet other more stringent state requirements, if any, which often include on-site state inspectors and additional groundwater monitoring wells. According to EPA's RCRAInfo data, there are 19 commercial hazardous waste landfills in the United States, most of which accept mercury-containing waste. Facilities that retort mercury-contaminated debris may only retort wastes below specified organic concentration limits or above specified heating values. In addition, the facilities must comply with waste sampling and analysis requirements. As of 2005, four companies reported that they operate seven facilities that retort mercury-contaminated debris. Figure 4 shows the locations of the 19 commercial hazardous waste landfills and seven retorting facilities in the United States. Every 2 years, EPA compiles and summarizes data on the amount of hazardous waste generated, treated, and disposed of. For this biennial report, EPA requires businesses to submit information to the states on each waste generated, treated, and/or disposed of. Among other things, businesses report on the type of hazardous constituent(s) present in the waste, the process (such as chlorine production) or activity (such as demolition) that generated the hazardous waste, and the treatment or disposal method used in managing the hazardous waste. EPA also requests, but does not require, that businesses submit certain additional information about the waste, including the portion of the waste that is debris. EPA maintains the data in its RCRAInfo database. The states conduct data reliability assessments (such as checking for missing values, out-of-range values in each field, and inconsistencies and errors in the data) before entering the information into RCRAInfo; EPA also conducts data reliability assessments of RCRAInfo data.
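The data reliability assessments described above (checking for missing values and out-of-range values, for example) can be illustrated with a short sketch. The column names and valid ranges below are hypothetical placeholders, not the actual RCRAInfo fields or edit rules.

```python
# Illustrative sketch of basic data reliability checks (missing and out-of-range values).
# Column names and valid ranges are hypothetical, not the actual RCRAInfo edit rules.
import pandas as pd

VALID_RANGES = {
    "quantity_metric_tons": (0, 1e7),   # hypothetical plausible range
    "report_year": (1991, 2003),
}

def flag_reliability_issues(records: pd.DataFrame) -> pd.DataFrame:
    """Return rows with missing or out-of-range values in key fields."""
    issues = records[list(VALID_RANGES)].isna().any(axis=1)
    for column, (low, high) in VALID_RANGES.items():
        issues |= ~records[column].between(low, high)
    return records[issues]

# Example usage, assuming a DataFrame named `records` with these columns:
# problem_rows = flag_reliability_issues(records)
```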
EPA uses its RCRAInfo database, which began in 1999, to maintain data on hazardous waste submitted by states. According to EPA officials, RCRAInfo was designed specifically to track national trends of hazardous waste generation, treatment, and disposal. In 1991, EPA began producing its biennial reports, and it began collecting data on debris as a separate category of physical form in 2001. EPA's most recent biennial hazardous waste report, for the 2003 reporting cycle, was released in April 2005. According to RCRAInfo data, in 2003 mercury-contaminated debris constituted about 12,000 metric tons, or about 0.4 percent of all mercury-containing waste and about 0.03 percent of all hazardous waste. Table 2 summarizes RCRAInfo's data on the total quantities of the hazardous waste, mercury-containing waste, and mercury-contaminated debris treated and disposed of in 2001 and 2003. Appendix II provides more information on mercury-contaminated debris, such as the types of businesses and industry processes that generated the debris and the total quantity of debris that was generated, treated, and disposed of in each state. RCRAInfo data on mercury-contaminated debris may be incomplete. EPA does not require businesses to report to their states on the physical form of the waste, including the portion of their mercury-containing waste that they treated and disposed of as debris. Since reporting the physical form of the waste is optional, the portion of a state's mercury-containing waste that was treated and disposed of as debris is not known for businesses that did not submit such information. Our analysis of the 2003 RCRAInfo data showed that businesses did not report the optional information on the physical form of the waste in about 9 percent of the instances in which mercury-containing waste was treated and disposed of. These instances accounted for less than 1 percent of the total quantity of mercury-containing waste (10,011 metric tons of the 3,145,726 metric tons of mercury-containing waste). If businesses did not report the optional debris information to states, then the states could not report it to EPA. Businesses that did not submit optional information may have managed a portion of the waste as debris, or they may have managed none of this waste as debris. In 2001, the first year businesses reported on debris, RCRAInfo data showed that businesses did not submit the optional information on the physical form of a waste (including debris) in about 14 percent of instances when they treated and disposed of mercury-containing waste. These instances accounted for about 4.5 percent of the total quantity of mercury-containing waste treated and disposed of (about 51,179 metric tons of the 1,124,900 metric tons of mercury-containing waste). Furthermore, EPA's biennially collected data on debris may be reported incorrectly. The directions EPA gave states and businesses for reporting the data were ambiguous. EPA had a "debris" category in the Hazardous Waste Report instructions, but it did not provide a complete list of debris items. For example, ruptured metal drums are typically considered debris, but are not included in the list of items in the debris category description, and there is a separate "metal drum" category. Thus, if businesses were reporting ruptured metal drums, they might report the ruptured drums in the debris category or in the metal drums category.
EPA told us that it intended businesses to use the debris category to report all waste identified as hazardous waste debris. Businesses that generate, treat, and dispose of mercury-containing waste are unclear about the types of mercury-containing waste items that can be treated and disposed of as debris. In response to our survey, officials in 21 states and 6 hazardous waste landfill operators identified one or more items as debris that do not typically meet EPA's debris definition. For example, state officials frequently identified intact fluorescent light bulbs, soil, and intact containers (other than batteries), including intact devices such as regulators and thermometers that may contain high levels of mercury, as being subject to the alternative treatment standards for debris. Intact containers (which are excluded from the definition of debris) and the other items (which do not fit the definition of debris) must be treated in accordance with RCRA's mercury-specific hazardous waste treatment standards. In addition, although EPA's definition of debris states that "debris means solid material exceeding a 60 millimeter particle size," officials in 3 states classified ruptured devices and batteries with particle size less than 60 millimeters as debris. These ruptured mercury-containing items may be high mercury-containing waste, which would require retorting. However, if these items were managed according to the alternative treatment standards for debris, they could be encapsulated or stabilized and then disposed of in a hazardous waste landfill. EPA prohibits this treatment and disposal method for high mercury-containing waste, which must generally be retorted; the residual that remains must meet a leach test standard before it can be land disposed. Figure 5 lists the mercury-containing wastes that would typically not be eligible for treatment and disposal using the alternative treatment standards for debris. Table 3 summarizes the views of the state officials we surveyed on whether they would classify certain types of mercury-containing wastes as debris. The wastes listed in table 3 would not typically meet EPA's definition of debris. However, as the table shows, officials in several states identified nondebris items as being debris, and officials in 21 states reported that they would treat and dispose of at least one item listed in the table as debris although the item would not typically meet EPA's definition of debris. Appendix III summarizes the state officials' responses to our survey on mercury-containing waste treatment and disposal practices. In addition to these nondebris items listed in table 3, our survey also asked about three debris items: ruptured drums, ruptured batteries with particle size exceeding 60 millimeters, and other ruptured devices with particle size exceeding 60 millimeters. According to our survey results, officials in only one state considered all three of these items to be debris, even though EPA would typically consider them to be debris. Officials in 9 other states reported that they classify all of the items on our list as hazardous waste and did not classify any of these items as debris. For example, ruptured drums and ruptured devices were wastes that these states typically classified as hazardous waste but which EPA classifies as debris. Four of the 14 commercial hazardous waste landfill operators that responded to our survey identified intact fluorescent light bulbs as debris, and 3 of the 14 identified intact devices as debris.
These items would generally be considered intact containers and therefore be specifically excluded from EPA's debris definition. The landfill operators responded correctly about particle size requirements for debris. None of the landfill operators identified intact drums as debris. Table 4 summarizes the landfill operators' views on whether they would classify certain types of mercury-containing wastes as debris. The wastes listed in table 4 would not typically meet EPA's definition of debris. However, as the table shows, some landfill operators identified nondebris items as being debris, and 6 landfill operators reported that they would treat and dispose of at least one item listed in the table as debris although the item would not typically meet EPA's definition of debris. Appendix IV summarizes the commercial hazardous waste landfill operators' responses to our survey on mercury-containing waste treatment and disposal practices. In addition to these nondebris items listed in table 4, our survey also asked about three debris items: ruptured drums, ruptured batteries with particle size exceeding 60 millimeters, and other ruptured devices with particle size exceeding 60 millimeters. According to our survey results, only one landfill operator considered all three of these items to be debris, even though EPA would typically consider them to be debris. Furthermore, while EPA allows certain mercury-containing waste to be managed as debris, the commercial hazardous waste landfill operators were sometimes stricter in what they allowed. Specifically, two landfill operators do not allow any mercury-containing waste that we listed in our survey to be managed as debris; two other landfill operators allow only one mercury item (ruptured drums or ruptured batteries with particle size exceeding 60 millimeters) to be treated and disposed of according to the alternative treatment standards for debris; and two landfill operators send debris with high levels of mercury (i.e., greater than 260 milligrams per kilogram) to retorting facilities, including one that reported receiving mercury-containing waste inappropriately labeled as debris, which it sent to a retorting facility for treatment. While our survey results show that officials in many states and most landfill operators have a good understanding of the debris rule, there are some instances in which states and landfill operators identified items as debris that would not typically meet EPA's debris definition. Since the 2001 Hazardous Waste Report cycle, there has been a separate category called "debris," and businesses that determine that their waste is debris will naturally use that category to report their debris data. However, as discussed earlier, there is confusion about the debris category, and more wastes have been reported as debris than EPA considers to be debris. With respect to treatment methods that have been used for debris, EPA's RCRAInfo data showed considerable differences between the 2001 and 2003 cycles. For this analysis, we used the data reported for debris contaminated only with mercury. We did not use data for debris that contained mercury and other hazardous constituents because the method used to treat the mercury was not readily discernible from the RCRAInfo data. As shown in table 5, in 2001, businesses that generated mercury-only contaminated debris treated most of the debris by metals recovery such as retorting; in 2003, most of the debris was treated by encapsulation or stabilization before land disposal.
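The classification issues discussed above can be summarized as a simplified decision rule. The sketch below encodes, for illustration only, the thresholds cited in this report (the 60 millimeter particle size in the debris definition and the 260 milligrams per kilogram level for high mercury-containing waste); it is a simplification of the discussion in this report, not EPA's regulatory test.

```python
# Simplified, illustrative decision rule for whether a mercury-containing item could be
# managed under the alternative treatment standards for debris. This sketch is based on
# the descriptions in this report, not EPA's regulatory test.

HIGH_MERCURY_MG_PER_KG = 260   # "greater than 260 milligrams per kilogram" cited in the text
MIN_PARTICLE_SIZE_MM = 60      # "debris means solid material exceeding a 60 millimeter particle size"

def eligible_for_debris_standards(intact: bool, particle_size_mm: float) -> bool:
    """Return True if an item could plausibly be treated as hazardous debris."""
    if intact:
        # Intact containers and devices (e.g., unbroken bulbs, thermometers, drums)
        # are excluded from the debris definition.
        return False
    return particle_size_mm > MIN_PARTICLE_SIZE_MM

def required_treatment(intact: bool, particle_size_mm: float, mercury_mg_per_kg: float) -> str:
    """Return the treatment path suggested by this simplified rule."""
    if eligible_for_debris_standards(intact, particle_size_mm):
        # Debris may be encapsulated or stabilized regardless of mercury concentration,
        # or treated under the mercury-specific standards.
        return "alternative debris standards (e.g., encapsulation) or mercury-specific standards"
    if mercury_mg_per_kg > HIGH_MERCURY_MG_PER_KG:
        return "mercury-specific standards (generally retorting)"
    return "mercury-specific standards"

# Example: a ruptured thermometer fragment smaller than 60 millimeters with high mercury
# content would not qualify as debris and would generally require retorting.
print(required_treatment(intact=False, particle_size_mm=20, mercury_mg_per_kg=5000))
```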
Most of the 2003 debris that was encapsulated or stabilized before land disposal came from one facility. EPA officials were surprised to learn from us that most debris was not coming from hazardous material spill sites or cleanup sites that typically have on-site state or federal oversight of treatment and disposal decisions. According to our analysis of RCRAInfo's 2003 data, debris was generated as follows: about 25 percent from ongoing routine processes, such as replacing pipes at a chlorine plant; about 41 percent from intermittent events, such as demolishing a building; about 17 percent from EPA or state-managed sites, such as hazardous material spills or cleanup efforts; and about 16 percent from pollution control and waste management process residuals. Although businesses determine how to manage the majority of mercury-contaminated debris, EPA officials told us they believe that treatment and disposal decisions were made appropriately because of the multiple oversight mechanisms in place. They specifically cited the hazardous waste manifest system and the EPA and state inspection and enforcement programs, discussed below. In addition, they noted that in order to comply with RCRA, hazardous waste landfill operators must, among other things, obtain a RCRA permit and develop a waste analysis plan that documents the procedures the operator will follow to ensure the facility only handles waste it is permitted to handle and to ensure proper waste disposal. They also noted that hazardous waste landfills must meet minimum technological requirements, including double composite liners, a leachate collection and removal system, and a leak detection system. EPA and the states oversee compliance with treatment and disposal requirements for mercury-contaminated debris as part of their efforts to monitor multiple types of hazardous waste. We identified four mechanisms that monitor compliance with hazardous waste regulations, including the debris regulations. First, to ensure hazardous waste is properly managed, EPA established a tracking system to monitor hazardous waste from its generation to its disposal. The critical component of this system is the uniform hazardous waste manifest, which is a form prepared by all businesses that generate, transport, or offer for transport hazardous waste for off-site treatment, recycling, storage, or disposal. The manifest contains information on the type and quantity of the waste being transported, instructions for handling the waste, and signature lines for all parties involved in the disposal process. Each party that handles the waste signs the manifest and retains a copy. Once the waste reaches its destination, the receiving facility returns a signed copy of the manifest to the business that generated the waste, confirming that the waste has been received by the designated facility. Each of these documents must generally be retained for 3 years. Second, EPA requires businesses that generate, treat, and dispose of hazardous waste to retain certain other records for 3 years. Businesses that generate hazardous waste must send a notification with the initial shipment of every waste. The information that the notification must include varies according to the status of the waste. Facilities that treat hazardous waste are required to send similar notifications along with shipment of the treated wastes to facilities that dispose of hazardous waste.
A certification normally accompanies this notification stating that the waste meets its treatment standards and may be land disposed. Facilities that dispose of hazardous waste are the final link in the waste management chain. As a result, these facilities have to test the waste residue that they receive to ensure that it meets the treatment standards. Third, EPA and states' hazardous waste enforcement programs periodically monitor compliance with EPA regulations, primarily through oversight inspections of facilities and enforcement actions (such as fines and imprisonment) to correct violations. As part of its oversight, EPA provides compliance assistance and incentive programs to encourage businesses to "self-police" and voluntarily discover, disclose, and correct violations of RCRA requirements. In response to our survey, 29 states reported violations related to the treatment and disposal of mercury-containing waste during the past 5 years. Generally, the states discovered the violations during inspections, and most of the violations concerned the treatment and disposal of mercury-containing lamps, such as fluorescent light bulbs. We confirmed in our follow-up conversations with these states that very few of their reported violations were related to the treatment and disposal of mercury-contaminated debris. In one instance, however, a state agency fined a university $18,000 for hazardous waste violations, such as inappropriately disposing of mercury-contaminated debris. The university had failed to sample a building for mercury contamination before renovating it, and mercury was discovered in several areas after the demolition debris from the renovation had been removed. Lastly, EPA and many states provide citizens with telephone hotlines, Web sites, and forms to file complaints or report potential hazardous waste violations. Some states that responded to our survey stated that some mercury-containing waste violations were discovered through citizens' tips. We recognize that EPA developed the debris regulations to manage waste that could not be readily addressed with the existing RCRA regulations. With respect to mercury-contaminated debris, EPA has assessed the potential environmental risks and determined that the debris standards can be used for mercury-containing waste that meets the debris definition. EPA also provided a guidance memorandum to states intended to clarify the types of wastes that can be managed using the debris standards. However, our analysis showed that states and industry in some instances considered items to be debris that typically do not meet EPA's definition of debris. As a result, EPA's information on debris may not be entirely accurate. We believe EPA would have better information on debris in RCRAInfo if it clarified and better described, in the instructions for submitting biennial data, the types of waste that should and should not be reported in the debris category. In addition, we recognize that mercury-contaminated debris represents a very small portion of the hazardous waste that is treated and disposed of annually in the United States. However, we are concerned that officials in several states and operators of some commercial hazardous waste landfills that responded to our survey reported that in some instances they would consider items to be debris that typically do not meet EPA's definition of debris. EPA's debris definition specifically excludes some of these items.
Thus, some waste items might be disposed of inappropriately and in a riskier manner. EPA did not consider the impact of states and industry misunderstanding the debris standards when it examined the use of the debris regulations for high mercury-containing waste. Since there is apparent confusion about what constitutes debris, we believe that EPA should begin an outreach effort to communicate and clarify the types of mercury-containing hazardous wastes that can be treated and disposed of using the debris treatment standards. To better ensure that the businesses that generate, treat, and dispose of hazardous waste are properly managing and reducing the risk of their mercury-containing waste, we are making the following two recommendations to the Administrator of the Environmental Protection Agency: (1) clarify and better describe the types of waste that can and cannot be reported under the "debris" reporting category and include the definition of debris in the instructions for the Hazardous Waste Report, and (2) conduct further outreach to communicate to states and hazardous waste landfills the types of mercury-containing wastes that can be treated and disposed of according to the alternative treatment standards for debris. We provided EPA with a draft of this report for review and comment. In oral comments, EPA stated that it agreed with our recommendations. EPA also provided technical comments, which we incorporated into the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the Administrator of the Environmental Protection Agency and other interested officials. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or at stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix V. The objectives of our review were to determine (1) the mechanisms that the Environmental Protection Agency (EPA) uses to track the treatment and disposal of mercury-contaminated debris and the quantity of mercury-contaminated debris that is disposed of, (2) the extent to which EPA, states, and industry share a common understanding of the types of mercury-containing wastes that can be treated and disposed of as debris, and (3) the EPA and state controls that are in place to monitor compliance with EPA's treatment and disposal requirements for mercury-contaminated debris. For the purpose of this report, we used the following terms: businesses that generate mercury-containing waste—includes private companies and government and university facilities and laboratories; mercury-containing waste—includes hazardous waste that contained any of the six mercury waste codes: (1) D009—mercury; (2) K071—brine purification muds from the mercury cell process in chlorine production, in which separately prepurified brine is not used; (3) K106—wastewater treatment sludge from the mercury cell process in chlorine production; (4) P065—mercury fulminate; (5) P092—phenylmercury acetate; and/or (6) U151—mercury; and mercury-contaminated debris—includes mercury-containing waste that was reported under EPA's debris reporting category.
To determine the mechanisms that are used to track the treatment and disposal of mercury-contaminated debris, we reviewed EPA documents and reports (such as EPA's biennial hazardous waste reports) and EPA regulations and policies. We also interviewed officials at EPA, Ohio's Environmental Protection Agency, the Association of State and Territorial Solid Waste Management Officials, the Environmental Council of the States, the Environmental Technology Council, and the Northeast Waste Management Officials' Association. In addition, we met with officials from the Departments of Defense and Energy to discuss the types of mercury-contaminated debris that they generate. To identify the quantity of hazardous waste that is disposed of as mercury-contaminated debris, we obtained RCRAInfo hazardous waste data for the 2001 and 2003 reporting cycles. We assessed the reliability of the data and found that they were sufficiently reliable for our use. We also developed a survey to gather information from the 50 states and the District of Columbia on, among other things, their treatment and disposal practices for mercury-contaminated debris and whether they collected data more frequently than required by EPA's biennial hazardous waste reports. To determine the extent to which EPA, states, and industry share a common understanding of the types of mercury-containing wastes that can be treated and disposed of as hazardous debris, we used two surveys to gather information on, among other things, states' and hazardous waste landfills' current practices for treating and disposing of certain mercury-containing wastes using EPA's alternative treatment standards for debris. We surveyed the 50 states and the District of Columbia. We obtained a list of state hazardous waste officials from the Association of State and Territorial Solid Waste Management Officials and the Environmental Council of the States. We confirmed with each state official that he or she was the appropriate person to complete our survey on mercury-contaminated debris or, if not, obtained the name of the appropriate official and confirmed with that person. In addition, we surveyed businesses that treat and dispose of mercury-containing waste. We included in this survey the 19 U.S. commercial hazardous waste landfills identified by EPA. We obtained the list of hazardous waste landfills by using 2001 and 2003 information from EPA's RCRAInfo Permit Module. We confirmed with each landfill operator that he or she was the appropriate individual to complete our survey on mercury-contaminated debris. We did not survey federal and private facilities that could also treat and dispose of this waste, or facilities that primarily retort mercury-containing waste such as fluorescent light bulbs. Before distributing the surveys, we conducted pretests of the questions with officials who would be responding to the surveys to ensure the validity of the survey questions. For the state survey, we conducted pretests with seven states (Maryland, Nevada, Ohio, Oklahoma, New Hampshire, Montana, and Delaware) located in six EPA regions. For the landfill survey, we conducted pretests with commercial hazardous waste landfill operators in Texas and New York. As part of each pretest, we interviewed the respondents after they had filled out a survey to ensure that the questions were clear, unambiguous, and unbiased and that completing the survey would not place an undue burden on the officials completing it.
On the basis of the feedback from the pretests, we modified the questions, as appropriate. For the state survey, we received responses from 48 states and the District of Columbia. We did not receive responses from Alaska and Iowa because EPA has not provided these states with the authority to implement RCRA requirements, and EPA has the lead for all RCRA activities in these states. We received responses from 14 hazardous waste landfill operators in 7 companies that manage 15 of the 19 landfills. Two companies that manage four landfills chose not to participate in our survey. We also interviewed officials at EPA, Ohio’s Environmental Protection Agency, the Association of State and Territorial Solid Waste Management Officials, the Environmental Technology Council, the Northeast Waste Management Officials’ Association, the Chlorine Institute, and the four companies that retort mercury-contaminated debris. Our interviews included questions about the types of mercury-containing wastes that they classify as mercury-contaminated debris. To determine the controls that are in place to monitor compliance with EPA’s treatment and disposal requirements for mercury-contaminated debris, we conducted follow-up interviews with officials in 29 states (Alabama, Arizona, Arkansas, California, Connecticut, Delaware, Florida, Hawaii, Idaho, Illinois, Indiana, Louisiana, Maine, Minnesota, Mississippi, Missouri, Nebraska, Nevada, New Hampshire, North Carolina, New York, Ohio, Pennsylvania, Rhode Island, Texas, Vermont, Virginia, Washington, and Wisconsin) that had identified violations in the treatment and disposal of mercury-containing waste. Our interviews included questions about the type of mercury-containing waste involved in the violations that they reported, the type of business or industry that committed the violation, the way the violations were uncovered, and the type of enforcement actions taken. We also conducted Internet searches on mercury-containing waste violations and reviewed EPA’s requirements and policies for treating and disposing of mercury-contaminated debris and EPA documents related to the development of the debris regulations, such as Federal Register notices. We discussed the effectiveness of these requirements and policies for protecting human health and the environment with officials at EPA, representatives from hazardous waste landfills, Ohio’s Environmental Protection Agency, the Association of State and Territorial Solid Waste Management Officials, the Environmental Technology Council, the Northeast Waste Management Officials’ Association, the Chlorine Institute, and the four companies that retort mercury-contaminated debris. We performed our work between March 2005 and November 2005, in accordance with generally accepted government auditing standards, which included an assessment of data reliability and internal controls. This appendix provides additional information from RCRAInfo on activities related to the generation, treatment, and disposal of hazardous debris contaminated with mercury (mercury-contaminated debris) in the United States during 2001 and 2003. In the first section, we discuss activities related to the generation of mercury-contaminated debris, such as the states where debris was generated and the types of industries that generated the debris. In the second section, we discuss treatment and disposal activities related to mercury-contaminated debris, such as the quantity of mercury-contaminated debris treated and disposed of in each state. 
According to RCRAInfo data, many states generate mercury-contaminated debris. In 2001, 43 states and the District of Columbia generated 8,028 metric tons of mercury-contaminated debris. Nebraska, Ohio, and West Virginia generated about 59 percent of the total (about 4,771 metric tons). Figure 6 shows the quantity of mercury-contaminated debris generated by state in 2001. In 2003, according to RCRAInfo data, 45 states and the District of Columbia reported generating 3,966 metric tons of mercury-contaminated debris. Kentucky, Louisiana, Ohio, Arizona, and New York generated about 52 percent of the total (about 2,076 metric tons). Figure 7 shows the quantity of mercury-contaminated debris generated by state in 2003. According to RCRAInfo data, in 2001, about 95 percent of the total quantity of mercury-contaminated debris (about 7,589 metric tons) was generated by industries representing remediation and waste management services, manufacturing (such as the textile and metals industries), wholesale trade (such as businesses that sell mining products), mining (such as gold ore mining), and utilities (such as power generation and replacing water supply and sewage system equipment). Table 6 summarizes the total quantity of mercury-contaminated debris generated by type of industry in 2001. In 2003, according to RCRAInfo data, about 95 percent of the total quantity of mercury-contaminated debris (about 3,781 metric tons) was generated by industries representing manufacturing (such as the textile and metals industries), remediation and waste management services, educational services (such as colleges and universities), utilities (such as electric power generation and replacing water supply and sewage system equipment), and government activities. Table 7 summarizes the total quantity of mercury-contaminated debris generated by industry type in 2003. With respect to the process or activity that generated the mercury-contaminated debris, RCRAInfo's 2001 data show that about 50 percent of the debris (about 4,006 metric tons) came from ongoing production and service processes. Remediation of past contamination and other intermittent events or processes generated about 19 percent (about 1,529 metric tons) and 13 percent (about 1,073 metric tons), respectively. Table 8 provides more information on the types of processes and activities that generated mercury-contaminated debris in 2001. In 2003, according to RCRAInfo data, the majority of the mercury-contaminated debris came from ongoing production and service processes and other intermittent events or processes, about 25 percent (about 1,001 metric tons) and about 41 percent (about 1,639 metric tons), respectively. Table 9 provides more information on the processes or activities that generated mercury-contaminated debris in 2003. According to RCRAInfo data, 18 states treated and disposed of 10,484 metric tons of mercury-contaminated debris in 2001. Ohio and Nevada treated and disposed of about 86 percent of the total quantity of mercury-contaminated debris (about 8,979 metric tons). Figure 8 compares the quantity of mercury-contaminated debris treated and disposed of by state in 2001. In 2003, 26 states treated and disposed of 12,029 metric tons of mercury-contaminated debris, according to RCRAInfo data. Alabama, Missouri, Nevada, and Ohio treated and disposed of about 75 percent of the total quantity of mercury-contaminated debris (about 9,078 metric tons).
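The state and industry shares cited in this appendix are simple proportions of the RCRAInfo totals for each reporting year. A minimal sketch of that arithmetic follows, using the generation and disposal figures cited above as inputs; the share helper is purely illustrative.

```python
# Illustrative arithmetic behind the shares cited in this appendix: each share
# is a reported quantity divided by the RCRAInfo total for that year.

def share(quantity_tons: float, total_tons: float) -> str:
    """Return a quantity's share of the yearly total, formatted as a percentage."""
    return f"{quantity_tons / total_tons:.0%}"

# 2001 generation: Nebraska, Ohio, and West Virginia vs. the 8,028-metric-ton total.
print(share(4771, 8028))   # about 59%

# 2003 generation: the five leading states vs. the 3,966-metric-ton total.
print(share(2076, 3966))   # about 52%

# 2003 treatment and disposal: Alabama, Missouri, Nevada, and Ohio vs. 12,029 metric tons.
print(share(9078, 12029))  # about 75%
```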
Figure 9 summarizes the quantity of mercury-contaminated debris treated and disposed of in each state during 2003.

The survey completed by state hazardous waste officials included the following questions:

Q1. Which of the following hazardous debris treatment standards has your state implemented? (Response options included "Our state has no hazardous debris treatment standards.")

Q2. Assume that the items listed below are different types of mercury-containing hazardous waste (D009, U151). Does your state allow treatment and/or disposal of the following wastes using hazardous debris or hazardous waste treatment standards? (Listed items included other ruptured devices with particle size exceeding 60 mm (for example, regulator) and process residuals (for example, smelter slag).)

Q3. If there are other types of mercury-containing hazardous debris treated and/or disposed of in your state, please provide a brief description in the box below.

Q4. Does your state collect data on hazardous waste more frequently than required for EPA's Biennial Hazardous Waste Report?

The survey completed by commercial hazardous waste landfill operators included the following questions:

1. During the past 5 years has your landfill accepted any hazardous mercury-containing waste (D009, K071, K106, U151, P065, P092)? (Please check one.) Yes: please continue to Question 2 on the next page. No: you do not need to complete additional questions; please fax this page to 202-512-2502 or 202-512-2514, attention Diana Cheng. For your convenience, a fax cover sheet is on the last page.

2. Assume that the items listed below are different types of mercury-containing hazardous waste (D009, U151) and that these wastes were received from facilities that generated 1,000 kg or more of hazardous waste per month. Would your facility treat and/or dispose of any of the following wastes according to alternative debris treatment standards? (Please check one answer in each row.) The items included intact drums with at least 75% of their original volume; intact fluorescent light bulbs; ruptured fluorescent light bulbs; ruptured batteries with particle size exceeding 60 mm; ruptured batteries with particle size less than or equal to 60 mm; other intact devices (for example, thermometer, regulator); other ruptured devices with particle size exceeding 60 mm (for example, regulator); other ruptured devices with particle size less than or equal to 60 mm (for example, thermometer); and process residuals (for example, smelter slag).

3. In the space below, please add any comments you wish to make concerning your answers in Question 2. (11 respondents provided comments.)

4. During the past 5 years, has your landfill had any instances where you refused to accept mercury-containing debris? Yes: please go to Question 5. No or Uncertain: please go to "Instructions for Returning" at the bottom of this page.

5. (If Yes to Question 4.) Please describe those instances when your landfill refused to accept mercury-containing debris. If possible, please include a description of the material(s) involved and the reason(s) for refusing the material(s). (You may use the space below, or attach another page.) Reasons cited by respondents included one or more of the following: the landfill does not accept wastes such as fluorescent light bulbs, switches, and batteries containing a mercury concentration greater than 260 milligrams per kilogram (N=5); the material was mercury waste from medical, biological, or infectious waste (N=1); the landfill does not accept metallic mercury; and the landfill permit prohibits medical waste in the landfill.

In addition to the individual named above, J. Erin Lansburgh, Assistant Director; Diana Cheng; Anthony Fernandez; Richard P. Johnson; Jessica Marfurt; Lynn Musser; George Quinn; Kim Raheb; Carol Herrnstadt Shulman; and Jena Sinkfield made key contributions to this report.
The Environmental Protection Agency (EPA) is responsible for regulating hazardous wastes (such as mercury) under the Resource Conservation and Recovery Act (RCRA). Under RCRA, mercury-containing hazardous waste must meet specific treatment standards before land disposal. However, certain waste that is difficult to manage, due in part to its large particle size, can follow alternative "debris" standards that provide diverse treatment options. This report examines (1) the mechanisms that EPA uses to track the treatment and disposal of mercury-contaminated debris and the quantity of this waste, (2) the extent to which EPA, states, and industry share a common understanding of the types of mercury-containing wastes that can be treated and disposed of as debris, and (3) EPA and state controls that are in place to monitor compliance with EPA's treatment and disposal requirements for mercury-contaminated debris. EPA uses its RCRAInfo database to maintain information on all hazardous waste, including mercury-contaminated debris. EPA reported that in 2003, mercury-contaminated debris constituted about 12,000 metric tons—or about 0.4 percent of all mercury-containing waste and about 0.03 percent of all hazardous waste. However, EPA's data on mercury-contaminated debris may be incomplete. Reporting on the physical form of the waste (debris is one of many physical forms) is optional, and businesses did not submit this optional information in about 9 percent of instances when they reported treating and disposing of mercury-containing waste in 2003. In addition, EPA's reporting category for debris does not provide a complete list of items that EPA considers to be debris, and debris can be reported in other categories. The 48 states, the District of Columbia, and the 14 commercial hazardous waste landfill operators that responded to our surveys do not share a common understanding of the types of mercury-containing waste that EPA allows to be treated and disposed of as debris. For example, in their responses, officials in 21 states and operators of 6 commercial hazardous waste landfills identified as debris waste that is explicitly not debris, such as intact devices containing mercury, and may have used the debris regulations for such waste. Consequently, EPA cannot be certain that businesses are appropriately managing their mercury-containing waste as debris. EPA's mandatory waste tracking and documentation requirements serve as controls to monitor compliance with EPA's treatment and disposal requirements for mercury-contaminated debris. EPA and state oversight inspections and enforcement programs provide additional monitoring of compliance with the alternative treatment standards for debris.
Located just north of the equator in the Pacific Ocean are the two island nations of the FSM and the RMI. The FSM is a grouping of 607 small islands in the western Pacific that lie about 2,500 miles southwest of Hawaii (see fig. 1). The FSM has a total land area of about 270 square miles and is comprised of four states—Chuuk, Pohnpei, Yap, and Kosrae—with an estimated total 2000 population of 107,000, according to FSM officials. The RMI is made up of more than 1,200 islands, islets, and atolls, with a total land area of about 70 square miles. The Marshall Islands are located in the central Pacific about equidistant from Hawaii, Australia, and Japan. The Marshall Islands had a 1999 total population of 50,840, according to the RMI census. During the Second World War, the United States engaged in a Pacific campaign that liberated the islands of Micronesia from Japanese control. To administer these islands, the United Nations created the Trust Territory of the Pacific Islands in 1947. The United States entered into the trusteeship with the U.N. Security Council and became the administering authority of the four current states of the FSM, as well as the Marshall Islands, Palau, and the Northern Mariana Islands. The U.N. trusteeship agreement made the United States financially and administratively responsible for the region. In addition, the agreement, which designated this Trust Territory as a strategic trusteeship, granted the United States the ability to establish military bases, station armed forces, and close off any area of the Trust Territory for security reasons. During Senate consideration of this agreement, Secretary of State George C. Marshall, General Dwight D. Eisenhower, and Admiral Chester A. Nimitz, among others, remarked that the agreement gave the United States the complete and exclusive military control over the islands that was necessary to deny other militaries access to the islands and prevent their use as a springboard for aggression against the United States. In 1986, the United States entered into the Compact of Free Association with the FSM and the RMI. Through this Compact, the two Pacific Island nations became Freely Associated States and were no longer subject to U.S. administration under the U.N. Trust Territory of the Pacific Islands. The Compact, which consists of separate international agreements with each country, was intended to achieve three principal goals: (1) secure self- government for each country; (2) assist the FSM and the RMI in their efforts to advance economic development and self-sufficiency; and (3) ensure certain national security rights for the FSM, the RMI, and the United States. The defense and security relationships between the United States and the FSM and the RMI are governed by Title Three of the Compact of Free Association and three Compact-related agreements—the Status of Forces Agreement, the Military Use and Operating Rights Agreement, and the Mutual Security Agreement. The provisions of Title Three expire in 2001, but they can remain in effect during a 2-year negotiating window that ends in 2003. If Title Three is not renewed by 2003, the Mutual Security Agreement enters into force and preserves key aspects of the defense and security relationship between these countries. There are four primary U.S. defense rights and responsibilities contained in Title Three of the Compact and the Military Use and Operating Rights Agreement between the United States and the RMI (see app. 
II for a listing of other defense provisions contained in Title Three of the Compact that are due to expire in 2001): Title Three obligates the United States to defend the FSM and the RMI against an attack or the threat of attack in the same way it would defend itself and its own citizens. According to officials at DOD, this defense guarantee is stronger than the U.S. commitment to defend its North Atlantic Treaty Organization (NATO) allies from outside aggression. If no agreement is reached with the FSM and the RMI on extending Title Three’s defense provisions, the United States retains a lesser, albeit significant, obligation to defend the islands through its mutual security agreements with each country. Title Three provides the United States with the right of “strategic denial,” the ability to prevent access to the islands and their territorial waters by the military personnel of other countries or the use of the islands for military purposes. This right does not expire with Title Three, because the mutual security agreements between the United States and the FSM and the RMI contain this right. Title Three also grants the United States a “defense veto” over actions by the governments of the FSM or the RMI that the United States determines are incompatible with its authority and responsibility for security and defense matters in these countries. Unlike the U.S. obligation to defend the islands and the right of strategic denial, the U.S. defense veto will expire in 2001 unless Title Three of the Compact is extended (or 2003 if negotiations are ongoing for an additional 2 years). Finally, through the Military Use and Operating Rights Agreement with the RMI, the United States secured continued access to military facilities on Kwajalein Atoll. At the time the Compact was negotiated (1969-1986), the United States was concerned about the use of the FSM and the RMI as springboards for aggression against the United States, as they were in World War II, and the Cold War incarnation of this threat—the Soviet Union. Australia, New Zealand, and the United States practiced a coordinated Pacific-wide policy of strategic denial. This policy was successful in preventing the Soviet Union from establishing a diplomatic mission in the Pacific islands until 1990, when it did so in Papua New Guinea, and limited Soviet efforts to establish economic ties and enter into commercial fishing agreements. The United States and its allies blocked these diplomatic and economic efforts by the Soviet Union out of concern that closer relations with Pacific island governments could eventually lead to Soviet political involvement in and military access to the region. However, since the Cold War ended, the security environment in the Asia Pacific region has changed. The coordinated Pacific-wide policy of strategic denial ended with the dissolution of the Soviet Union; and the United States does not exhibit the same degree of concern about the influence of other foreign governments in the Pacific islands today. For example, China has seven embassies in Pacific Island countries, conducted $168 million worth of bilateral trade with the South Pacific in 1999, reportedly provided millions of dollars in economic assistance, and built a civilian space launch tracking facility in Kiribati—an island nation southeast of the FSM and the RMI. Taiwan also has a presence in some Pacific Island nations through diplomatic and economic ties and annual port visits by navy cadets. 
However, while China and Taiwan may have made greater diplomatic and economic inroads into the Pacific than the Soviet Union did, they lack the military power projection capabilities that defined the Soviet threat. The former Soviet Union was considered an expansionist superpower with a large “blue water,” or ocean-going, navy that was oriented toward the Pacific and capable of threatening the United States and its allies. In contrast, China, for instance, is currently considered to be a regional military power without a developed blue water naval capability or power projection capabilities that extend out far beyond its coastal waters. While the United States has maintained facilities on Kwajalein Atoll for military use, it has not exercised its other primary defense rights nor has it been required to fulfill its responsibilities contained in the Compact: (1) it has not had to defend the FSM and the RMI from an attack or the threat of an attack; (2) it has not invoked its right to deny access to the islands by foreign militaries or for military purposes; and (3) it has never had to veto an action by either the FSM or the RMI because the action was incompatible with the U.S. responsibility and authority for defense and security matters. As a result, these provisions remain untested. The United States has made extensive use of its access rights on Kwajalein Atoll in the RMI, which it secured through the Military Use and Operating Rights Agreement with that country (see fig. 2). The United States regularly conducts intercontinental ballistic missile (ICBM) tests, missile defense tests, and space tracking operations from facilities on the atoll that are under the authority of the U.S. Army (see fig. 3). Several ICBM tests are held annually. Regarding missile defense testing activity, a seventh national missile defense test was held in December 2001. Finally, equipment on the atoll is used for space-related activities such as observing space objects and tracking foreign launches (see app. III for more detailed information on U.S. operations and facilities on Kwajalein Atoll). According to DOD officials, the United States has never had to defend the FSM or the RMI. DOD and Department of State officials have also stated that the United States has never invoked its right of strategic denial or utilized its defense veto. However, a May 2001 port visit by three Taiwanese naval vessels in the RMI almost provided a test of these two provisions. In January 2001, the government of the RMI sought approval from the U.S. government for a 3-day port visit by the Taiwanese ships. The United States denied this request in a diplomatic note but did so without mentioning the strategic denial or defense veto provisions of the Compact. Even though the United States did not cite these provisions in its written denial, the RMI, in its reply, argued that the strategic denial and defense veto provisions were not appropriate in this case and that the government’s ability to conduct its own foreign relations must be respected. The United States dropped its objection to the proposed visit following this appeal and a February 2001 port visit by these same ships in Palau. (See app. IV for time lines detailing the history of the four principal Compact defense provisions.) Continued access to the Kwajalein Atoll in the RMI is the compelling U.S. defense or security interest in the FSM and the RMI that U.S. officials have identified. U.S. 
facilities located on Kwajalein complement the geographic characteristics that have helped to make the atoll an important part of U.S. ICBM testing, missile defense testing, and space surveillance operations. From a broader regional security perspective, the FSM and the RMI are not currently strategically important to the United States. In addition, other defense and security interests cited by U.S. officials, such as the right of strategic denial, the proximity of vital transit routes, and support in the United Nations, have been overstated. Senior U.S. policymakers agree that continued access to missile testing and space-tracking facilities on the Kwajalein Atoll in the RMI is the most important U.S. defense interest in the FSM and the RMI. DOD has described the U.S. Army facility on Kwajalein Atoll, known as the Ronald Reagan Ballistic Missile Defense Test Site, as an important and unique national asset that would be difficult and expensive to replace. In addition, the DOD agency responsible for missile defense testing, the Ballistic Missile Defense Organization (now the Missile Defense Agency), has determined that currently no acceptable alternative site exists for missile defense testing against ICBM class threats. The atoll has been the test site for ballistic missile systems for decades. The facility, which is one of two sites listed in the 1972 Anti-Ballistic Missile (ABM) Treaty between the United States and the Soviet Union, is used for long-range missile defense testing among other missions. According to the U.S. Army Space and Missile Defense Command (SMDC), the testing range’s remote ocean location in a sparsely populated area provides an acceptable environment for ballistic missile testing with minimal environmental impact; and the atoll’s location near the equator is beneficial for space object and foreign launch observation. To support missile testing activities, Kwajalein Atoll has become the home to sophisticated radar, optics, and telemetry equipment (see fig. 4). From a more regional or global point of view, the FSM and the RMI currently play no role in the execution of U.S. defense and security strategy. The East Asia Strategy Report, published periodically by DOD since 1990, refers to these countries as U.S. defense obligations, not as U.S. defense assets. Congressional hearings on U.S. defense and security issues in the Asia Pacific region since 1997 have been devoid of references to these countries. In addition, the United States has never officially responded to an offer the FSM, Guam’s neighbor, made in 1998 to preposition military forces in its territorial waters. Portions of a 1999 DOD Assessment of U.S. Defense and Security Interests in the region provided to us by the FSM and the RMI also concluded the United States has no current requirement to preposition equipment in either of these countries. Finally, the former and current ambassadors to these countries, as well as representatives from DOD’s Pacific Command, have told us that the FSM is no longer strategically important to the United States, while the RMI only remains important because of Kwajalein. In 2001, two reports called for increasing the U.S. presence in the Western Pacific, but neither offered any definite role for the FSM or the RMI. DOD’s 2001 Quadrennial Defense Review, released on September 30, sets out a new strategic vision for defense planning purposes. The review noted the U.S. 
overseas presence posture, concentrated in Western Europe and Northeast Asia, was inadequate for the new strategic environment in which U.S. interests are global and potential threats are emerging in other areas of the world. The report called for, among other things, increasing U.S. presence in the Western Pacific. As a result, the Navy will increase its aircraft carrier presence in the Western Pacific and explore basing options for an additional three to four naval combat vessels and guided cruise missile submarines in that area, while the Air Force will ensure that sufficient refueling and logistics support capabilities are in place. DOD has stated that the FSM and the RMI may not ultimately be involved in any of the above decisions. A 2001 RAND report on U.S. force posture in Asia reached some of the same conclusions as the Quadrennial Defense Review when it highlighted Guam as the most suitable location for an increased U.S. Air Force presence in the region. In addition to Kwajalein, U.S. policymakers have cited three main U.S. interests in the FSM and the RMI: strategic denial, sea lines of communication, and support from the FSM and the RMI for U.S. positions in the U.N. General Assembly. However, assessments concerning strategic denial and its contribution to U.S. security are mixed. Furthermore, our analyses concluded that the effect of strategic denial, the importance of sea lines of communication in the region, and the degree of support received from the FSM and the RMI for U.S. positions in the United Nations have been overstated. First, there is a lack of consensus about the value of strategic denial in the post-Cold War era. Different elements of DOD and the Department of State have offered a range of opinions on the subject, calling the policy everything from "essential" to "irrelevant." In the past 3 years, strategic denial has been described as "essential" to counter future uncertainty in the region, by the Office of the Assistant Secretary of Defense for International Security Affairs; "a very real interest," if not as urgent as during the Cold War, by the Assistant Secretary of State for East Asia and the Pacific; "a prudent insurance policy" for U.S. security in the Pacific, by the Department of State's Office of Compact Negotiations; and "a policy of the past" that is "irrelevant now with the end of the Cold War," by the Commander in Chief, Pacific Command. Furthermore, statements that have overstated the scope of strategic denial raise questions about the value assigned to this U.S. right and its contribution to U.S. defense and security interests. Strategic denial covers only the land and the 12-mile territorial waters around each island of the FSM and the RMI (see fig. 5). The geographic limits of strategic denial were defined by section 461(c) of the Compact, which states that the FSM and the RMI include the land and water areas to the outer limits of the territorial sea and air space above such areas as recognized by the United States. The United States, as a result of its acceptance of most of the provisions in the 1982 U.N. Convention on the Law of the Sea as customary law, recognized the 12-nautical-mile limit for the FSM and the RMI's territorial seas and therefore for strategic denial. However, various statements by U.S. and foreign officials have described strategic denial as exclusive U.S. military control over a large, contiguous area of the Pacific Ocean.
Specifically: An official in the Office of Compact Negotiations at the Department of State described strategic denial as "the most significant U.S. interest at the time the Compact was negotiated" because of the value placed on denying military access to "over half a million miles of the Pacific Ocean between Hawaii and Guam" in a paper presented at the 2001 Island State Security Conference. The Assistant Secretary of State for East Asia and the Pacific testified at a 1998 congressional oversight hearing on the Compact that strategic denial "means taking a vast stretch of the Pacific and maintaining U.S. military control and ensuring we could deny access to the ships of other countries." A staff briefing paper submitted for the record during the same 1998 hearing stated that the U.S. right of strategic denial and defense veto gave the United States "exclusive military rights and legal defense veto over third party use of any land, ocean, or airspace of the islands." This paper stated that strategic denial included the islands' 200-mile exclusive economic zone, "an area larger than the continental United States" (see fig. 5). The RMI Minister of Foreign Affairs and Trade testified in the 1998 hearing that the RMI "provides the United States strategic denial rights over 1 million square miles of the Central Pacific." Finally, in a paper presented at the 2001 Island State Security Conference, the Executive Director of the Joint Committee on Compact Economic Negotiations for the FSM stated that "strategic denial and the defense and security concessions in the Compact established an internationally recognized U.S. zone of influence covering the 1,000,000 square miles of the FSM's exclusive economic zone in the western Pacific." These statements, if taken literally, overstate not only the scope but also the effect of strategic denial. While the right of strategic denial prohibits third countries from establishing land-based operations in the FSM and the RMI, the United States cannot use this right to prevent ships from conducting military activities outside of the 12-mile territorial waters of these countries. For example, in the mid-1980s and early 1990s, there were numerous reports of Russian trawlers collecting information in the waters around Kwajalein. Further, the United States recognizes that under international law and custom, military vessels have a right to "innocent passage" through the coastal waters of the islands. According to DOD and the Department of State, these rights are identical to those that the United States exercises in its own territorial waters. However, Department of Defense officials have noted that in denying third-country access to land facilities, the right of strategic denial limits the ability of other nations to undertake long-term naval operations in the area, and makes activities in the region, such as surveillance, more costly. The importance of sea lines of communication, or sea routes, that run near or through the FSM and the RMI is another area in which the value of U.S. interests has been overstated. While U.S. policymakers have stated that the critical commercial and military transit routes run near or through the FSM and the RMI, there is evidence to the contrary. Officials from the Department of State and the U.S. Army in the RMI told us that one of Kwajalein's positive qualities was its isolated location, away from commercial shipping lines. In addition, a 1992 analysis of U.S.
defense interests in the Pacific Islands stated that the FSM and the RMI lie well to the south of many north Pacific sea and air lines in peacetime; it is only when these north Pacific lines are threatened that air and sea movements would shift southward to minimize adversary interdiction. Our analysis of U.S. trade flows in the Pacific supports these two assessments. Of the less than 23 percent of total U.S. trade that crosses the Pacific, more than 61 percent (or about 14 percent of total U.S. trade) involves Japan, China, Taiwan, and Korea, all of which lie north of the FSM and the RMI. Other discussions of Pacific sea lines by U.S. officials and policy analysts have concentrated on chokepoints in Southeast Asia (see fig. 6). An analysis of these chokepoints in a 1996 National Defense University publication stated that in the event all the strategic straits in Southeast Asia are closed or blocked, trade flows originating from the Middle East and South Asia could be rerouted south of Australia. Depending on the final destination of these goods, the rerouted ships could possibly pass near or through the FSM. Although the chokepoints analysis does not specifically illustrate how U.S. trade flows from this area would be affected, it appears they would transit south of the FSM and the RMI in this scenario. Finally, the level of support from the FSM and the RMI for U.S. positions in the U.N. General Assembly has been overestimated. Although U.N. voting does not directly relate to U.S. defense and security interests, U.S. government officials consistently referred to the support of these countries in the United Nations as one aspect of the strategic importance of these countries to the United States. In fact, an official in the Department of State's Bureau of International Organizations called the FSM "the number one friend of the United States at the United Nations," while the RMI was referred to as "one of the better members" of the General Assembly. These assessments were based on measures of voting coincidence that appeared in the 2000 edition of the department's report to Congress, Voting Practices in the United Nations. In 2000, the FSM was said to have voted with the United States 100 percent of the time, while the RMI was credited with casting an identical vote about 74 percent of the time. However, the Department of State's methodology does not take into consideration those occasions when the countries were absent or abstained from voting. Including these absences and abstentions drops the countries' voting coincidence numbers to about 54 percent and 52 percent, respectively (see table 1). While these countries have agreed with the United States about as often as the average NATO country, their support on a few issues identified as important by the Department of State (such as votes involving the Middle East and other issues on which the United States is often isolated), together with the numbers reported in the Department of State report, has led to a perception of much stronger support than our analysis indicates (see app. V for more discussion of the Department of State report Voting Practices in the United Nations). The ongoing Compact negotiations have resulted in agreements in principle between the United States and the FSM and the RMI, respectively, to continue their existing defense and security relationships. Without a renewal of the Compact's defense provisions, one of the four primary U.S.
defense rights and responsibilities will completely expire at the end of the negotiating period in 2003—the U.S. defense veto. U.S. officials believe that continued economic assistance is important to reaching a final agreement on renewing the Compact’s defense provisions, providing a favorable environment for the United States to exercise its defense rights, such as strategic denial and Kwajalein access, and advancing U.S. interests. All parties to the current Compact negotiations have expressed their intent to preserve the status quo on defense and security matters. During negotiations with each country, the United States and the FSM and the RMI, respectively, have issued joint statements calling for the continuation of the defense and security relationship set forth in Title Three of the Compact. If such an agreement is reached, the U.S. defense veto would be extended as well as the U.S. obligation to defend these countries as the United States defends itself and its citizens. If an agreement on economic assistance is not reached by 2003, the defense veto will expire; and the United States will retain a lesser, albeit still significant, obligation to defend the FSM and the RMI. According to a representative from DOD’s Pacific Command, U.S. defense interests would not be hurt by the loss of the defense veto. Finally, the United States has already secured continued access to Kwajalein through 2016, by exercising its option to unilaterally extend the Military Use and Operating Rights Agreement with the RMI. Officials from the Department of State’s Office of Compact Negotiations have indicated that the agreement in principle to extend the defense and security provisions contained in Title Three is part of a package (as indicated in the joint statements signed by the parties) that would also include continued U.S. economic assistance, as well as various other measures, such as increased accountability over the use of Compact funds. In addition, statements from both DOD and the Department of State have described linkages between continued economic assistance and the ability of the United States to exercise its defense rights. A June 2001 statement, by a representative from the Department of State’s Office of Compact Negotiations, argued that continued economic assistance was justified by U.S. interests such as strategic denial, political and economic stability, support for U.S. positions in international and regional organizations, access to Kwajalein, and the need to provide a positive context for the United States to exercise its defense rights. Similarly, in a June 2000 congressional hearing on the Compact, an official from the Office of the Assistant Secretary of Defense for International Security Affairs, stated that providing continued Compact assistance was in the best interest of the United States because it helps preserve access to key defense interests for our forces while denying potentially hostile forces access to U.S. economic and defense interests in the region. Finally, the Executive Director of the Joint Committee on Compact Economic Negotiations for the FSM, has stated that the defense rights delegated to the United States under the Compact are linked to the economic assistance provided by the United States. 
Furthermore, it is the FSM’s position that the economic, political, and security goals of the Compact are closely interrelated; thereby making continued economic assistance an important part of the sustained political development and economic advancement necessary to attain the mutual security goals of the FSM and the United States. We provided a draft of this report to DOD, the Department of State, and the Department of the Interior, as well as the governments of the FSM and the RMI, for comment. The Departments of State and the Interior chose not to provide comments on the draft report. Regarding its decision not to submit comments, the Department of State said that it had been working with us since June 2001, when we presented this material in briefings to congressional staff, and had, during that time, made its views on U.S. defense interests in the FSM and the RMI known. DOD emphasized in its comments that the U.S. right to exclude third-country militaries from the territory of the FSM and the RMI remains an important one due to future uncertainty about events in the region. It also noted that it would be unwise to assume that the end of the Cold War has lessened the strategic importance of Micronesia to U.S. interests. In our response to DOD’s letter, we cite a passage from a DOD assessment that states that the strategic importance of the FSM and the RMI has in fact lessened over the past 50 years. The FSM government also disagreed with our conclusion that the FSM currently lacks broad strategic importance for the United States and that the importance of certain security interests involving the FSM has been overstated. In its comments, the RMI government stressed its strategic significance and historic contribution to the United States as a site of nuclear test and argued that the rights granted to the United States under the Compact have been significant. The RMI government also expressed the view that we have not properly characterized the relationship between the United States and the U.N. Trust Territory of the Pacific Islands at the time of Compact negotiations and thus had overemphasized the U.S. desire to address Cold War concerns in the Compact, while de-emphasizing the role other issues played in the negotiations. We disagree with most points made by the FSM and RMI governments; and, in responding to comments from these two countries, have made reference to report passages that support our views. The RMI government also stated that we should distinguish between economic assistance provided to the FSM and the RMI. We agree, and have provided separate assistance figures for each country. Comments received from DOD, as well as the FSM and RMI governments, and our assessments of them are included in appendixes VI through VIII. We are sending copies of this report to the Secretary of Defense, the Secretary of State, the Secretary of the Interior, the President of the FSM, the President of the RMI, and interested congressional committees. We will also make copies available to other interested parties upon request. If you or your staff have any questions regarding this report, please call me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix IX. 
In June 2001, we briefed the staffs of the Chairman of the House Committee on Resources, the Ranking Minority Member of the House Committee on International Relations, the Chairman of the Subcommittee on East Asia and the Pacific, House Committee on International Relations, and Congressman Bereuter on defense and security issues related to the 1986 Compact of Free Association and the ongoing negotiations taking place between the United States and the Federated States of Micronesia (FSM) and the Republic of the Marshall Islands (RMI). Specifically, our briefing addressed (1) whether and how the United States has exercised its defense rights and fulfilled its defense responsibilities under the Compact, (2) the current U.S. defense and security interests in the FSM and the RMI, and (3) defense and security issues that are being addressed in the ongoing Compact negotiations. Since June, we have conducted additional audit work in response to questions raised during those briefings. These questions prompted us to address the uniqueness of the U.S. obligation to defend these islands, the influence of foreign governments in the region, and the utility of some defense provisions in the current Asia-Pacific security environment. To address this objective, we reviewed the Compact’s defense provisions (Title Three) as well as the related defense agreements (the mutual security agreements, the military use and operating rights agreements, and the Status of Forces Agreement) and discussed these documents with Department of Defense (DOD) and Department of State officials to identify the principal defense and security provisions. We also reviewed the congressional hearing record on the Compact, going back to 1984 oversight hearings, to determine the specific defense and security provisions that were focused on in statements and discussions. We then discussed the degree to which the U.S. government has invoked its defense rights or discharged its defense responsibilities with these same officials (DOD agencies interviewed included the Office of the Secretary of Defense, the Office of the Joint Chiefs of Staff, the U.S. Pacific Command, the U.S. Army Space and Missile Defense Command, and the Ballistic Missile Defense Organization. Department of State offices interviewed included the Bureau of East Asia and Pacific Affairs, the Office of the Legal Advisor, and the Office of Compact Negotiations). We also interviewed Department of State officials and reviewed Department of State documentation pertaining to the recent visit of Taiwanese ships to the RMI and Palau. To review U.S. operations at Kwajalein Atoll, we visited the atoll islands of Kwajalein and Roi-Namur in April 2000 to tour the facilities and discuss DOD activities on the islands with U.S. government officials, including the range Commander, as well as contractor personnel. We also visited the nearby island of Ebeye and toured facilities built by the U.S. military, as well as housing used by relocated mid-atoll Marshallese citizens. We also discussed Ebeye development projects involving schools, hospitals, and infrastructure improvements, with local development authority officials and reviewed associated documentation. To conduct our work in this area, we discussed U.S. defense and security interests in the FSM and the RMI with officials from DOD, the Department of State, the Central Intelligence Agency, and the Defense Intelligence Agency. Furthermore, we obtained the views of the former U.S. Ambassador to the FSM and the current and former U.S. 
ambassadors to the RMI. We reviewed DOD reports (2001 Quadrennial Defense Review, East Asia Strategy Reports and classified assessments of U.S. defense and security interests in the countries), statements by the Commander in Chief of the U.S. Pacific Command, congressional testimony on the Compact (from 1984 through 2000) and U.S. defense interests in the Asia Pacific region (from 1998 through 2000), and the 2001 RAND report entitled The United States and Asia: Toward a New U.S. Strategy and Force Posture. We also reviewed the legislative history of the Compact. For our examination of the scope and effect of strategic denial, we reviewed the 1982 United Nations (U.N.) Convention on the Law of the Sea, received a legal interpretation of the relevant Compact provision from DOD and the Department of State, located and examined statements in the congressional record, interviewed officials from DOD and the Department of State, and worked with the National Imagery and Mapping Agency to produce a map of the territorial boundaries of the FSM and the RMI. For our examination of important sea lines of communication, we reviewed statements from congressional hearings on the Compact that referred to these sea routes; analyzed Department of Commerce data on U.S.-Pacific trade flows (2000 data on total trade by U.S. Pacific ports); explored the issue of chokepoints in academic papers, government documents, and Chokepoints: Maritime Economic Concerns in Southeast Asia; and studied works on U.S. interests in the region written by former U.S. officials, such as The United States and the Pacific Islands, by John Dorrance. Finally, for our examination of voting in the U.N. General Assembly, we analyzed the data on the voting behavior of the FSM and the RMI contained in the Department of State’s annual report Voting Practices in the United Nations and data from the United Nations on voting margins, and interviewed current and former Department of State officials from the Bureau of International Organization Affairs, the Bureau of East Asia and Pacific Affairs, and the U.S. Mission to the United Nations. To address our third objective, we examined the Compact and its related agreements to determine the status of certain defense provisions after 2001; interviewed officials from DOD and the Department of State as well as the FSM and RMI governments; and reviewed joint communiqués issued in 2001 by the U.S. government and the governments of the FSM and the RMI, regarding negotiating principles related to defense matters. We performed our work at various points from December 1999 through October 2001, simultaneously with our efforts for other related assignments. Our work was conducted in accordance with generally accepted government auditing standards. As mentioned in this report, the defense veto contained in the Compact’s Title Three is due to expire in 2001 (or 2003 if negotiations continue for an additional 2 years). In addition to this provision, Title Three contains other provisions that are due to expire in 2001 or 2003. These include provisions stating that: The government of the United States shall not, in the FSM or the RMI, test by detonation or dispose of any nuclear weapon, nor test, dispose of, or discharge any toxic chemical or biological weapon. The government of the United States may invite members of the armed forces of other countries to use military areas and facilities in the FSM or the RMI, in conjunction with and under the control of U.S. armed forces. 
If, in the exercise of its authority and responsibility under Title Three, the government of the United States requires the use of areas within the FSM or the RMI in addition to those for which specific arrangements are concluded, it may request the government concerned to satisfy those requirements through leases or other arrangements. The FSM or RMI governments shall sympathetically consider any such request and shall establish suitable procedures to discuss it with and provide a prompt response to the U.S. government. The government of the United States shall provide and maintain fixed and floating aids to navigation in the FSM and the RMI at least to the extent necessary for the exercise of its authority and responsibility under Title Three. Subject to the terms of the Compact and related agreements, the government of the United States, exclusively, shall assume and enjoy, as to the FSM and the RMI, all obligations, responsibilities, rights and benefits of any defense treaty or other international security agreement applied by the government of the United States as Administering Authority of the Trust Territory of the Pacific Islands as of the day preceding the effective date of the Compact and any defense treaty or other international security agreement to which the government of the United States is or may become a party that it determines, after appropriate consultation with the FSM or RMI government, to be applicable to the FSM or the RMI. Any citizen of the FSM or the RMI shall be eligible to volunteer for service in the armed forces of the United States. (Of note, volunteers must meet the required mental, physical, and moral qualifications to join the U.S. armed forces. For 1998, 42 FSM citizens and 8 RMI citizens enlisted in the U.S. Army). The government of the United States shall have enrolled, at any one time, at least two qualified students, one each from the FSM and the RMI, in each of the U.S. Coast Guard Academy and the U.S. Merchant Marine Academy. The governments of the United States and the FSM or the RMI shall establish two Joint Committees empowered to consider disputes under the implementation of Title Three and its related agreements. In addition to these Title Three provisions, the Military Use and Operating Rights Agreement with the FSM, which authorizes up to four Civic Action Teams (CAT) in the FSM, will also expire in 2001. There are currently three CATs in the FSM. CATs are to conduct activities that focus on the special development needs of the country and are to provide training for the local population in general engineering skills. CAT teams work on projects (such as roads and school improvements) that the host governments identify. The teams are composed of one officer and 12 enlisted men and are shared between the Army, Navy, and Air Force. The CAT team budget for fiscal year 2001 was close to $2 million, according to a DOD official. This official told us that while CAT costs are to be shared between the United States and host governments, the United States has not been receiving the required funds from the FSM. The U.S. government has raised concerns that CAT teams are idle too much of the time and work on projects that quickly fall into disrepair. The United States has maintained a military presence in the Marshall Islands for several decades, and DOD currently conducts ballistic missile and missile defense testing on Kwajalein Atoll. U.S. equipment on the atoll also allows for space observation, identification, and tracking activities. The U.S. 
government, which provides funding to the landowners of the Kwajalein Atoll through the RMI government and is a key employer in the RMI, has also experienced some difficulties with the local Marshallese population. No recent studies have been completed regarding whether there is an acceptable alternative site to Kwajalein Atoll for all U.S. defense-related activities conducted there. The United States has had a military presence on the Marshall Islands in the central Pacific Ocean since liberating the islands from the Japanese in 1944 in Operation Flintlock. The U.S. government conducted nuclear tests in these remote islands near the equator during the 1940s and 1950s, and a military base was constructed on Kwajalein Atoll to support this testing (see fig. 7). In 1959 Kwajalein was selected as the testing site for the NIKE-ZEUS Anti-Missile System. In 1964, control of this missile testing range was transferred from the U.S. Navy to the U.S. Army. During the 1960s and 1970s, the range was used to test rocket systems such as NIKE-ZEUS, Sprint/Spartan, and Minuteman. Kwajalein Atoll is currently home to missile and missile-defense testing and space tracking facilities that use land provided to the U.S. government under a Compact-related agreement, the Military Use and Operating Rights Agreement with the RMI. In September 1999, the U.S. government exercised its right to unilaterally extend the agreement, giving the United States access to Kwajalein Atoll until 2016. The RMI government pays Kwajalein Atoll landowners for U.S. use of the atoll. Most U.S. equipment is located on the Kwajalein Atoll islands of Kwajalein and Roi-Namur, though some equipment is maintained on five other islands in the atoll. The U.S. testing range on Kwajalein is under the authority of the U.S. Army Space and Missile Defense Command (SMDC), and as of June 2001, it became officially known as the Ronald Reagan Ballistic Missile Defense Test Site. The test site, which is government owned and contractor operated, is home to about 75 U.S. government personnel as well as about 1,600 contractor staff and 1,000 family members. SMDC estimates that the facility represents a $4 billion investment. The U.S. range on Kwajalein Atoll is used for intercontinental ballistic missile (ICBM) testing (see fig. 8). One ICBM system that currently uses Kwajalein for testing is the Minuteman III. Three tests per year of this system, which was developed in the late 1950s, occur at Kwajalein. Another ICBM system is the Peacekeeper (one test per year), the newest U.S. ICBM strategic weapon system. Furthermore, this range is used for long-range missile defense testing. Seven national missile defense tests have been conducted, with the most recent test in December 2001. Range equipment is also used to conduct space observation, identification, and tracking activities. The range has provided more than 32,000 observations for updating the catalog of near-earth and deep-space objects. It also responds to assignments for the tracking of new foreign launches (commercial and military, announced and unannounced) and provides radar images of high-interest satellites. The facility also supports the National Aeronautics and Space Administration's manned and unmanned space operations and experiments. To support these activities, the missile range at Kwajalein possesses a unique collection of technical equipment. The core of the range's instrumentation is the Keirnan Reentry Measurements Site, a sophisticated radar suite.
The radar sensors are located on the island of Roi-Namur. Data are collected across the radar frequency spectrum with a high degree of accuracy and are analyzed by the Massachusetts Institute of Technology's Lincoln Laboratory and other facilities. The Kwajalein range also has ground-based optics such as tracking instruments, ballistic cameras, and documentary photography systems. In addition, 12 antennas are used to receive, record, and process flight data. Furthermore, the range has a deep-water acoustic sensor array located in the ocean area off the east reef of Kwajalein Atoll that can determine the precise location of reentry vehicle impacts. A submersible vehicle is also available to locate debris within the Kwajalein lagoon. Finally, the range has a launch site on Meck Island, with additional launch facilities on other RMI islands as well as on Wake Island. The U.S. military presence on Kwajalein Atoll has led to tension with the Marshallese population on the atoll over the years. For example, there were four periods of protests by Marshallese landowners of the Kwajalein Atoll prior to enactment of the Compact. According to an SMDC official, these protests occurred because landowners were concerned that (1) the U.S. government was not paying enough for its use of various Kwajalein Atoll islands and (2) following enactment of the Compact, all payments to landowners and future negotiations regarding use of the atoll would be conducted on a government-to-government basis, bypassing any direct dealings with landowners. During the protests, the landowners occupied Kwajalein Atoll islands, including Kwajalein and Roi-Namur. While no major missile tests were delayed or cancelled as a result of the protests, two other test missions scheduled for August 1979 were cancelled. Protests during the 1980s reportedly disrupted the community on Kwajalein and put a strain on security forces. Furthermore, the range is a top employer in the Marshall Islands, with about 1,400 Marshallese employed at the facility and earning a higher wage than is reportedly available elsewhere in the country. The Marshallese employed at the range are generally not permitted to live on Kwajalein Island and so live on the small nearby island of Ebeye. In addition, in 1965 the U.S. Army relocated Marshallese citizens living on mid-atoll islands to Ebeye so that ballistic missile testing could be conducted more safely within the mid-atoll area. Ebeye is severely overcrowded, with more than 9,300 people living on about 90 acres of land (see fig. 9). Efforts to improve the quality of life on the island, such as the provision of electricity and potable water, have experienced failures in recent years. Conditions on the island have reportedly deteriorated over the last decade, though numerous efforts are now being planned or are under way to improve the quality of life on Ebeye. In 1979, DOD conducted an analysis of possible locations for relocating U.S. facilities on Kwajalein Atoll. Key criteria used to determine the best alternative site were political supportability, land availability, and population distribution. DOD determined that the Northern Mariana Islands were the best alternative to Kwajalein Atoll for establishing a major DOD test range, with an estimated investment cost of over $2 billion (in 2000 dollars). Other alternative sites were located in the state of Chuuk in the FSM and in Kiribati, located southeast of the RMI. DOD officials now view this study as outdated.
Since 1979, DOD has not conducted a detailed study examining potential alternative sites for all the activities undertaken at the U.S. facility on Kwajalein. However, the DOD agency responsible for missile defense testing, the Ballistic Missile Defense Organization, has conducted some analysis of alternatives to Kwajalein and has determined that currently no acceptable alternative site exists for missile defense testing against ICBM-class threats. The United States acquired four primary defense and security rights and responsibilities as a result of the Compact of Free Association and its related agreements: (1) the obligation to defend the FSM and the RMI against attack; (2) the right to deny access to foreign militaries and foreign military activity (strategic denial); (3) the right to prevent the FSM and the RMI governments from acting in a way that is incompatible with U.S. authority and responsibility for defense and security matters (defense veto); and (4) the right to use certain land (i.e., Kwajalein Atoll in the RMI) for military purposes. Figures 10 through 13 describe the legal provisions in which these rights and responsibilities are contained, the extent to which each provision has been used since the Compact was enacted in 1986, and what happens to each provision if agreement is not reached on its renewal before the negotiating period ends in 2003. While the support of the FSM and the RMI for U.S. positions in the United Nations is not directly related to U.S. defense and security interests, U.S. officials cite this support and support in other international fora as a reason why these islands are strategically important to the United States. The primary source that officials refer to in these statements is the Department of State's annual report Voting Practices in the United Nations. This report, which has used a consistent methodology to compare the votes cast by countries in the U.N. General Assembly with U.S. votes, has been incorrectly interpreted and used to overstate the level of support provided by the FSM and the RMI. In interpreting this report, officials have excluded instances in which these countries abstained or were absent and have overlooked the report's cautionary message in its methodology section. The report indicates that abstentions and absences are often difficult to interpret, but they make a mathematical difference, sometimes major, in the voting coincidence results. The case of Palau, a country near the FSM and the RMI, illustrates this point. An official in the Department of State's Bureau of International Organization Affairs characterized Palau as the number-two friend of the United States in the General Assembly because of its 100-percent voting coincidence with the United States in 2000. However, this percentage is based on just 11 identical votes cast out of a possible 65 votes (about 17 percent) because Palau's 2 abstentions and 52 absences are not included in the voting coincidence percentage. Table 1 in our letter shows these differences for the FSM and the RMI. The FSM and the RMI have also gained visibility by supporting the United States on important issues in the General Assembly. Each year the Department of State report highlights about 13 votes (or 18 percent of total General Assembly votes) on issues the United States considers important, such as arms control, Middle East issues, and human rights. On some of these issues, the United States is one of only a few dissenters, making FSM or RMI support highly visible.
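The arithmetic behind these two ways of measuring voting coincidence can be made explicit with a short sketch. It is illustrative only and is not code used in our analysis; the first calculation reflects our reading of the Department of State methodology (identical votes divided by the sum of identical and opposite votes), the function names are ours, and the figures are the Palau votes for 2000 cited above.

```python
def coincidence_excluding_abstentions(identical: int, opposite: int) -> float:
    """Department of State-style measure, as we read its methodology:
    identical votes divided by the votes in which both countries voted yes or no."""
    decided_together = identical + opposite
    return identical / decided_together if decided_together else 0.0


def coincidence_of_all_votes(identical: int, total_votes: int) -> float:
    """Alternative measure: identical votes divided by all recorded votes,
    so abstentions and absences lower the result."""
    return identical / total_votes


# Palau's 2000 General Assembly record as cited above: 11 votes identical to the
# United States, no opposing votes, 2 abstentions, and 52 absences out of 65
# recorded votes.
palau = {"identical": 11, "opposite": 0, "total": 65}

print(coincidence_excluding_abstentions(palau["identical"], palau["opposite"]))  # 1.0, or 100 percent
print(coincidence_of_all_votes(palau["identical"], palau["total"]))              # about 0.17, or 17 percent
```

Applying both measures to the figures reported for the FSM, the RMI, and the average NATO country reproduces the gap, discussed below, between the published coincidence percentages and the share of all recorded votes cast identically.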
However, officials from the Department of State and the FSM concede that this support has more symbolic than actual significance, given the overwhelming margins on these votes. For instance, during the 2000 General Assembly, the FSM joined the United States and Israel as the only dissenters on a resolution concerning the risk of nuclear proliferation in the Middle East (the RMI was one of only 8 countries that abstained), and the RMI joined the United States and Israel as the only dissenters on a resolution critical of the U.S. embargo of Cuba (the FSM was absent). The vote totals for these resolutions were 157-3-8 (for-against-abstain) and 167-3-4, respectively. Table 2 contains information on the number of votes the FSM and the RMI have cast that are either identical to or the opposite of United States votes on important issues, as well as the number of times they have abstained or been absent, since their admission to the United Nations in 1991. While the level of support by the FSM and the RMI for U.S. positions in the U.N. General Assembly has been overstated, these countries have, in recent years, achieved a level of support that resembles that of the average North Atlantic Treaty Organization (NATO) country (see figs. 14 through 16). A closer look at the voting profiles of the FSM, the RMI, and NATO also reveals the importance of acknowledging abstentions and absences in measures of voting coincidence. For example, in 2000, the FSM cast 35 votes (out of 65) identical to those cast by the United States, while the RMI and the average NATO country each cast 34 identical votes. However, due to the exclusion of abstentions and absences from its calculations, the Department of State reported voting coincidence percentages for the FSM, the RMI, and the average NATO country that ranged from about 100 percent for the FSM to 74 percent for the RMI to 63 percent for NATO. The following are GAO's comments on the letter from DOD dated December 7, 2001. 1. This report does not determine whether the right of strategic denial is essential. However, we did find that (1) strategic denial has never been invoked (see p. 13); (2) there is a lack of consensus among U.S. policymakers concerning its value in the post-Cold War era (see p. 17); and (3) the scope and effect of this right have been overstated in public statements by officials from the United States, the FSM, and the RMI (see p. 18). 2. The cited DOD study reached a different conclusion about how the strategic importance of Micronesia has changed over time than the one reported in DOD's letter. According to unclassified portions of this 1999 assessment, the strategic importance of Micronesia to the defense of American national interests has clearly lessened in the 50 years since World War II. The study explained that this is the result of a number of factors, including the advent of intercontinental, nuclear-armed missiles; refuelable, long-range aircraft; and ballistic missile-carrying submarines, as well as increases in the operating range and at-sea endurance of America's surface naval forces. Finally, the study stated that the end of the Cold War appears to have removed the only current blue-water navy and long-range aviation threat to U.S. forces in the Pacific. This is consistent with our conclusion about the FSM and the RMI's current lack of broad strategic importance to the United States (see pp. 16-17). The following are GAO's comments on the letter from the government of the FSM dated November 19, 2001. 1.
We consulted with DOD on numerous occasions following September 11, 2001, to ensure that the conclusions reached in this report still reflected current U.S. interests in the Asia Pacific region. In addition, on pp. 16-17 we discuss the Quadrennial Defense Review, which, when it was released on September 30, set out a new strategic vision for defense planning purposes. Finally, with regard to the future Asia Pacific security environment, DOD did not, in response to our request, provide us with specific information on how the Compact's defense provisions might aid the United States in its response to potential threats such as conflict on the Korean Peninsula, territorial disputes in the South China Sea and Northeast Asia, and separatism in Indonesia. 2. We acknowledge that such statements have been and continue to be made by senior U.S. policymakers. We examined these statements and gave officials at both DOD and the Department of State the opportunity to provide specific examples of the FSM's strategic importance in the current security environment. In response, we received some general information, such as the views of several DOD officials concerning the utility of strategic denial, which are noted on p. 19. However, this information did not change the conclusion we reached on pp. 16-17 about the FSM currently lacking broad strategic importance for the United States. DOD reached a similar conclusion in its 1999 assessment of U.S. interests in the Freely Associated States (FAS) – the FSM, the RMI, and Palau. See p. 52, comment 2 for additional information on DOD's conclusion. 3. We maintain that the effect of strategic denial, the importance of sea lines of communication in the region, and the degree of support received from the FSM for U.S. positions in the United Nations have been overstated. See p. 63, comments 9-12 and 14, and p. 64, comment 16 for more detailed information. 4. Our report concludes that continued U.S. access to facilities on the Kwajalein Atoll in the RMI is both the key and compelling U.S. defense interest in the FSM and the RMI. In this context, "key" means important, fundamental. Kwajalein is the most important U.S. defense interest in the two countries. In this context, "compelling" means convincing. Among the U.S. interests commonly cited in the FSM and the RMI, including Kwajalein, strategic denial, sea lines of communication, and U.N. support, we found the evidence supporting Kwajalein's importance most convincing. 5. See comment 2. We repeatedly sought to obtain the factual and detailed underpinnings that support Dr. Campbell's and Mr. Smith's statements, but DOD did not provide this information. 6. See comment 1. The focus of this report is on current U.S. defense interests in the FSM and the RMI. However, we examined the potential role of the FSM and the RMI with regard to the Quadrennial Defense Review on pp. 16-17 and noted DOD's views that the FSM and the RMI may not be included in DOD plans to increase the U.S. presence in the Western Pacific. 7. The official U.S. Pacific Command position on the strategic importance of the FSM is classified. This information, as well as classified details of the 1999 DOD Assessment of U.S. Interests in the FAS, was presented to the requesters of this report in June 2001. Further, the views of the U.S. Pacific Command officials, which are included in our report, were collected from individuals specifically designated by the Command to respond to our questions. 8. The U.S.
Pacific Command has told us that the United States has access to airfields throughout the FAS. According to the Air Mobility Operations Center within the Pacific Command Air Force Operations Center, the U.S. Air Force (USAF) does not use Yap and Chuuk for refueling as a matter of routine operations. As a rule, when USAF fighters transit the Pacific Command's Area of Responsibility they have tanker escort, which means they do not need to refuel on the ground. The Air Mobility Operations Center does not keep track of fighter refueling and cannot verify whether any USAF fighters have refueled at Yap or Chuuk. However, a Pacific Command representative stated that it is quite possible that the Marines or Navy could have dropped in for fuel, but there is no way to provide an accounting of how many times and for what reasons. He stated that there have been literally hundreds of flights transiting the Pacific Command's Area of Responsibility in support of Operation Enduring Freedom. U.S. pilots are given a great deal of latitude and, given countless possible scenarios, some could have dropped into the FSM to refuel. Finally, he stated that Palau and Guam have also been used recently for refueling purposes, with Guam used more often than small airfields when it is convenient. However, carrier-based aircraft transiting the Pacific Command's Area of Responsibility might find a more direct route by flying through the FSM or the RMI rather than through Guam. 9. This report does not conclude that the value of strategic denial is overrated. However, we did find that (1) strategic denial has never been invoked (see p. 13); (2) there is a lack of consensus among U.S. policymakers concerning its value in the post-Cold War era (see p. 17); and (3) the scope and effect of this right have been overstated in public statements by officials from the United States, the FSM, and the RMI (see p. 18). Finally, with regard to the effective area of strategic denial, we acknowledge both the right to deny land access and the potential effect of this denial on the ability of other countries to conduct long-term naval operations (p. 19). 10. See p. 21, footnote 24 for a discussion of U.S. trade flows. Our analysis illustrates that most U.S. Pacific trade passes well north of the FSM and the RMI. 11. Of note, the examples offered here ignore the fact that strategic denial would prevent the use of the FSM by a third-party military to threaten Guam or sea lines. Also, in response to our questions on how U.S. interests would be affected if a third-party military had a presence in the FSM or the RMI, DOD focused exclusively on potential surveillance activities, not threats to Guam or shipping. 12. We have carefully examined these route charts and determined that they support our finding that the major sea lines of communication between the United States and Guam as well as key trading partners in Asia run north of the FSM and the RMI. In addition, we have amended footnote 23 on p. 21 to note that there are sea lines running between Australia and Japan that transit the FSM. Finally, these route charts appear to show that major sea lines between the United States and Australia lie close to 2,000 kilometers away from the nearest point in the FSM. 13. We provided a classified briefing to our requesters in June 2001. This briefing included a discussion of contingency war plans and operational scenarios, as well as other information gathered from U.S. defense and intelligence agencies. 14. These specific instances of FSM support for U.S.
positions were included in the draft provided to the FSM for comment (see app. V, p. 45, footnote 40, and p. 46, footnote 45). 15. Information on FSM citizens serving in the U.S. armed forces was included in the draft provided to the FSM for comment (see app. II, p. 31). 16. This report recognizes that absences and abstentions are an issue for all the coincidence numbers reported in the Department of State report on p. 23, footnote 25. Our methodology, which accounts for these absences and abstentions, was described as fair and valid by a former Department of State official who prepared the U.N. voting report for the past 12 years. We also compare the FSM's U.N. support favorably with that received from NATO on p. 23 (see also app. III, pp. 47-48). Our analysis of NATO voting on pp. 47, 48, and 50 applies the same methodology that we used to calculate the voting coincidences of the FSM and the RMI. Finally, we discuss FSM support on issues identified as important by the Department of State on pp. 46-47. However, we note that lopsided vote margins in the U.N. General Assembly mean that this support is largely symbolic. The following are GAO's comments on the letter from the government of the RMI dated November 19, 2001. 1. This report mentions two incidents where the defense veto, though not formally invoked, may have had some relevance. On p. 13 in footnote 14, we note that the RMI government once considered a plan to store third-country nuclear waste in the RMI. The threat of the United States possibly invoking the defense veto, along with a change in RMI government leadership, may have been responsible for the RMI government's final decision against providing storage. Furthermore, on p. 13 we noted initial U.S. government objections to a 2001 RMI port call by Taiwanese ships. While the U.S. government never mentioned the Compact's defense veto provision during this incident, the RMI did cite the provision as not being appropriate in this particular instance. Finally, our objective was to determine which of the key provisions had been formally invoked, and both DOD and the Department of State have told us the defense veto remains unused. This report also does not attempt to assign value to the defense veto provision based on its lack of usage. 2. This report does not speculate on the role strategic denial may have played in deterring third-country militaries from seeking to use the RMI, because data are not available to determine whether there are any third countries that would have had an interest in engaging in activities in the RMI in the absence of this right. We maintain that the evidence shows that the scope and effect of strategic denial have been overstated in public statements by officials from the United States, the FSM, and the RMI (see p. 18). 3. This report does not conclude that hostile actors or foreign powers will never again attempt to transit RMI waters or develop a presence in the RMI. Rather, we note on p. 13 in footnote 13 that portions of a 1999 DOD assessment, provided to the RMI, stated that no outside threat to the FSM and the RMI is likely to emerge over the next 10 to 20 years, and there are no compelling security interests on the part of any Asian countries that would manifest themselves in any threat to the FSM and the RMI. This assessment also stated that no Asian country will have the military reach to pose a credible threat or domineering presence in the foreseeable future. 4.
This report does acknowledge the importance of nuclear and missile defense testing in the U.S.-RMI relationship. Appendix III discusses U.S. activities in the Marshall Islands over the past 50 years. This report also notes that Kwajalein Atoll is cited by DOD as an “important and unique national asset that would be difficult and expensive to replace,” thus giving prominence to the RMI location where missile and missile defense testing occurs (see pp. 3 and 14). Reviewing U.S. nuclear testing activities in the Marshall Islands was outside the scope of our review, though the issue is mentioned in appendix III. For a discussion of the amount the United States has spent to address nuclear testing-related issues in the Marshall Islands, see our report Foreign Relations: Better Accountability Needed Over U.S. Assistance to Micronesia and the Marshall Islands (GAO/RCED-00-67, May 31, 2000). 5. This report states that the negotiation of expiring defense and economic Compact provisions provides the United States with the opportunity to reexamine its defense and security interests in the RMI and the FSM. We believe that this is a reasonable and prudent course of action and one that in no way suggests that the United States should unilaterally choose to end or alter commitments that require mutual termination by all parties involved. Of note, in a 1996 testimony, a Department of State official stated that while U.S. defense arrangements with the Freely Associated States (FAS) – the FSM, the RMI, and Palau — have contributed measurably to the security of the United States and the FAS, it is necessary to review the entire range of Compact security provisions in light of new global conditions and stringent fiscal realities as we near the end of the Compact period. 6. The U.S. obligation to defend the RMI and the FSM is mentioned in this report on p. 7 in order to demonstrate the unique relationship between the United States and the two Pacific Island nations, and is never referred to as a burden. This report also states on p. 8 that strategic denial, the defense veto, and access to RMI land are key provisions of the Compact that provide rights to the U.S. government. 7. We agree that the factors discussed by the RMI government over the next three pages—the relationship that developed between the United States and the RMI as a result of U.S. administration of the Marshall Islands under the U.N. trust, and U.S. nuclear testing during the 1940s and 1950s—played a key role in establishing an important framework for Compact negotiations. However, we maintain, after carefully reviewing the Compact’s legislative history, that the Compact’s specific security and defense provisions reflected Cold War concerns that existed at the time of the negotiations. We also note that the RMI government did not disagree when we cited three goals of the Compact in a September 2000 report that are also listed in this report: (1) securing self-government for the RMI and the FSM, (2) assuring certain national security rights for the RMI, the FSM, and the United States; and (3) assisting the RMI and the FSM in their efforts to advance economic self-sufficiency. Our earlier report also noted U.S. concerns about an expanded Soviet Union military presence in the Pacific at the time of Compact negotiations. 
In addition, a 1999 DOD assessment points out that the Compact was negotiated during the Cold War era in a vastly different politico-military and security environment, and a State Department official testified at a 2000 congressional hearing that the Compact was negotiated and enacted during the Cold War, when the Soviet Union had a growing presence in the Pacific. 8. Pages 1 and 6 of this report state that the Compact consists of two separate international agreements, one between the United States and the RMI, the other between the United States and the FSM. 9. We have revised footnote 5 on p. 6 of the report to show direct Compact funding provided to the RMI and the FSM separately for fiscal years 1986 through 1998. Further, on p. 2, footnote 1 and p. 6, footnote 5, we have separated total estimated Compact assistance (direct funding as well as U.S. programs and federal services) for the RMI from the total estimated assistance provided to the FSM. In addition to those named above, Ron Schwenn, Mary Moutsos, Mark Speight, and Rona H. Mendelsohn made key contributions to this report. Compact of Free Association: Negotiations Should Address Aid Effectiveness and Accountability and Migrants' Impact on U.S. Areas (GAO-02-270T, Dec. 6, 2001). Foreign Relations: Migration From Micronesian Nations Has Had Significant Impact on Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands (GAO-02-40, Oct. 5, 2001). Foreign Assistance: Lessons Learned From Donors' Experiences in the Pacific Region (GAO-01-808, Aug. 17, 2001). Foreign Assistance: U.S. Funds to Two Micronesian Nations Had Little Impact on Economic Development (GAO/NSIAD-00-216, Sept. 21, 2000). Foreign Assistance: U.S. Funds to Two Micronesian Nations Had Little Impact on Economic Development and Accountability Over Funds Was Limited (GAO/T-NSIAD/RCED-00-227, June 28, 2000). Foreign Relations: Better Accountability Needed Over U.S. Assistance to Micronesia and the Marshall Islands (GAO/RCED-00-67, May 31, 2000).
The Compact of Free Association continues a defense arrangement that has existed between the United States and two Pacific island nations--Micronesia and the Marshall Islands--since the end of World War II. The United States has exercised only one of the four primary defense provisions contained in the Compact. That provision grants the United States the right to use part of the Kwajalein Atoll in the Marshall Islands for ballistic missile and missile defense testing and space tracking operations. The United States has never been required to fulfill defense responsibilities under the other three defense provisions contained in the Compact. The Defense Department considers Kwajalein Atoll an important asset that would be costly and difficult to replicate. Ongoing negotiations over the Compact are following a course that would preserve the existing defense and security relationship between the United States and each of these nations.
Iodine, palladium, and iridium are the radioactive sources most commonly used in brachytherapy. The brachytherapy procedure is typically performed in the outpatient setting where, under the OPPS, costs associated with a procedure are generally bundled in order to promote hospital efficiency. However, since the OPPS was implemented in 2000, an increasing number of technologies have been paid separately. Except in 2003, the one year in which iodine and palladium used to treat prostate cancer and iridium were bundled into payment for brachytherapy procedures, all radioactive sources used in brachytherapy have been paid separately. Radioactive sources are used in brachytherapy to treat a variety of types of cancers. The most prevalent brachytherapy procedure is low-dose brachytherapy with iodine or palladium, which is typically provided for early-stage prostate cancer. During this procedure, approximately 20 to 200 tiny iodine or palladium sources are implanted in the prostate, deliver radiation over a period of months, and then remain permanently in the body. Generally, the choice between iodine and palladium is determined by the aggressiveness of the tumor, and the number of sources by the size of the prostate. In recent years, utilization of the high-dose brachytherapy procedure, which typically uses iridium, has grown. Iridium can be used to treat a variety of advanced-stage cancers—most commonly gynecological cancers. In high-dose brachytherapy, a single, highly radioactive iridium source is implanted in the tumorous area for a brief period—a matter of minutes or hours—and then withdrawn. Depending on a patient’s clinical needs, the patient may receive one or more such treatments, also known as fractions, with the same source over the course of several days. Because an iridium source emits sufficient radiation for 3 months, the same source can be used to treat multiple patients. The payment methodology for outpatient services has varied in the degree to which it relies on bundled payments to promote hospital efficiency. Prior to OPPS implementation in 2000, payment for outpatient items and services was not bundled; rather, hospitals were paid under a complex array of cost-based reimbursement methods and fee schedules. Generally, neither of these payment methodologies provides a strong incentive to furnish services efficiently. Under a cost-based methodology, each hospital is paid its cost based on information it reports to CMS. Under a fee schedule methodology, all hospitals receive a prospectively determined rate for each item and service they provide, but little incentive exists for them to provide only the necessary items and services. Under the Balanced Budget Act of 1997, CMS was required to implement the OPPS, which was designed to streamline the historically complex system of payment for outpatient care and better promote hospital efficiency. CMS assigns each outpatient procedure to one of approximately 850 ambulatory payment classification (APC) groups. Each APC group includes procedures that share cost and clinical similarities and has one payment rate for all procedures in the group. To set an APC rate, CMS uses historical claims to calculate a median cost across a group’s procedures that includes the costs of the associated bundled services and supplies, which are known as “packaged” costs. A median, rather than a mean, gives less weight to extreme values. 
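The effect of using a median rather than a mean can be seen in a brief sketch. The per-procedure claim costs below are hypothetical and are not drawn from CMS claims; the sketch shows only the statistical point, not the agency's actual rate-setting process.

```python
import statistics

# Hypothetical per-procedure costs (in dollars) reported on claims for one APC
# group; the last claim is an extreme value.
claim_costs = [310, 325, 330, 340, 345, 350, 360, 2400]

mean_cost = statistics.mean(claim_costs)      # pulled upward by the $2,400 claim
median_cost = statistics.median(claim_costs)  # middle of the ordered costs

print(f"mean:   ${mean_cost:,.2f}")    # $595.00
print(f"median: ${median_cost:,.2f}")  # $342.50
```

A single extreme claim raises the mean well above what most of the claims show but leaves the median essentially unchanged.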
That median cost is then converted into a numeric weight, which determines the payment hospitals receive for all procedures assigned to the APC. Because the OPPS provides a single payment to cover the average total cost of a procedure, the incentive for each hospital to efficiently provide the necessary items and services associated with that procedure is greater than when the hospital is paid its cost or a separate fee schedule payment for each item and service used in the procedure. Although bundling is a fundamental principle of the OPPS, the number of technologies that are paid separately from their associated procedures has increased since the implementation of the payment system. Beginning in 2000, the first year of the OPPS, CMS was required to make temporary, separate payments—referred to as “transitional pass-through payments”— for technologies that it determines to meet specified criteria for being new and high cost. These payments supplement the bundled payments for outpatient procedures associated with the technologies, and are designed to compensate hospitals for the additional cost. A new technology is eligible for pass-through payments for 2 to 3 years, after which time the technology is no longer considered new and CMS can include the technology in the payment bundle for the associated procedure. Over time, other high-cost technologies that are not new—mainly certain drugs and radiopharmaceuticals—have also been designated for separate payment either by Congress or by CMS. The payment methodology for radioactive sources associated with brachytherapy has changed several times since the inception of the OPPS. CMS was required to make separate pass-through payments for all radioactive sources associated with brachytherapy beginning in 2000. In 2003, these technologies were no longer eligible for pass-through payments. Because they are considered devices by Medicare, and devices are typically bundled into payment for their associated procedures, CMS bundled iodine and palladium into the payment bundle for the low-dose brachytherapy procedure for prostate cancer, and iridium into the payment bundle for the high-dose brachytherapy procedure, regardless of cancer type. For iodine and palladium sources provided for conditions other than prostate cancer, CMS continued to pay separately. Instead of paying separately for these radioactive sources at each hospital’s cost, CMS set prospective rates for 2003 based on the median cost of each source across hospitals. The MMA mandated that all brachytherapy sources be paid separately after 2003 and specified that from January 1, 2004, through December 31, 2006, separate payments for the sources be at each hospital’s cost. The MMA did not specify a methodology for paying separately after this date. When paying separately for technologies that are not new, CMS’s general practice is to set a prospective rate for all hospitals, based on an average unit cost across hospitals. However, certain technologies may vary in cost substantially and unpredictably or there may not be reasonably accurate data on which to base an average cost across hospitals. In either case, CMS pays for these technologies at each hospital’s cost. Although CMS does not use published criteria to determine payment amounts for separately paid technologies that are not new, we found that its general practice is to pay prospectively based on the average historical cost of each technology across hospitals. 
A prospective rate, even for technologies that are separately paid, is desirable because basing a rate on an average encourages those hospitals that provide the technology to minimize their acquisition costs. To set prospective rates for these separately paid technologies, CMS currently uses two sources of historical data: manufacturer data and OPPS claims. For example, CMS pays for certain high-cost drugs prospectively based on average per-unit acquisition cost. To calculate hospital acquisition cost, CMS relies on per-unit average sales price (ASP) data, which manufacturers are required to submit to CMS and are used in making payments for physician-administered drugs. CMS also uses ASP data to pay a per-unit rate for particular orphan drugs, which are drugs used to treat patients with rare conditions and are typically high in cost. For drugs for which CMS does not have ASP data, CMS pays based on the mean cost calculated from OPPS claims. When a technology's unit cost varies substantially and unpredictably, or when reasonably accurate cost data are not available, CMS pays for the technology at each hospital's cost. If the cost varies substantially and unpredictably, a prospective rate based on a historical average may not adequately pay hospitals even if they operate efficiently. CMS pays each hospital's cost, for example, for corneal transplant tissue and certain vaccines, including those for flu and pneumonia. CMS uses this methodology for corneal transplant tissue because, after analyzing data submitted by hospitals and other stakeholders, the agency determined that the fees eye banks charge hospitals for this tissue can vary substantially and unpredictably over time and across eye banks in a given year. The amount of the fee charged by an eye bank depends heavily on the level of charitable donations it receives, which it uses to subsidize the cost of providing the tissue. The cost to hospitals of providing vaccines also varies substantially and unpredictably due to instability in the nation's vaccine supply. In other cases, CMS makes cost-based payments for technologies when it determines that reasonably accurate historical data on unit cost are not available. For example, the MMA mandated separate payment for certain radiopharmaceuticals. As we discussed in our 2006 report on OPPS payment for certain drugs and radiopharmaceuticals, differences among hospitals in how these technologies are purchased make it difficult for CMS to set a prospective rate based on an average cost across hospitals. As a result, payment for these radiopharmaceuticals is based on each hospital's cost. Based on our analysis, the absence of wide variability in the unit costs of iodine and palladium and the availability of reasonably accurate historical data make these radioactive sources suitable for prospective payment rates. We were unable to establish a unit cost for iridium and, as a result, could not identify a suitable payment methodology. CMS has OPPS claims data from hospitals that provided iridium, and would be able to use these data to calculate an average unit cost across hospitals and to identify which methodology is suitable for determining a separate payment amount. Our analysis suggests that CMS would be able to develop prospective rates for iodine and palladium beginning in 2007.
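A minimal sketch of how an average per-source cost and its relative spread might be computed from hospital-level data follows. The prices and hospital names are hypothetical, and the coefficient of variation shown here (the standard deviation divided by the mean) is used only as a simple measure of spread; it is not the survey-precision calculation described below.

```python
import statistics

# Hypothetical per-source purchase prices (in dollars) reported by hospitals for
# a single type of radioactive source; illustrative only, not survey or claims data.
prices_by_hospital = {
    "hospital_a": 31.50,
    "hospital_b": 28.00,
    "hospital_c": 35.25,
    "hospital_d": 30.00,
    "hospital_e": 33.10,
}

prices = list(prices_by_hospital.values())
mean_price = statistics.mean(prices)
median_price = statistics.median(prices)

# Coefficient of variation: the standard deviation relative to the mean. A small
# value indicates that prices cluster around the average, so a single prospective
# rate based on that average is less likely to misstate any one hospital's cost.
coefficient_of_variation = statistics.stdev(prices) / mean_price

print(f"mean ${mean_price:.2f}, median ${median_price:.2f}, CV {coefficient_of_variation:.2f}")
```

Either the mean or the median of such data could serve as the prospective rate; the comparison of payment rates with reported purchase prices discussed below uses medians.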
Based on interviews we conducted with hospital and manufacturer officials, and the results of our hospital survey, we determined that iodine and palladium have identifiable unit costs and that these costs do not appear to vary substantially and unpredictably across hospital purchases at a given point in time or from year to year. Both hospitals and manufacturers told us that hospitals generally purchase iodine and palladium sources at a per-source price, making the calculation of a unit cost straightforward. According to our survey of 121 hospitals on the prices they paid during 1 year—specifically, from July 2003 through June 2004—the range of iodine and palladium prices is not wide. This is indicated by the relative level of precision—technically, the coefficient of variation—achieved for our estimated mean price. (See table 1.) We also note that iodine and palladium are not subject to the same supply and demand conditions as corneal transplant tissue and flu and pneumonia vaccines—conditions that lead to substantial and unpredictable cost variation from year to year. Although CMS uses ASP data to set a prospective rate for certain high-cost drugs, CMS currently does not have ASP data for radioactive sources used in brachytherapy. However, we found that OPPS claims provide a reasonably accurate source of data for setting a prospective rate for iodine and palladium sources. To determine if claims could be used as a reasonable data source, we compared the payment rates for 2003 and the proposed payment rates for 2004, which were based on median costs calculated from historical claims, with the median of the per-source purchase prices reported directly to us by hospitals. Although the payment rates applied only to sources used in non-prostate brachytherapy, CMS officials told us that they were calculated using prostate and non-prostate brachytherapy claims with iodine and palladium sources. We found that for iodine the prospectively set rate for 2003 and proposed rate for 2004 were $31.33 and $36.35, respectively, and the median of reported purchase prices was $25.37. For palladium, the prospectively set rate for 2003 and proposed rate for 2004 were $43.96 and $44.00, respectively, and the median reported purchase price was $45.46. Since 2004, when CMS was required to pay separately for all iodine and palladium sources, the agency has been accumulating claims data that include separate charges for these sources. As a result, CMS will have data from 2005 for the 2007 payment year. These data could be used to set prospective payment rates, either based on a mean—as is currently done with certain high-cost drugs—or based on a median—which CMS used to set the 2003 and proposed 2004 rates for iodine and palladium sources. Due to the reusable nature of the iridium source, identifying its unit cost is not as straightforward as identifying the unit cost of iodine and palladium. Over the course of its 3-month life span, an iridium source can be temporarily implanted in multiple patients and each of those patients can receive about 1 to 10 such treatments with the same source. Therefore, the appropriate unit cost of an iridium source is the per-treatment cost—the average cost of all treatments administered across all patients over a 3-month period. When hospitals purchase an iridium source, they may not know the exact number of patients they will treat or the number of treatments each of those patients will receive.
Therefore, hospitals must bill Medicare based on projections of their unit cost, and will only be able to identify their actual unit cost retrospectively. We asked hospitals to provide the per-treatment cost of iridium sources they purchased over a previous 12-month period in order to identify a unit cost. However, we did not receive enough data to identify the per-treatment cost. Of 121 total hospitals surveyed, 19 responded with data on iridium, and the majority of these 19 hospitals did not provide data we could use to estimate the cost per treatment. Specifically, 11 either did not provide the number of treatments, reported a questionable source price, or both. Eight hospitals reported a source price and the number of treatments from which a unit cost could be calculated. However, among these 8 hospitals there were inconsistencies in the data provided. Some hospitals reported the total price of their iridium contracts, while other hospitals isolated the price of the radioactive source within their contracts and reported that price. Because we could not establish a unit cost, we could not assess if the unit cost of iridium varies substantially and unpredictably over time. Although we could not identify an average per-treatment cost from our survey data, CMS has OPPS claims data from hospitals that provided iridium. Using these data, CMS would be able to evaluate whether the range of costs comprising the average is substantial and whether the cost varied unpredictably. Such an analysis would help CMS identify a suitable methodology for determining a separate payment amount. Under the OPPS, an increasing number of technologies have been designated for separate payment, either by Congress or by CMS. Pursuant to the MMA, radioactive sources used in brachytherapy, including iodine, palladium, and iridium, are among those technologies. Based on our analysis, CMS can pay separately for iodine and palladium sources using prospective rates because the unit cost of the sources does not vary substantially and unpredictably. In addition, CMS has data available to identify reliable average costs across hospitals to set prospective payment rates beginning in 2007. Paying prospectively in this manner would help encourage hospital efficiency. However, we were not able to identify a suitable methodology for determining a separate payment amount for iridium sources because we did not receive sufficient information from hospitals to estimate an average per-treatment cost across hospitals. In order to identify a suitable methodology for determining a separate payment amount, CMS would be able to use OPPS claims data to evaluate whether the range of costs comprising the average is substantial and whether the average per-treatment cost varies unpredictably over time. In order to promote the efficient delivery of radioactive sources associated with outpatient brachytherapy, we recommend that the Secretary of Health and Human Services direct the Administrator of CMS to take the following two actions:
- Set prospective payment rates for iodine and palladium sources with each rate based on the source's average—that is, the mean or median—unit cost across hospitals estimated from OPPS claims data.
- Use claims data to evaluate the unit cost of iridium so that a suitable, separate payment methodology can be determined.
We received written comments on a draft of this report from CMS (see app. II).
We also received oral comments from individuals at five organizations representing manufacturers of radioactive sources used in brachytherapy and providers of brachytherapy. These included the Coalition for the Advancement of Brachytherapy (CAB), which represents manufacturers of radioactive sources; the Association of Community Cancer Centers (ACCC), which represents hospitals that provide cancer treatment; and three organizations representing physicians and others involved in providing brachytherapy: the American College of Radiation Oncology (ACRO), the American Brachytherapy Society (ABS), and the American Society for Therapeutic Radiology and Oncology (ASTRO). We also received technical comments from CMS and the external reviewers, which we incorporated as appropriate. In reviewing our draft report, CMS stated that it appreciated our analysis and would consider our recommendations on iodine, palladium, and iridium as it develops payment policy for 2007. CMS also noted that we did not make recommendations on payment for other radioactive sources associated with brachytherapy that may be separately payable in 2007. As stated in our draft report, we examined how payment amounts for iodine, palladium, and iridium could be determined. In 2002, these three sources were billed on 98 percent of the claims for radioactive sources associated with brachytherapy. Medicare pays for seven other radioactive sources used in brachytherapy—gold-198, low-dose iridium, yttrium-90, cesium-131, liquid iodine-125, ytterbium-169, and linear palladium-102. We did not examine how payment for those sources could be determined because sufficient data on those sources were not available in the 2002 claims used to construct the sample of hospitals for our survey. Medicare did not pay for cesium-131, ytterbium-169, and linear palladium-102 in 2002, and gold-198, low-dose iridium, liquid iodine-125, and yttrium-90 together appeared on 2 percent of the approximately 22,000 claims for radioactive sources in that year. Although we did not examine how payment amounts could be determined for these seven sources, the analytical framework we used may apply to them as well. Comments from external reviewers representing manufacturers of radioactive sources and providers of brachytherapy centered on three different areas: our recommendation to pay prospectively for iodine and palladium sources; our recommendation that CMS evaluate the unit cost of iridium; and payment for radioactive sources other than iodine, palladium, and iridium. Representatives from CAB disagreed with our recommendation to set prospective rates for iodine and palladium using OPPS claims data. They asserted that price variation due to the range of available iodine and palladium products makes it inappropriate to pay for sources prospectively based on averages. In their opinion, our finding that the unit costs of iodine and palladium sources are generally stable was compromised by limitations in our hospital survey—specifically, our exclusion of outlier data and the absence of source configuration information in many of the surveys we received from hospitals. ACCC stated that OPPS claims data are flawed and that prospective rates may be appropriate but only when a more accurate data source is available. They also noted, as did ACRO representatives, that costs incurred by hospitals for storing and handling radioactive sources were not represented in our survey results.
Representatives from ASTRO, ABS, and ACRO agreed with our recommendation that payment can be based on an average. ACRO representatives cautioned that the data used to set the payment must be representative of different types of hospitals, and ABS representatives suggested that the data should reflect the increased use of stranded sources, which they stated are more costly but considered clinically advantageous by many physicians. Regarding our recommendation that CMS use OPPS claims data to evaluate the unit cost of iridium in order to determine a suitable separate payment methodology, representatives from CAB said the report accurately conveys the difficulties of identifying a per-unit cost for iridium. However, they disagreed with our recommendation because they said it would not be possible for CMS to fully evaluate a unit cost using OPPS claims data, which they asserted to be flawed. They stated that the cost of iridium varies substantially and unpredictably and would not be appropriate for prospective payment based on an average. Representatives from ASTRO, ABS, and ACRO agreed with our recommendation, although they expressed confidence that the unit cost of iridium would be found to vary substantially and unpredictably and would therefore be inappropriate for prospective payment based on an average cost calculated across hospitals. Finally, other comments focused on payment for radioactive sources other than iodine, palladium, and iridium. Representatives of ASTRO and CAB noted that we did not specifically address payment for the other radioactive sources used in brachytherapy—gold-198, low-dose iridium, yttrium-90, cesium-131, liquid iodine-125, ytterbium-169, and linear palladium-102—and ASTRO asked whether we would be making recommendations on payment for these other radioactive sources. Concerning the comments that variation in source price makes it inappropriate to pay prospectively for sources, as noted in the draft report, we based our finding on the low coefficient of variation we calculated from surveys received from our representative sample of hospitals. We do not believe that our exclusion of outlier data masked the true degree of price variation. We used standard statistical trimming principles, which resulted in the exclusion of only 2 percent of reported purchases of iodine and none of the reported purchases of palladium. Although many of the responding hospitals did not indicate on the survey the configuration of the sources purchased, we instructed hospitals to list prices for all sources purchased during the survey period. Therefore, the variation we calculated from hospital responses can be expected to reflect the range of products purchased by hospitals at the time. Representatives from ACRO and ABS stated that they believed the average prices presented in the draft report were consistent with prices for the types of sources—loose, low-activity sources—commonly used during the survey period. If costlier stranded sources have become more frequently used since the survey period of July 1, 2003 through June 30, 2004, as stated by representatives of ACRO and ABS, the use of those sources would be captured in OPPS claims data from subsequent years and reflected in future prospectively set rates. 
Regarding the concerns about basing prospectively set rates for iodine and palladium on OPPS claims data, as noted in the draft report, we based our recommendation on our comparison of average purchase prices for those sources from our hospital survey with CMS payment rates for 2003 and proposed payment rates for 2004, which CMS derived from OPPS claims data. Concerning the comments about the cost of storing and handling radioactive sources, CMS has provided guidance to hospitals on how they can receive reimbursement for those costs. With respect to our recommendation on payment for iridium, as noted in the draft report, we are recommending that CMS use its claims data to evaluate whether the range of costs comprising the average for a given year is substantial across hospitals and whether this average unit cost varies unpredictably over time. Consistent with its general practice for paying separately for technologies that are not new, CMS could pay for iridium at each hospital's cost if OPPS claims did not prove to be a reasonable source of data or if CMS determined that the unit cost varies substantially and unpredictably over time. As we noted in our response to comments received from CMS, we limited our examination of payment for radioactive sources to iodine, palladium, and iridium because sufficient data on the other sources were unavailable in the 2002 claims used to construct the sample of hospitals for our survey, and these three sources were billed on 98 percent of the claims for radioactive sources associated with brachytherapy. We are sending a copy of this report to the Administrator of CMS. We will also provide copies to others on request. The report is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7119 or steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This appendix summarizes the sample design, methods for collecting and processing the data, and methods for estimating mean and median purchase prices for iodine and palladium sources used in brachytherapy. Though we were not able to estimate mean and median purchase prices for iridium, this appendix also includes a discussion of the data we received. We developed a random sample of hospitals to survey for the purchase prices of iodine, palladium, and iridium sources used in brachytherapy. The sample frame consisted of 949 hospitals that (1) had charged Medicare for radioactive sources during 2002, the most recent year for which usable data were available; (2) were still Medicare providers on July 1, 2004; and (3) were a subset of sample hospitals drawn for a survey we conducted of hospital outpatient drug prices. The sample frame contained 98 percent of the 968 hospitals that submitted Medicare claims for the three brachytherapy sources in 2002. We drew a sample of 121 hospitals from the sample frame, on the basis of an expected response rate of 50 percent. Our results can be generalized to the larger population of hospitals providing iodine and palladium in the outpatient setting and meeting the above criteria. To improve the precision of our estimates of mean and median purchase price, we stratified the sample of hospitals. The objective was to obtain a sample of hospitals that mirrored the distribution of hospitals billing Medicare for these sources.
Because we did not have a measure of purchase price of radioactive sources at the time we selected the sample, we used total hospital outpatient drug charges to Medicare as a proxy for purchase price variation. We used a regression model to identify stratification factors (such as teaching hospital status) that would maximize the difference in mean purchase price (as proxied by Medicare drug charges) among strata. We grouped hospitals into major teaching hospital, nonmajor teaching hospital, urban nonteaching hospital, and rural nonteaching hospital strata. We placed small hospitals in a separate stratum to ensure that hospitals with no or minimal charges for drugs during the first 6 months of 2003 were appropriately represented. In our sample design, we defined a major teaching hospital as a hospital for which the ratio of residents to the average daily number of patients was at least 1 to 4 and a nonmajor teaching hospital as one having a ratio of residents to patients of less than 1 to 4. We defined a hospital as urban if it was located in a county considered a metropolitan statistical area (as defined by the Office of Management and Budget) and rural if it was located in a county not considered a metropolitan statistical area. We defined a small hospital as a hospital with total Medicare drug charges of less than $10,000 during the first 6 months of 2003.

To develop our survey of hospital purchase prices for radioactive sources, we interviewed representatives from the Coalition for the Advancement of Brachytherapy (CAB). CAB reports that it represents manufacturers of 90 percent of all brachytherapy sources and 100 percent of high-dose rate brachytherapy sources in the United States. We also interviewed representatives of the American Brachytherapy Society, the American College of Radiation Oncology, the American Society for Therapeutic Radiology and Oncology, and the Association of Community Cancer Centers, as well as representatives from six radioactive source manufacturers and seven hospitals and officials at the Centers for Medicare & Medicaid Services. In developing the survey, we obtained information from these associations and individual hospitals and pilot-tested the survey with five hospitals prior to sending it to the entire sample of 121 hospitals. As a result, we clarified certain protocols and procedures but did not substantially change the survey instrument. The survey instrument was five pages long, with one page for each radioactive source, one page for rebate data, and one page defining the terms in the previous pages.

We collected data by reported purchase—that is, the purchase of a given quantity of a radioactive source at a particular price on a specific date. For iodine and palladium sources, we asked hospitals to provide the name of the manufacturer; the number of sources; the price per source; and certain characteristics of the sources purchased, such as radioactivity level. For iridium, we asked hospitals to provide the name of the manufacturer, the number of treatments delivered, the source price, and the rebate eligibility. We also asked hospitals to report information on any rebates they received for these purchases. We contracted with Westat to administer the survey. Westat began data collection on September 27, 2004.
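To illustrate the stratification rules described above, the following is a minimal sketch of how a hospital could be assigned to one of the five strata. It is not GAO's or Westat's actual procedure: the Hospital record, the field names, the assign_stratum function, and the treatment of a zero resident-to-patient ratio as nonteaching are illustrative assumptions layered on the definitions in the text.

```python
# Illustrative sketch (not GAO's actual code): assign a hospital to a sampling
# stratum using the definitions described in this appendix. A hospital meeting
# the "small" definition is placed in its own stratum before the
# teaching/urban/rural grouping is applied.
from dataclasses import dataclass


@dataclass
class Hospital:
    residents_to_patients: float          # ratio of residents to average daily patients
    in_metropolitan_county: bool          # county is in an OMB-defined metropolitan statistical area
    drug_charges_first_half_2003: float   # total Medicare drug charges, Jan.-June 2003


def assign_stratum(h: Hospital) -> str:
    """Return the sampling stratum implied by the definitions in the text."""
    if h.drug_charges_first_half_2003 < 10_000:
        return "small"
    if h.residents_to_patients >= 0.25:   # at least 1 resident per 4 patients
        return "major teaching"
    if h.residents_to_patients > 0:       # assumption: ratio of zero means nonteaching
        return "nonmajor teaching"
    return "urban nonteaching" if h.in_metropolitan_county else "rural nonteaching"


print(assign_stratum(Hospital(0.3, True, 250_000)))   # major teaching
print(assign_stratum(Hospital(0.0, False, 80_000)))   # rural nonteaching
```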
Key components of the data collection protocol were a first mailing to the chief executive officer or chief financial officer of each hospital explaining the survey, followed by a telephone call to identify the main point of contact; a second mailing to the main contact outlining the data that were needed and describing the options for submitting the data; a follow-up telephone call to facilitate the main contact's understanding of the data collection, provide technical assistance as needed, and obtain some basic information about the hospital; and telephone calls at regular intervals to remind the hospitals to submit their data and to provide assistance as needed. Hospitals could submit data in one of three ways: by uploading electronic files through the study Web site, by sending an e-mail to the study address with data attached, or by sending electronic media or paper submissions through the mail. When our contractor received a brachytherapy survey from a hospital, it forwarded the survey to us for processing and analysis.

Of the 121 hospitals surveyed, 62 hospitals submitted usable data, resulting in an overall response rate of 51 percent. We considered iodine and palladium data usable if we were able to identify the price per source and the number of sources purchased. We considered iridium data usable if we were able to identify the price per source and the number of fractions provided with the source. Of the 62 hospitals, 52 hospitals submitted usable data for iodine and 40 hospitals submitted usable data for palladium, with some providing data for both radioactive sources. Sixty-five percent of hospitals providing data for iodine and 63 percent of hospitals providing data for palladium were teaching hospitals.

Our data were not sufficient to measure overall price differences by radioactivity level and other characteristics across each of the two types of sources. Specifically, hospitals did not indicate activity level for 37 percent of their reported purchases of iodine and 47 percent of their reported purchases of palladium. They did not indicate source configuration for 43 percent of their reported purchases of iodine and 51 percent of their reported purchases of palladium. Although we did not receive enough data from hospitals to reliably identify any price differences by source characteristic, we instructed hospitals to report all their purchases during the survey period. Therefore, any price variation due to source characteristic should be reflected in our data. We applied statistical trimming rules to eliminate outliers in the data. Accordingly, 2 percent of the reported purchases of iodine were trimmed, and none of the reported purchases of palladium were trimmed. The resulting data allowed us to calculate the mean and median price per source for iodine and palladium. Few hospitals reported receiving rebates. This is consistent with information we received from hospitals during interviews—that manufacturer rebates were not commonly provided for radioactive sources. Therefore, we did not factor rebates into our mean and median purchase prices.

We determined that there were insufficient data to estimate the price of iridium. Of the 19 hospitals submitting iridium data, 11 either did not provide the number of treatments, reported a questionable iridium source price, or both. Eight hospitals reported an iridium source price and the number of treatments from which a unit cost could be calculated. However, among these 8 hospitals there were inconsistencies in the data provided.
Some hospitals reported the total price of their iridium contracts, which includes the cost of maintaining the iridium source, while other hospitals isolated the price of the iridium source within the contracts and reported that price.

This section describes the rationale and method for weighting the hospital sample, calculating mean purchase price, calculating median purchase price, and calculating the associated coefficients of variation—or standard error reflecting sample design and weights. To estimate hospitals' mean and median purchase prices for iodine and palladium sources, the sample hospitals' purchase price data were weighted to make them representative of the sample frame of hospitals from which the sample was drawn. The less likely that a hospital was sampled, the larger its weight. For example, if each hospital had a 1 in 10 probability of being sampled, its sample weight was 10. That is, each hospital in the sample represents 10 hospitals in the sample frame. Consequently, if 5 hospitals in a sample bought a particular radioactive source, and the sample weight was 10, we estimate that 50 hospitals in the frame bought that radioactive source. In this report, we refer to sample weights as "hospital weights." Our sample was stratified, so all hospitals in a particular stratum (for example, major teaching hospitals) had the same weight. Since in our sample the probability of a hospital's being selected varied by stratum, hospitals in different strata had different weights. We calculated the hospital weight as

Wjh = Njh / Rjh,

where Wjh denotes the hospital weight for the jth radioactive source in the hth stratum; Njh denotes the number of hospitals in the sample frame that, according to Medicare outpatient claims, billed for the jth radioactive source in the hth stratum; and Rjh denotes the total number of hospitals in the hth stratum that purchased the jth radioactive source, according to their survey submissions. This weight recognizes that not all hospitals responded to our survey, since the weight's denominator is Rjh—the number of hospitals that responded to the survey and indicated that they bought the jth radioactive source.

To summarize hospitals' purchase prices for iodine and palladium sources—reflecting purchases made, in many cases, at different prices and in different quantities—we calculated a mean purchase price for each radioactive source. This mean purchase price for a particular radioactive source is, in effect, a weighted mean. To reflect the differences among hospitals in purchase prices and purchase volumes, we used both the hospital weights and purchase volume as weighting variables in estimating the mean purchase price. All calculations were done at the individual purchase level but reflect the hospital and purchase volume weighting variables. We estimated the mean purchase price of the jth radioactive source as

mean purchase price (source j) = [ Σh (N/n) Σi y*jhi ] / [ Σh (N/n) Σi x*jhi ],

where the outer sums are over strata h and the inner sums are over responding hospitals i; N represents the total number of hospitals in the hth stratum; n represents the size of the sample of hospitals in the hth stratum; y*jhi = Σk yjhik, where yjhik represents the total dollar amount for the jth radioactive source listed on the kth invoice for the ith hospital in the hth stratum; and x*jhi = Σk xjhik, where xjhik represents the total number of units for the jth radioactive source listed on the kth invoice for the ith hospital in the hth stratum. The equation estimates the mean purchase price of a radioactive source as the ratio of the total amount purchased in dollars to the total number of units purchased.
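To make the weighting arithmetic concrete, the following is a minimal sketch in Python of the calculation described above: each reported purchase is expanded by its stratum's hospital weight, and the mean price is the ratio of weighted total dollars to weighted total units. The Purchase class, the function names, and all of the counts and prices are hypothetical assumptions, not the report's actual data or code.

```python
# Minimal sketch with hypothetical data: weighted mean purchase price for one source.
from dataclasses import dataclass


@dataclass
class Purchase:
    stratum: str
    dollars: float  # total dollar amount on the invoice (y*)
    units: int      # number of sources on the invoice (x*)


def hospital_weight(frame_count: int, respondent_count: int) -> float:
    """Wjh = Njh / Rjh: frame hospitals billing for the source divided by responding purchasers."""
    return frame_count / respondent_count


def mean_purchase_price(purchases, frame_counts, respondent_counts) -> float:
    """Ratio of weighted total dollars to weighted total units across all strata."""
    total_dollars = 0.0
    total_units = 0.0
    for p in purchases:
        w = hospital_weight(frame_counts[p.stratum], respondent_counts[p.stratum])
        total_dollars += w * p.dollars
        total_units += w * p.units
    return total_dollars / total_units


# Hypothetical example: two strata, three responding hospitals' purchases.
purchases = [
    Purchase("major teaching", dollars=4_500.0, units=100),
    Purchase("major teaching", dollars=2_400.0, units=50),
    Purchase("rural nonteaching", dollars=1_100.0, units=20),
]
frame_counts = {"major teaching": 120, "rural nonteaching": 60}    # Njh
respondent_counts = {"major teaching": 2, "rural nonteaching": 1}  # Rjh
print(round(mean_purchase_price(purchases, frame_counts, respondent_counts), 2))
```

The sketch uses the hospital weight described above as the expansion factor; in practice the same ratio structure holds for whichever design or nonresponse-adjusted weight is applied.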
To assess the precision of our estimates of the mean purchase price, we calculated coefficients of variation for the estimated mean purchase price. We also used the coefficients of variation as an indicator of price variability across hospitals. We estimated the mean purchase prices, median purchase prices, and the coefficients of variation for the means using specialized software for survey data analysis—SUDAAN®.

In addition to the contact above, Maria Martino, Assistant Director; Shamonda Braithwaite; Melanie Anne Egorin; Hannah Fein; Nora Hoban; Dae Park; Dan Ries; Anna Theisen-Olson; Yorick F. Uzes; and Craig Winslow made contributions to this report.
Generally, in paying for hospital outpatient procedures, Medicare makes prospectively set payments that are intended to cover the costs of all items and services delivered with the procedure. Medicare pays separately for some technologies that are too new to be represented in the claims data used to set rates. It also pays separately for certain technologies that are not new, such as radioactive sources used in brachytherapy, a cancer treatment. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 required separate payment for the radioactive sources. It also directed GAO to make recommendations regarding future payment. GAO examined (1) how Medicare determines payment amounts for technologies that are not new but are separately paid and (2) how payment amounts for iodine, palladium, and iridium sources used in brachytherapy could be determined. In paying separately for technologies that are not new, the Centers for Medicare & Medicaid Services (CMS) generally sets prospective rates based on the average unit cost of the technologies across hospitals. For example, CMS currently pays separate prospective rates for certain high-cost drugs based on the mean per-unit acquisition cost, as derived by CMS from data provided by drug manufacturers. A prospective rate is desirable because basing a rate on an average encourages those hospitals that provide the technology to minimize their acquisition costs. However, when CMS determines that the unit cost of a technology designated for separate payment varies substantially and unpredictably over time, or that reasonably accurate data are not available, it pays each hospital its cost for the technology. For example, CMS pays each hospital its cost for corneal transplant tissue, because it determined that the fees eye banks charge hospitals vary substantially and unpredictably. GAO's analysis suggests that CMS could set prospective payment rates for iodine and palladium because their unit costs are generally stable and CMS can base the payments on reasonably accurate data. According to interviews GAO conducted with hospitals and manufacturers, iodine and palladium have an identifiable unit cost that does not vary unpredictably over time. In addition, the results of GAO's survey of hospital purchase prices suggest that the unit cost of iodine and palladium does not vary substantially. Furthermore, GAO found that Medicare claims would be a reasonably accurate source of data for setting prospective rates for these sources. GAO was not able to determine a suitable methodology for paying separately for iridium. In contrast with iodine and palladium, which are permanently implanted in patients, iridium is reused across multiple patients, making its unit cost more difficult to determine. Although GAO surveyed hospitals on the unit cost of iridium, it did not receive sufficient data to identify and evaluate an average unit cost across hospitals. However, CMS has outpatient claims data from all hospitals that have used iridium. In order to identify a suitable methodology for determining a separate payment amount, CMS would be able to use these data to establish an average cost and evaluate whether the cost varies substantially and unpredictably.
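The decision logic summarized above can be illustrated with a short sketch: pay a prospective rate based on the average unit cost when that cost is stable across hospitals and reasonably accurate data exist; otherwise pay each hospital its cost. This is a hypothetical illustration only, not CMS policy or GAO's analysis code; the coefficient-of-variation threshold and function name are assumptions, and the sketch covers only cross-hospital variability, not variation over time.

```python
# Hypothetical decision-rule sketch; the CV threshold is illustrative only.
import statistics


def payment_approach(unit_costs, data_reasonably_accurate, cv_threshold=0.25):
    """Suggest a payment method following the logic described in the summary above."""
    if not data_reasonably_accurate or len(unit_costs) < 2:
        return "pay each hospital its cost"
    mean_cost = statistics.mean(unit_costs)
    cv = statistics.stdev(unit_costs) / mean_cost  # coefficient of variation across hospitals
    if cv > cv_threshold:
        return "pay each hospital its cost"        # cost varies substantially
    return f"prospective rate based on average unit cost (${mean_cost:.2f})"


# Example: stable unit costs across hospitals favor a prospective rate.
print(payment_approach([40.0, 42.5, 39.0, 41.0], data_reasonably_accurate=True))
```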
Since the inception of SBInet, we have reported on a range of issues regarding program design and implementation. For example, in October 2007, we testified that DHS had made some progress in implementing Project 28—the first segment of SBInet technology across the southwest border—but had fallen behind its planned schedule. In our February 2008 testimony, we noted that although DHS accepted Project 28 and was gathering lessons learned from the project, CBP officials responsible for the program said it did not fully meet their expectations and would not be replicated. We also reported issues with the system that remained unresolved. For example, the Border Patrol, a CBP component, reported that as of February 2008, problems remained with the resolution of cameras at distances over 5 kilometers, while expectations had been that the cameras would work at twice that distance. In our September 2008 testimony, we reported that CBP had initially planned to deploy SBInet technology along the southwest border by the end of 2008, but as of February 2008, this date had slipped to 2011 and that SBInet would have fewer capabilities than originally planned. In September 2009, we reported that SBInet technology capabilities had not yet been deployed and delays required the Border Patrol to rely on existing technology for securing the border, rather than using the newer SBInet technology planned to overcome the existing technology’s limitations. As of April 2010, SBInet’s promised technology capabilities are still not operational and delays continue to require Border Patrol to rely on existing technology for securing the border, rather than using the newer SBInet technology planned to overcome the existing technology’s limitations. When CBP initiated SBInet in 2006, it planned to complete SBInet deployment along the entire southwest border in fiscal year 2009, but by February 2009, the completion date had slipped to 2016. The first deployments of SBInet technology projects are to take place along 53 miles in the Tucson border sector, designated as Tus-1 and Ajo-1. As of April 7, 2010, the schedule for Tus-1 and Ajo-1 had slipped from the end of calendar year 2008 as planned in February 2008, and government acceptance of Tus-1 was expected in September 2010 and Ajo-1 in the fourth quarter of calendar year 2010. Limitations in the system’s ability to function as intended as well as concerns about the impact of placing towers and access roads in environmentally sensitive locations have contributed to these delays. Examples of these system limitations include continued instability of the cameras and mechanical problems with the radar at the tower, and issues with the sensitivity of the radar. As of January 2010, program officials stated that the program was working to address system limitations, such as modifications to the radar. As a result of the delays, Border Patrol agents continue to use existing technology that has limitations, such as performance shortfalls and maintenance issues. For example, on the southwest border, Border Patrol relies on existing equipment such as cameras mounted on towers that have intermittent problems, including signal loss. Border Patrol has procured and delivered some new technology to fill gaps or augment existing equipment. We have also been mandated to review CBP’s SBI expenditure plans, beginning with fiscal year 2007. 
In doing so, in February 2007, we reported that CBP's initial expenditure plan lacked specificity on such things as planned activities and milestones, anticipated costs, staffing levels, and expected mission outcomes. We noted that this, coupled with the large cost and ambitious time frames, added risk to the program. At that time, we made several recommendations to address these deficiencies. These recommendations included one regarding the need for future expenditure plans to include explicit and measurable commitments relative to the capabilities, schedule, costs, and benefits associated with individual SBI program activities. Although DHS agreed with this recommendation, to date, it has not been fully implemented. In our June 2008 report on the fiscal year 2008 expenditure plan, we recommended that CBP ensure that future expenditure plans include an explicit description of how activities will further the objectives of SBI, as defined in the DHS Secure Border Strategic Plan, and how the plan allocates funding to the highest priority border security needs. DHS concurred with this recommendation and implemented it as part of the fiscal year 2009 expenditure plan. In reviewing the fiscal year 2008 and 2009 expenditure plans, we have reported that, although the plans improved from year to year, providing more detail and higher-quality information than the year before, they did not fully satisfy all the conditions set out by law.

In addition to monitoring program implementation and reviewing expenditure plans, we have also examined acquisition weaknesses that increased the risk that the system would not perform as intended, take longer to deliver than necessary, and cost more than it should. In particular, we reported in September 2008 that important aspects of SBInet were ambiguous and in a continued state of flux, making it unclear and uncertain what technological capabilities were to be delivered and when. Further, we reported at that time that SBInet requirements had not been effectively developed and managed and that testing was not being effectively managed. Accordingly, we concluded that the program was a risky endeavor, and we made a number of recommendations for strengthening the program's chances of success. DHS largely agreed with these recommendations, and we have ongoing work that will report on the status of DHS's efforts to implement them.

We reported in January 2010 that key aspects of ongoing qualification testing had not been properly planned and executed. For example, while DHS's testing approach appropriately consisted of a series of test events, many of the test plans and procedures were not defined in accordance with relevant guidance, and over 70 percent of the approved test procedures had to be rewritten during execution because the procedures were not adequate. Among these changes were ones that appeared to have been made to pass the test rather than to qualify the system. We also reported at this time that the number of new system defects identified over a 17-month period while testing was underway was generally increasing faster than the number of defects being fixed—a trend that is not indicative of a maturing system that is ready for acceptance and deployment. Compounding this trend was the fact that the full magnitude of this issue was unclear because these defects were not all being assigned priorities based on severity. Accordingly, we made additional recommendations; DHS largely agreed with them and has efforts underway to address them.
Most recently, we concluded a review of SBInet that addresses the extent to which DHS has defined the scope of its proposed SBInet solution, demonstrated the cost effectiveness of this solution, developed a reliable schedule for implementing the solution, employed acquisition management disciplines, and addressed the recommendations in our September 2008 report. Although we plan to report on the results of this review later this month, we briefed DHS on our findings in December 2009 and provided DHS with a draft of this report, including conclusions and recommendations, in March 2010. Among other things, these recommendations provide a framework for how the program should proceed.

In light of program shortcomings, continued delays, questions surrounding SBInet's viability, and the program's high cost vis-à-vis other alternatives, in January 2010, the Secretary of Homeland Security ordered a department assessment of the SBI program. In addition, on March 16, 2010, the Secretary froze fiscal year 2010 funding for any work on SBInet beyond Tus-1 and Ajo-1 until the assessment is completed, and reallocated $50 million of the American Recovery and Reinvestment Act funds allocated to SBInet to procure alternative tested and commercially available technologies, such as mobile radios, to be used along the border. In March 2010, the SBI Executive Director stated that the department's assessment, ordered in January 2010, would consist of a comprehensive and science-based assessment of alternatives intended to determine if there are alternatives to SBInet that may more efficiently, effectively, and economically meet U.S. border security needs. According to the SBI Executive Director, if the assessment suggests that the SBInet capabilities are worth the cost, DHS will extend its deployment to sites beyond Tus-1 and Ajo-1. However, if the assessment suggests that alternative technology options represent the best balance of capability and cost-effectiveness, DHS intends to immediately begin redirecting resources currently allocated for border security efforts to these stronger options.

As part of our continuing support to the Congress in overseeing the SBI program, we are currently reviewing DHS's expenditure plan for the fiscal year 2010 Border Security Fencing, Infrastructure, and Technology appropriation, which provides funding for the SBI program. Additionally, we are completing a review of the internal control procedures in place to ensure that payments to SBInet's prime contractor were proper and in compliance with selected key contract terms and conditions. Finally, we are reviewing controls for managing and overseeing the SBInet prime contractor, including efforts to monitor the prime contractor's progress in meeting cost and schedule expectations. We expect to report on the results of these reviews later this year.

In addition to monitoring SBInet implementation, we also reported on the tactical infrastructure component of the SBI program. For example, in October 2007, we reported that tactical infrastructure deployment along the southwest border was on schedule, but meeting CBP's fencing goal by December 31, 2008, might be challenging and more costly than planned. In September 2008, we also reported that the deployment of fencing was ongoing, but costs were increasing, the life-cycle cost for fencing was not yet known, and finishing the planned number of miles by December 31, 2008, would be challenging.
We also reported on continuing cost increases and delays with respect to deploying tactical infrastructure. In September 2009, we reported, among other things, that delays continued in completing planned tactical infrastructure primarily because of challenges in acquiring the necessary property rights from landowners (GAO-09-896). Deployment, operations, and future maintenance costs for the fence, roads, and lighting, among other things, are estimated at about $6.5 billion. CBP reported that tactical infrastructure, coupled with additional trained agents, had increased the miles of the southwest border under control, but despite a $2.6 billion investment, it cannot account separately for the impact of tactical infrastructure. CBP measures miles of tactical infrastructure constructed and has completed analyses intended to show where fencing is more appropriate than other alternatives, such as more personnel, but these analyses were based primarily on the judgment of senior Border Patrol agents. Leading practices suggest that a program evaluation would complement those efforts. Until CBP determines the contribution of tactical infrastructure to border security, it is not positioned to address the impact of this investment. In our September 2009 report, we recommended that to improve the quality of information available to allocate resources and determine tactical infrastructure's contribution to effective control of the border, the Commissioner of CBP conduct a cost-effective evaluation of the impact of tactical infrastructure on effective control of the border. DHS concurred with our recommendation and described actions recently completed, underway, and planned that it said will address our recommendation. In April 2010, SBI officials told us that the Homeland Security Institute was conducting an analysis of the impact of tactical infrastructure on border security. We believe that this effort would be consistent with our recommendation, further complement performance management initiatives, and be useful to inform resource decision making.

This concludes my statement for the record. For further information on this statement, please contact Richard M. Stana at (202) 512-8777 or stanar@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, Frances Cook, Katherine Davis, Jeanette Espinola, Dan Gordon, Kaelin Kuhn, Jeremy Manion, Taylor Matheson, Jamelyn Payan, Susan Quinlan, Jonathan Smith, Sushmita Srikanth, and Juan Tapia-Videla made key contributions to this statement.

Secure Border Initiative: Testing and Problem Resolution Challenges Put Delivery of Technology Program at Risk. GAO-10-511T. Washington, D.C.: Mar. 18, 2010.
Secure Border Initiative: DHS Needs to Address Testing and Performance Limitations that Place Key Technology Program at Risk. GAO-10-158. Washington, D.C.: Jan. 29, 2010.
Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-1013T. Washington, D.C.: Sept. 17, 2009.
Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-896. Washington, D.C.: Sept. 9, 2009.
U.S. Customs and Border Protection's Secure Border Initiative Fiscal Year 2009 Expenditure Plan. GAO-09-274R. Washington, D.C.: Apr. 30, 2009.
Secure Border Initiative Fence Construction Costs. GAO-09-244R. Washington, D.C.: Jan. 29, 2009.
Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investment. GAO-08-1086. Washington, D.C.: Sept. 22, 2008.
Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investment. GAO-08-1148T. Washington, D.C.: Sept. 10, 2008.
Secure Border Initiative: Observations on Deployment Challenges. GAO-08-1141T. Washington, D.C.: Sept. 10, 2008.
Secure Border Initiative: Fiscal Year 2008 Expenditure Plan Shows Improvement, but Deficiencies Limit Congressional Oversight and DHS Accountability. GAO-08-739R. Washington, D.C.: June 26, 2008.
Department of Homeland Security: Better Planning and Oversight Needed to Improve Complex Service Acquisition Outcomes. GAO-08-765T. Washington, D.C.: May 8, 2008.
Department of Homeland Security: Better Planning and Assessment Needed to Improve Outcomes for Complex Service Acquisitions. GAO-08-263. Washington, D.C.: Apr. 22, 2008.
Secure Border Initiative: Observations on the Importance of Applying Lessons Learned to Future Projects. GAO-08-508T. Washington, D.C.: Feb. 27, 2008.
Secure Border Initiative: Observations on Selected Aspects of SBInet Program Implementation. GAO-08-131T. Washington, D.C.: Oct. 24, 2007.
Secure Border Initiative: SBInet Planning and Management Improvements Needed to Control Risks. GAO-07-504T. Washington, D.C.: Feb. 27, 2007.
Secure Border Initiative: SBInet Expenditure Plan Needs to Better Support Oversight and Accountability. GAO-07-309. Washington, D.C.: Feb. 15, 2007.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Securing the nation's borders from illegal entry of aliens and contraband, including terrorists and weapons of mass destruction, continues to be a major challenge. In November 2005, the Department of Homeland Security (DHS) announced the launch of the Secure Border Initiative (SBI)--a multiyear, multibillion dollar program aimed at securing U.S. borders and reducing illegal immigration. Within DHS, the U.S. Customs and Border Protection (CBP) provides agents and officers to support SBI. As requested, this statement summarizes (1) the findings and recommendations of GAO's reports on SBI's technology, known as SBInet (including such things as cameras and radars), and DHS's recent actions on SBInet; and (2) the findings and recommendations of GAO's reports on tactical infrastructure, such as fencing, and the extent to which CBP has deployed tactical infrastructure and assessed its operational impact. This statement is based on products issued from 2007 through 2010, with selected updates as of April 2010. To conduct these updates, GAO reviewed program schedules, status reports, and funding and interviewed DHS officials.

Since the inception of SBInet, GAO has reported on a range of issues regarding design and implementation, including program challenges, management weaknesses, and cost, schedule, and performance risks; DHS has largely concurred with GAO's recommendations and has started to take some action to address them. For example, in October 2007, GAO testified that the project involving the first segment of SBInet technology across the southwest border had fallen behind its planned schedule. In a September 2008 testimony, GAO reported that CBP's plans to initially deploy SBInet technology along the southwest border had slipped from the end of 2008 to 2011 and that SBInet would have fewer capabilities than originally planned. As of April 2010, SBInet's promised capabilities were still not operational. Limitations in the system's ability to function have contributed to delays. GAO has also reviewed CBP expenditure plans and found a lack of specificity on such things as planned activities and milestones. GAO made recommendations, including the need for future expenditure plans to include explicit and measurable commitments relative to the capabilities, schedule, costs, and benefits associated with individual SBI program activities. While DHS has concurred with GAO's recommendations, and its expenditure plans have improved from year to year in detail and quality, the plans, including the one for fiscal year 2009, did not fully satisfy the conditions set out by law. Further, in September 2008, GAO made recommendations to address SBInet technological capabilities that were ambiguous or in a state of flux. DHS generally concurred with them. In January 2010, GAO reported that the number of new system defects identified over a 17-month period while testing was underway was generally increasing faster than the number of defects being fixed, not indicative of a maturing system. Given the program's shortcomings, in January 2010, the Secretary of Homeland Security ordered an assessment of the program, and in March 2010, the Secretary froze a portion of the program's fiscal year 2010 funding. GAO plans to report in May 2010 on the SBInet solution and the status of its September 2008 recommendations.

CBP has completed deploying most of its planned tactical infrastructure and has begun efforts to measure its impact on border security, in response to a GAO recommendation.
As of April 2010, CBP had completed 646 of the 652 miles of fencing it committed to deploy along the southwest border. CBP plans to have the remaining 6 miles of this baseline completed by December 2010. CBP reported that tactical infrastructure, coupled with additional trained agents, had increased the miles of the southwest border under control, but despite a $2.6 billion investment, it cannot account separately for the impact of tactical infrastructure. In a September 2009 report, GAO recommended that to improve the quality of information available to allocate resources and determine tactical infrastructure's contribution to effective control of the border, the Commissioner of CBP conduct a cost-effective evaluation of the impact of tactical infrastructure. DHS concurred with this recommendation and, in April 2010, told GAO that the Homeland Security Institute had undertaken this analysis.
Grants constitute one form of federal assistance consisting of payments in cash or in kind to a state or local government or a nongovernmental recipient for a specified purpose. Taken as a whole, federal grant programs are extremely diverse. They can vary greatly in numerous ways, including size, the nature of their recipients, and the type of programs they fund. Grant programs can also vary in two important dimensions—the amount of discretion they give to the recipient in how the funds will be used, and the way they are allocated or awarded. Typically, grants are grouped into three types based on the amount of discretion given to the recipient for the use of funds: categorical grants, block grants, and general purpose grants. Categorical grants are the most restricted, permitting funds to be used only for specific activities related to their purpose, such as funding for the narrowly defined purpose of nutrition for the elderly. Block grants are less restrictive, funding broader categories of activities, such as community development or public health, and generally give greater discretion to recipients in identifying problems and designing programs to address those problems. General purpose grants, such as revenue sharing, offer the greatest amount of discretion to the recipient, as they require only that the funds be spent for government purposes. However, the distinction between categorical grants and block grants is not rigid, and sometimes overlap occurs. Each of these grants strikes a different balance between the interests of the federal grant-making agency that funds be used efficiently and effectively to meet national objectives, and the interests of the recipient to use the funds to meet local priorities and to minimize the administrative burdens associated with accepting the grant.

Grant programs also vary in the methods they use to allocate or award funds, that is, by formula or through discretionary project grants. Formula grants allocate funds based on distribution formulas prescribed by legislation or administrative regulation. Project grants are generally awarded on a competitive basis to eligible applicants for specific projects. OMB has emphasized the use of competitive grants as a means of increasing innovation in grant proposals. While these labels help classify grants based on prominent characteristics, they should not be understood to be mutually exclusive definitions, as more than one can apply to a given grant program. For example, the federal government distributes Community Development Block Grant funds to states using a formula, but states redistribute the funds to localities, sometimes as project grants.

While there is substantial variation among grant types, competitive grants generally follow a life cycle that includes announcement, application, award, post-award, and closeout, as seen in figure 1. Once a grant program is established through legislation, which may specify particular objectives, eligibility, and other requirements, a grant-making agency may impose additional requirements on it. For competitive grant programs, the public is notified of the grant opportunity through an announcement, and potential recipients must submit applications for agency review. In the award stage, the agency identifies successful applicants or legislatively defined grant recipients and awards funding. The implementation stage includes payment processing, agency monitoring, and recipient reporting, which may include financial and performance information.
The closeout phase includes preparation of final reports, financial reconciliation, and any required accounting for property. Audits may occur multiple times during the life cycle of the grant and after closeout.

Federal agencies do not have inherent authority to enter into grant agreements without affirmative legislative authorization. In authorizing grant programs, federal laws identify the types of activities that can be funded and the purposes to be accomplished through the funding. Legislation establishing a grant program frequently will define the program objectives and leave the administering agency to fill in the details by regulation. Grant programs are typically subject to a wide range of accountability requirements under their authorizing legislation or appropriation and implementing regulations so that funding is spent for its intended purpose. For example, the Department of Housing and Urban Development (HUD) administers grants to aid states and localities in providing affordable housing for low-income families. Congress mandated that HUD administer these grant programs in a manner that furthers fair housing. HUD regulations direct grant recipients to prepare planning documents and maintain certain records demonstrating compliance with the legislation's fair housing requirements as a condition of receiving funds. Congress may also impose requirements on specific funding for grant programs. The American Recovery and Reinvestment Act of 2009 (Recovery Act) imposed increased reporting and oversight requirements on both grant-making agencies and recipients for many different grant programs receiving additional funding under the Recovery Act. In addition, grant programs are also subject to cross-cutting requirements applicable to most assistance programs (see table 1 for more information). For example, recipients of grant funds are prohibited from using those funds to lobby members and employees of Congress and executive agency employees. OMB is responsible for developing government-wide policies to ensure that grants are managed properly and that grant funds are spent in accordance with applicable laws and regulations. For many decades, OMB has published guidance in various circulars to aid grant-making agencies with such subjects as audit and record keeping and the allowability of costs. For a detailed discussion of grants management legislation and OMB's role in developing grants policy, see appendix III.

Grants are an important tool used by the federal government to provide program funding to state and local governments. OMB has previously estimated that grants to state and local governments represent roughly 80 percent of all federal grant funding, with the remaining approximately 20 percent going to recipients such as nonprofit organizations, research institutions, or individuals. Federal outlays for grants to state and local governments totaled more than $606 billion in fiscal year 2011, equivalent to 4.1 percent of the gross domestic product (GDP) in that year. For comparison, federal outlays for national defense were 4.7 percent of GDP during the same period. With outlays of $275 billion in fiscal year 2011, Medicaid is by far the federal government's largest single grant program and by itself accounted for 45 percent of federal grant outlays to state and local governments in that year.
The Department of Health and Human Services (HHS), which administers the Medicaid program, is the largest grant-making agency, with grant outlays of almost $348 billion in fiscal year 2011, or about 57 percent of the total federal grant outlays to state and local governments. However, even when Medicaid is excluded, HHS remains the largest federal grant-making agency. While many federal agencies award grants, the large majority of grant outlays to state and local governments are made by just a few agencies, with the top five accounting for more than 90 percent of those grant outlays in fiscal year 2011. Following HHS, the next four agencies with the largest amount of grant outlays to state and local governments in fiscal year 2011 were the Departments of Education (Education), Transportation, HUD, and Agriculture. Figure 2 shows the amount of, and percentage of, grant outlays to state and local governments for the top 5 grant-making agencies. Federal outlays for grants to state and local governments increased from $91 billion in fiscal year 1980 (about $221 billion in 2011 constant dollars) to more than $606 billion in fiscal year 2011. Figure 3 shows the total federal outlays for grants to state and local governments over the period from fiscal years 1980 to 2011, in constant dollars, and the increasing amount of this total that went to Medicaid over time. While the past three decades have witnessed a dramatic growth in federal grants to state and local governments in absolute dollar terms, the same is not true when one considers these grant outlays as a proportion of total federal spending. As shown in figure 4, grant outlays to state and local governments as a percentage of total federal outlays in fiscal year 2011 were at a roughly comparable level to what they were more than 30 years earlier (16.8 percent versus 15.5 percent). However, during this period the proportion of federal grant outlays to state and local governments dedicated to Medicaid more than tripled, rising from 2.4 percent of all federal outlays in 1980 to 7.6 percent in 2011. The increase in outlays for Medicaid and other health-related grant programs was offset by an approximately equivalent decrease in the share of outlays for other grants to state and local governments. The dip in federal grant outlays to state and local governments as a percentage of total outlays during the 1980s, seen in figure 4, was likely due to a variety of factors, including efforts undertaken at the time to merge categorical grant programs in several functional areas into block grants and also reduce funding levels. For example, as part of the Omnibus Budget Reconciliation Act of 1981, nine block grants were created from about 50 of the 534 categorical programs in effect at that time. Overall, funding for the categorical grants bundled into these block grants was reduced 12 percent, about $1 billion, from their combined funding level the previous year. State officials believed that funding reductions would not result in the loss of services for recipients because the reductions would be offset by administrative efficiencies, although in our subsequent work we found that the administrative cost savings were difficult to quantify. Figure 4 also shows the upturn in federal grant outlays in 2009 and 2010 that were the result of the Recovery Act. Grant outlays can also be analyzed historically using OMB’s grant programs’ functional categories. 
In fiscal year 2011, the five largest grant program categories by government function were health; income security; education, training, employment, and social services; transportation; and community and regional development. Figure 5 shows federal grant outlays to state and local governments broken out by these five governmental functions, from fiscal year 1980 to fiscal year 2011. Health function grant outlays were 17 percent of total grant outlays to state and local governments in fiscal year 1980—lower than either income security or education. By fiscal year 2011, outlays for health-related grant functions increased to almost 50 percent of these total grant outlays. While outlays for health-related grants experienced a relatively steady increase in the last three decades—more than doubling in 30 years—outlays for other grant functions generally decreased relative to the total of all federal grants to state and local governments during the same period. OMB and others have noted that the relative growth and contraction of grant outlays for different purposes reflects a broader shift in the focus of federal outlays for grant programs. According to OMB data, since the 1980s, funding has shifted from providing grants to state and local governments for physical capital and societal activities (e.g., highways, mass transit, sewage treatment plants, public education, government administration, and community development), toward grants for payments for the benefit of individuals or families. These grants benefitting individuals are primarily entitlement programs such as Medicaid, Temporary Assistance for Needy Families, child nutrition programs, and housing assistance. In fiscal year 1980, the percentage of grant outlays for the benefit of individuals and families was just under 36 percent. By fiscal year 2011, federal outlays for grants benefitting individuals and families, a major component of which is Medicaid, had grown to almost two-thirds (64 percent) of all grant outlays to state and local governments.

There are various sources for data on the amount the federal government spends on grants, including OMB budget data, USAspending.gov, and Census Bureau surveys of state and local governments. See appendix II for more detail about these data sources and their differences. The various differences in each data source can create challenges for those examining federal grants management issues and for congressional oversight of grants administration.

Our prior work and the work of others have shown that the number of federal grant programs to state and local governments has generally increased over the last three decades. However, determining a definitive number of federal grant programs presents certain difficulties. Efforts to accurately track the number of federal grant programs over time have been complicated by the fact that different entities have counted grant programs differently for decades. Both OMB and the former U.S. Advisory Commission on Intergovernmental Relations (ACIR) periodically published counts of the total number of federally funded grant programs during the 1980s and 1990s, but because they used different methodologies to determine which grant programs to include, they came up with different results. For example, in 1995 OMB identified 608 federally funded grant programs compared to ACIR's count of 633. OMB no longer issues formal counts of federally funded grant programs, and there is no current consensus on the methodology used to count federal grant programs.
However, one can approximate the number of federal grant programs using the information included in the CFDA database. As of the end of May 2012, the CFDA listed a total of 2,240 federal assistance programs, including 239 items under a search for formula grants and 1,530 items under a search for project grants. CFDA data are available on the Web at http://www.CFDA.gov.

Over time, growth of both the numbers of grant programs for state and local governments and their level of funding has created greater diversity and complexity in federal grants management. Substantial variation in the way federal agencies administer these programs has further increased their complexity. As a result, the management of grants to state and local governments presents both grant-making agencies and grant recipients with a variety of challenges. We and others have previously reported on many of these issues, which can be grouped into the following broad themes: (1) challenges related to effectively measuring grant performance; (2) uncoordinated program creation; (3) need for better collaboration; (4) internal control weaknesses in grants management and oversight; and (5) lack of agency or recipient capacity.

In our past work, we have reported that effective performance accountability provisions are of fundamental importance in assuring the proper and effective use of federal funds and determining if grant program goals are met. Two issues that we have previously identified as important for effectively reporting on grant performance are having appropriate, high-quality performance measures and accurate performance data. We have reported that successful performance measures should, among other things, be clearly defined; demonstrate linkage with strategic or programmatic goals; have measurable targets; and be objective, reliable, and balanced. See GAO, Agencies' Annual Performance Plans under the Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking, GAO/GGD/AIMD-10.1.18 (Washington, D.C.: February 1998); and The Results Act: An Evaluator's Guide to Assessing Agency Annual Performance Plans, GAO/GGD-10.1.20 (Washington, D.C.: April 1998). However, we have found that while agencies may implement measures with some of these attributes, other key attributes may not be incorporated. For example, the Department of Justice (Justice) developed and implemented 86 new performance measures for the Edward Byrne Memorial Justice Assistance Grant (JAG) funds to state and local governments for criminal justice activities in 2009. While Justice continued to make efforts to improve these measures in 2009, we reported that 19 of the JAG performance measures we reviewed generally lacked, in varying degrees, several key attributes of successful performance measurement systems, including clarity, linkages with strategic or programmatic goals, objectivity, reliability, and the measurability of targets. Specifically, we found that 14 of the 19 measures were not clearly defined; 14 of the 19 measures were not linked to Justice's strategic or programmatic goals; 13 of the 19 measures were not reliable; and 17 of the 19 measures did not have measurable targets. Our report noted that by more fully incorporating such attributes of effective performance measures into its performance measurement and reporting system, Justice could facilitate accountability, be better positioned to monitor and assess results, and subsequently improve its grants management. We recommended that Justice should, in revising its performance measures, consider incorporating key attributes of successful performance measurement systems, such as clarity, reliability, linkage, objectivity, and measurable targets.
Justice concurred with the recommendations in our report and has actions underway that address them. In another example of an agency's publishing measures that do not necessarily contribute to its ability to assess grant program effectiveness, the Department of Homeland Security (DHS) implemented some performance measures for the State Homeland Security Program (SHSP) and Urban Areas Security Initiative (UASI) in the fiscal year 2011 grant guidance. However, the type of measures DHS published in the SHSP and UASI guidance did not contribute to DHS's ability to assess the effectiveness of these grant programs, but instead provided DHS with information to help it measure completion of tasks or activities. We recommended, among other things, that DHS revise its plan to ensure the timely implementation of performance measures to assess the effectiveness of these grants. According to DHS, it has efforts under way to develop additional measures to help it assess grant program effectiveness; however, until these measures are implemented, it will be difficult for DHS to determine the effectiveness of these grant-funded projects, which totaled $20.3 billion from fiscal years 2002 through 2011. As we have previously reported, performance measures that evaluate program results can help Congress make more informed policy decisions regarding program achievements and performance. Agencies could facilitate accountability, be better positioned to monitor and assess results, and subsequently improve their grants management by including key attributes of successful performance measurement systems in their performance measure revisions.

Data Collection and Validation Challenges.

Grant programs often rely on recipients' administrative systems to provide performance information. Our prior work has shown that agencies relying on third parties for performance data can have difficulty collecting the data as well as ascertaining its accuracy and quality. In past work, we have also found that the availability and credibility of performance data has been a long-standing weakness. An example of this can be seen in our November 2011 report on federal "green building" initiatives that foster—in part through the use of grant funds—construction and maintenance practices designed to make efficient use of resources, reduce environmental problems, and provide long-term financial and health benefits in the nonfederal sector. Eleven agencies implemented 94 federal initiatives, 47 of which were funded by grants. Agency officials reported that they may not have had information on the results of green building initiatives for the nonfederal sector, in part because they faced several challenges in gathering appropriate and reliable performance data, such as utility usage data for multifamily properties. These difficulties included obtaining the resources necessary to develop systems for accurate data collection, a lack of industry standards for performance data collection, third party utility companies' diverse policies governing data sharing, and the utility companies' wide-ranging capacities to collect data. In particular, HUD officials told us the quality of utility data can vary by utility company, especially for water consumption data—which can be incomplete and inaccurate and is often not available in electronic form. In other instances, actual performance data may not be available until after the completion of the grant project.
For example, Department of Energy (DOE) officials said that for the Energy Efficiency and Conservation Block Grant program (EECBG), actual energy savings data are generally available only after a project is completed; therefore, to comply with the program's reporting requirements, most recipients reported estimates calculated using the Environmental Protection Agency's Portfolio Manager tool. While DOE officials said they had anecdotal examples of program successes, DOE had experienced challenges in assessing the reasonableness of the energy-savings estimates provided by recipients because DOE did not require recipients to use the most up-to-date estimating tool when calculating and reporting energy-savings estimates. Consequently, DOE could not identify instances where recipients' estimates may need to be more carefully reviewed. We recommended, among other things, that DOE should solicit information on recipients' methods for estimating energy savings and verify that recipients use the most recent version of the estimating tool. To address our recommendation, DOE issued guidance effective June 23, 2011, that eliminates the requirement for grant recipients to calculate and report estimated energy savings. DOE officials said the calculation of estimated impact metrics will now be performed centrally by DOE by applying known national standards to existing grantee-reported performance metrics. Based on DOE's action, we concluded that DOE has addressed the intent of this recommendation. Even in federal grants with designs that favor performance accountability, challenges related to collecting and reporting performance data can affect the extent to which performance accountability can be achieved (GAO-11-379).

In our 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue, we reported on examples of how multiple federal grant programs, created without coordinated purposes and scopes, can result in structural grants management challenges. One example involved four DHS grant programs—the State Homeland Security Program, the Urban Areas Security Initiative, the Port Security Grant Program, and the Transit Security Grant Program. DHS's Federal Emergency Management Agency (FEMA) allocated about $20.3 billion to recipients through the four programs from fiscal years 2002 through 2011. These four grant programs have similar goals and fund similar activities in overlapping jurisdictions. For instance, many jurisdictions within designated Urban Areas Security Initiative regions also apply for and receive State Homeland Security Program funding. Similarly, port stakeholders in urban areas could receive funding for equipment such as patrol boats through both the Port Security Grant Program and the Urban Areas Security Initiative, and a transit agency could purchase surveillance equipment with Transit Security Grant Program or Urban Areas Security Initiative funding. We, as well as DHS's IG, concluded that FEMA should use more specific project-level data in making grant award decisions in order to identify and mitigate potential duplication. Our work, and the work of the DHS IG, concluded that coordinating the review of grant projects internally would give FEMA more complete information about applications across the four grant programs that could help FEMA identify and mitigate the risk of unnecessary duplication across grant applications.
We recommended in February 2012, among other things, that FEMA take steps to ensure that it collects project information with the level of detail needed to better position the agency to identify any potential unnecessary duplication within and across the four grant programs. DHS concurred with our recommendation in this area. In another example of this challenge, we found instances where Justice could improve how it targets nearly $3.9 billion to reduce the risk of potentially unnecessary duplication across more than 11,000 grant awards it makes annually. Justice's grant-making agencies had awarded funds from different grant programs to the same applicants whose applications described similar, and in some cases the same, purposes for using the grant funds. While we acknowledged that there may be times when Justice's decision to fund recipients in this manner is warranted, our work found that Justice made grant award decisions without visibility over whether the funds supported similar or the same purposes, thus potentially resulting in unintended, and unnecessary, duplication. We found that Justice had not assessed its grant programs to determine the extent to which they may overlap with one another and whether consolidation of grant programs may be appropriate. Further, Justice's grant-making agencies had not established consistent policies and procedures for sharing grant application information that could help them identify and mitigate unnecessary duplication in how recipients intend to use their grant awards. We recommended that Justice conduct an assessment to better understand the extent to which the department's grant programs overlap with one another and determine if grant programs may be consolidated to mitigate the risk of unnecessary duplication. To the extent that Justice identifies any statutory obstacles to consolidating its grant programs, it should work with Congress to address them, as needed. Justice concurred with our recommendations in this area. Addressing structural challenges such as these could achieve cost savings, enhance revenue, and result in greater efficiencies in grant programs. The process of distributing federal assistance through grants is complicated and involves many different parties, both public and private, with different organizational structures, sizes, and missions. In previous work, we have identified lack of collaboration among and between federal agencies, state and local governments, and nongovernmental grant participants as a challenge to effective grant implementation. Because grants management can be complex, collaboration among grant participants, particularly with regard to information sharing, is important. With this in mind, we have identified key practices to enhance and sustain collaboration among federal agencies. We have also recommended these same key practices to strengthen partnerships between government and nongovernmental entities, such as nonprofit organizations. In that same report, we describe an example related to hurricane recovery that involves difficulties in collaboration between federal agencies and state and local case management providers. Disaster case management is a process that assists people in identifying their service needs, locating and arranging services, and coordinating the services of multiple providers to help people recover from a disaster.
State and local agencies providing federally funded disaster case management services faced challenges in, among other things, obtaining timely and accurate information from the federal agencies overseeing the disaster case management programs. While FEMA had a lead role in coordinating other types of disaster assistance after Hurricanes Katrina and Rita, its role in coordinating disaster case management was not explicit. Initial coordination activities among federal agencies and case management providers were minimal following the hurricanes. As a result, we found that some victims may not have received case management services while others may have received services from multiple providers. We recommended, among other things, that FEMA establish a time line for developing a disaster case management program that includes practices to enhance coordination among stakeholders involved in this program. FEMA agreed with our recommendations in this area and reported that it would take steps to coordinate with stakeholders. Among other actions, FEMA has since held a disaster case management summit, and participants made recommendations for improving coordination among federal and nonfederal stakeholders that will be included in the disaster case management program guidance. We have also reported on collaboration challenges in coordinating transportation services for transportation-disadvantaged populations (GAO, Transportation-Disadvantaged Populations: Federal Coordination Efforts Could Be Further Strengthened, GAO-12-647 (Washington, D.C.: June 20, 2012)). While some federal departments were developing guidance and technical assistance for transportation coordination, other federal departments still had more work to do in identifying and assessing their transportation programs, working with other federal departments to identify opportunities for additional collaboration, and developing and disseminating policies and recipient guidance for coordinating transportation services. In June 2012, we reported that several state and local officials told us that there was not sufficient federal leadership and guidance on how to coordinate transportation services for the disadvantaged and that varying federal program requirements may hinder coordination of transportation services, acting as barriers to collaboration. In that report, we recommended that, in order to promote and enhance federal, state, and local coordination activities, the Secretary of Transportation, as the chair of the Coordinating Council, as well as the member agencies of the Coordinating Council, should complete and publish a strategic plan and report on their progress in implementing their recommendations. Education and Veterans Affairs generally agreed with our report, while HHS, HUD, and the Department of Transportation neither agreed nor disagreed. When awarding and managing federal grants, effective oversight and internal control are important to provide reasonable assurance to federal managers and taxpayers that grants are awarded properly, recipients are eligible, and federal grant funds are used as intended and in accordance with applicable laws and regulations. Internal control comprises the plans, methods, and procedures agencies use to be reasonably assured that their missions, goals, and objectives can be met. In numerous reviews over the years, we and agency IGs have identified weaknesses in agencies' internal controls for managing and overseeing grants. When such controls are weak, federal grant-making agencies face challenges in achieving grant program goals and in assuring the proper and effective use of federal funds, including avoiding improper payments.
Control Weaknesses in Monitoring and Overseeing Grant Programs. Agencies are responsible for overseeing and monitoring implementation of their grant programs to help ensure that recipients are meeting program and accountability requirements. Oversight procedures for monitoring the recipients' use of awarded funds, including site visits and review of recipient reports, can help agencies determine whether recipients are operating efficiently and effectively and whether program funds are being spent appropriately. Risk-based monitoring programs can help identify those areas in need of oversight resources. When agencies do not consider certain risk factors when selecting recipients for site visits, they may not know where to focus their monitoring resources. For example, in February 2011, the IG at the National Archives and Records Administration (NARA) reported that NARA, among other things, had not developed a risk-based process for monitoring and determining which grants to review. The IG found that NARA did not consider relevant factors, such as a program's age or size, or the experience of the recipient. The IG concluded that without a more structured process for determining and assessing risk, NARA could not provide adequate assurance that risks associated with its grant programs are properly addressed and mitigated. Federal agencies award grant funds to recipients, often states and local governments, and those grant recipients may then award, or pass through, subgrants to subrecipients. (A subrecipient is an entity that receives a grant award from the prime recipient of an award and is accountable to the prime recipient for the use of the federal funds provided by the subaward.) Prime recipients need to identify, prioritize, and manage potential at-risk subrecipients to ensure that grant goals are reached and funds are used properly. In April 2011, we reported on DOE's use of Recovery Act funds for the EECBG program. We found that EECBG recipients used various methods to monitor subrecipients, with some recipients providing more rigorous monitoring than others. DOE officials acknowledged that many recipients are resource constrained, limiting their ability to monitor subrecipients and ensure compliance with applicable federal requirements. DOE gathered specific information on recipient monitoring practices during on-site visits. However, because not all recipients were to receive site visits, DOE did not have specific information on monitoring for many recipients and, therefore, did not know whether those monitoring activities were sufficiently rigorous to ensure compliance with federal requirements. We recommended that DOE explore a means to capture information on the monitoring processes of all recipients to make certain that recipients have effective monitoring practices. DOE has taken some actions to increase its monitoring efforts; however, the actions may not result in capturing information on the monitoring practices of all recipients. Medicaid, the largest federal grant program, has also been the subject of numerous reviews. The challenges faced by HHS's Centers for Medicare & Medicaid Services (CMS) in overseeing fiscal management of the Medicaid program have been well-documented in our past work.
Because of concerns about the program’s fiscal management, size, growth, and diversity, Medicaid has been on our list of high-risk programs. Areas of concern in the Medicaid program include improper payments and inconsistent reviews of managed care rate setting by CMS. Government-wide Issues. Our work has identified weaknesses in grant oversight and accountability issues that span the government, including challenges in oversight of undisbursed grant award balances and significant levels of improper payments in grant programs. We have found issues and raised concerns about timely grant closeouts, including undisbursed funds remaining in grant accounts, across the federal government. For grant programs with a defined end date, closeout procedures help ensure that grant recipients have met all financial requirements, provided final reports, and returned any unused funds. We have reported that some agencies lack adequate systems or policies to properly monitor grant closeouts or did not deobligate funds from grants eligible for closeout in a timely manner. When agencies do not conduct closeout procedures in a timely manner, unused funds can be prevented from being used to help address the purpose of the grant. Further, failure to close out a grant and deobligate any unspent balances can allow recipients to continue to draw down federal funds even after the grant’s period of availability to the recipient has ended, making these funds more susceptible to waste, fraud, or mismanagement. In April 2012, we reported that, as of September 30, 2011, more than $794 million remained in expired grant accounts in the Payment Management System, the largest civilian federal payment system which made 68 percent of all federal grant disbursements in fiscal year 2010. These accounts were more than 3 months past the grant end date and had no activity for 9 months or more, with some balances remaining in grant accounts several years past their expiration date. Subsequently, OMB issued guidance instructing federal agencies to take appropriate action to close out grants in a timely manner. Federal agencies reported an estimated $115.3 billion in improper payments in fiscal year 2011. Many of the programs reporting improper payments were federal grant programs, including Medicaid and the National School Lunch program. Strong preventive controls are important as they serve as the front-line defense against improper payments, and effective monitoring and reporting are important to help detect emerging improper payment issues. In March 2012, we reported that many agencies and programs are in the process of implementing preventive controls that involve activities such as training, which can be a key element in any effort to prevent improper payments from occurring. example, CMS’s Medicaid Integrity Group trains state-level staff and sponsors education programs for beneficiaries and providers. Along with strong preventive controls, effective detection techniques, such as data mining and recovery auditing to quickly identify and recover improper payments, are important for reducing improper payments. GAO, Improper Payments: Remaining Challenges and Strategies for Government-wide Reduction Efforts, GAO-12-573T (Washington, D.C.: Mar. 28, 2012). the U.S. Government. A deficiency in internal control exists when the design or operation of a control does not allow management or employees, in the normal course of performing their assigned functions, to prevent or detect and correct misstatements on a timely basis. 
Federal agencies reported an estimated $115.3 billion in improper payments in fiscal year 2011. Many of the programs reporting improper payments were federal grant programs, including Medicaid and the National School Lunch program. Strong preventive controls are important because they serve as the front-line defense against improper payments, and effective monitoring and reporting are important to help detect emerging improper payment issues. In March 2012, we reported (GAO-12-573T) that many agencies and programs are in the process of implementing preventive controls that involve activities such as training, which can be a key element in any effort to prevent improper payments from occurring. For example, CMS's Medicaid Integrity Group trains state-level staff and sponsors education programs for beneficiaries and providers. Along with strong preventive controls, effective detection techniques, such as data mining and recovery auditing to quickly identify and recover improper payments, are important for reducing improper payments. Grants management internal control deficiencies were also reported as part of the audit of the fiscal year 2011 consolidated financial statements of the U.S. Government. A deficiency in internal control exists when the design or operation of a control does not allow management or employees, in the normal course of performing their assigned functions, to prevent or detect and correct misstatements on a timely basis. We based our finding on audits of agencies' fiscal year 2011 financial statements, where auditors at several federal agencies found grants management internal control deficiencies, primarily regarding inadequate monitoring and oversight of grant programs. For example, the auditor at HUD reported issues regarding timely action and follow-up with noncompliant recipients, as well as inadequate procedures to identify noncompliant recipients. We reported that these internal control deficiencies could adversely affect the federal government's ability to ensure that grant funds are being spent in accordance with applicable program laws and regulations. The capacity of grant-making agencies and recipients is a key issue in grants management that can affect program success. Capacity involves both the maintenance of appropriate resources and the ability to effectively manage those resources. Building sufficient capacity is a challenge that may involve significant costs or tradeoffs. Three relevant types of capacity are organizational, human capital, and financial. Organizational capacity captures the degree to which the grant-making agency or recipient is institutionally prepared for grants management and implementation. This may include having appropriate leadership, management structure, and size to efficiently and effectively implement the program and adapt as needed. For example, we recently reported that capacity was a concern for states, school districts, and schools in the School Improvement Grant program. States and districts both struggled to develop the necessary staff capacity to successfully support and oversee program implementation because of resource constraints. Officials from Education and several states told us that the grant required states to support local reform efforts to a much greater extent than they had in the past, and staff in some states had not yet developed the knowledge base to fulfill these responsibilities. Some states noted that such capacity limitations meant that the time staff could devote to administering the program and monitoring district implementation was significantly limited. Human capital capacity measures the extent to which an organization has sufficient staff, knowledge, and technical skills to effectively meet its program goals. Human capital needs shift over time as programs change and face new challenges. Human capital needs also shift as new technology is implemented and the organization finds new ways to leverage expertise. Human capital challenges at the federal, state, and local level can underlie the operational difficulties faced during program implementation. For example, we have previously reported that during the initial phases of Gulf Coast rebuilding following the hurricanes in 2005, officials at both the federal and state level initially lacked the human capital capacity to administer the public assistance grant program (GAO-09-129). In addition, local applicants initially lacked the staff to fully participate as partners in the program. Shortages of staff with the right skills and abilities, as well as the lack of continuity among rotating FEMA staff, contributed to delays in developing public assistance projects in Louisiana and Mississippi.
GAO, Disaster Recovery: FEMA’s Public Assistance Grant Program Experienced Challenges with Gulf Coast Rebuilding, GAO-09-129 (Washington, D.C.: Dec. 18, 2008). reported that because many nonprofits view cuts in clients served or services offered as unpalatable, they have reported that they often compromise vital “back-office” functions, which over time can affect their ability to meet their missions. Further, nonprofits’ strained resources limit their ability to build a financial safety net, which can create a precarious financial situation for them. Absent a sufficient safety net, nonprofits that experience delays in receiving their federal funding may be inhibited in their ability to bridge funding gaps. When funding is delayed, some nonprofits have reported that they either borrow funds on a line of credit or use cash reserves to provide services and pay bills until their grant awards are received. Collectively, these issues place stress on the nonprofit sector, diminishing its ability to continue to effectively partner with the federal government to provide services to vulnerable populations. Since this report does not contain any new audit work that evaluates the policies or operations of any federal agency in this report, we did not seek agency comments. However, because of the role of OMB and GSA in producing or managing data on grant outlays and the number of grant programs, we shared drafts of relevant excerpts of this report with cognizant officials at these agencies and we made technical clarifications where appropriate. We are sending copies of this report to other interested congressional committees, the Acting Director of OMB, and the Acting Administrator of GSA. This report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact Stanley J. Czerwinski at (202) 512-6806 or czerwinskis@gao.gov, or Beryl H. Davis at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to describe (1) the amount of grant funding to state and local governments for fiscal year 2011, how grant funding to state and local governments has changed over the last three decades, and difficulties related to identifying the number of such grant programs; and (2) selected grant challenges involving federal grants to state and local governments that have been identified in our previous work and that of federal inspectors general (IG) over the last several years. In scoping the research objectives for this work we decided to limit our review to federal grants involving state and local governments because reliable historical data exist for this group of grants and, according to the Office of Management and Budget (OMB), such grants represent roughly 80 percent of all federal grant funding. We could not identify a similarly- reliable data source for the wider universe of all federal grants. To do this work we took the actions described below and we discussed various issues related to federal grants and data on grants funding and programs with officials at the OMB and the General Services Administration (GSA), as these agencies have government-wide responsibilities related to grants, grants management, and grants data. 
To determine key information regarding grant funding for fiscal year 2011, the growth in grant funding over the last three decades, and shifts in the focus of grant funding during that time, we used OMB data, specifically, OMB's Historical Table 12.3, Total Outlays for Grants to State and Local Governments, by Function, Agency, and Program: 1940 – 2013 and Table 12.2, Total Outlays for Grants to State and Local Governments, by Function and Fund Group: 1940 – 2017. We extracted the data for fiscal years 1980 through 2011 and converted each fiscal year's outlay amount to 2011 constant dollars, which reflect adjustments for inflation (an illustrative sketch of this type of conversion appears below). We sorted the data by agency and budget function (i.e., purpose of the spending) to identify the top five grant-making agencies and the top five functions for which grants were awarded. To determine grant outlays as a percentage of total outlays, we also used OMB's Historical Table 6.1, Composition of Outlays: 1940-2017. Because these budget data have undergone rigorous review by OMB, they are generally considered reliable; we therefore determined that the data were sufficiently reliable for the purposes of this report. To describe issues related to identifying the number of grant programs, we reviewed our prior work and the work of others on federal grants and the Catalog of Federal Domestic Assistance (CFDA), the single authoritative, government-wide compendium and source for descriptions of federal programs that provide assistance or benefits to the American public. The online version, www.CFDA.gov, allows one to search for assistance programs using a number of search options, including the federal agency providing the assistance, program name, and assistance type. We reviewed research regarding methodologies used to count grant programs published by the Congressional Research Service (CRS). We discussed issues related to CFDA numbers and their relationship to the number of grant programs with GSA, the agency responsible for maintaining CFDA. We also inquired into issues regarding counting the number of grant programs with OMB. In addition to the OMB data described above, we identified other sources of data for information on the amounts of federal grants funding, including USASpending.gov and Census Bureau data. We analyzed and compared the different sources and describe how the data elements in each source differ. We discussed issues relating to USASpending.gov, including the reliability of the data, with GSA, the agency responsible for maintaining it. See appendix II for details about the data sources we identified and how they differ. To identify key issues and challenges related to the structure and operation of grants management, we reviewed previous relevant reports and audits by us, federal inspectors general, and others. We searched GAO's online database for grants management-related reports from 1995 to the present and reviewed selected relevant reports. To identify more recent issues and challenges for the examples in this report, we reviewed selected GAO reports from 2006 to 2012. For IG reports, we searched the websites of IGs at large and small grant-making agencies for reports related to grants management and for financial statement audit reports in which internal control weaknesses were identified. We determined whether the issues and challenges we identified still existed by reviewing our recommendation follow-up work. For reports by others, we researched follow-up work by the applicable IGs.
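To make the constant-dollar adjustment and agency ranking concrete, the following sketch shows one way such a conversion could be performed. It is a simplified illustration, not the analysis code used for this report: the column names, outlay figures, and deflator values are hypothetical placeholders, whereas the actual source data came from the OMB Historical Tables cited above.

```python
# Illustrative sketch only; not the analysis code used for this report.
# Column names, outlay figures, and deflator values are hypothetical placeholders.
import pandas as pd

# Hypothetical nominal grant outlays (billions of dollars) by fiscal year and agency.
outlays = pd.DataFrame({
    "fiscal_year": [1980, 1980, 2011, 2011],
    "agency": ["HHS", "DOT", "HHS", "DOT"],
    "nominal_outlays": [30.0, 13.0, 350.0, 60.0],
})

# Hypothetical price index with fiscal year 2011 as the base year (2011 = 1.00).
deflator = pd.Series({1980: 0.42, 2011: 1.00})

# Constant-dollar conversion: dividing each year's nominal outlays by that year's
# deflator restates the spending in fiscal year 2011 purchasing power.
outlays["constant_2011_outlays"] = (
    outlays["nominal_outlays"] / outlays["fiscal_year"].map(deflator)
)

# Rank agencies by total constant-dollar outlays to identify the largest grant makers.
top_agencies = (
    outlays.groupby("agency")["constant_2011_outlays"]
    .sum()
    .sort_values(ascending=False)
    .head(5)
)
print(top_agencies)
```

The same grouping step, applied to a budget function field instead of the agency field, would yield the top functions for which grants were awarded.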
We also searched various federal, public policy, and research organizations' websites, including those of CRS and OMB, to identify relevant reports and other literature regarding federal grant programs and how they are structured and managed. We shared drafts of the relevant sections of this report with cognizant officials at OMB and GSA. They generally agreed with the contents of this report, and we incorporated their technical clarifications where appropriate. Various sources exist for data on the amount the federal government spends on grants, including Office of Management and Budget (OMB) budget data, USASpending.gov, and Census Bureau surveys of state and local governments. Each source was established and is used for slightly different purposes and contains different data elements. These differences can create challenges for those examining federal grants management issues when trying to identify the scope of federal spending on grants. This appendix explains the purposes for and the differences in the data contained in each source. OMB Budget Data. OMB collects data from federal agencies each year to prepare the President's budget. OMB uses these data for a number of purposes related to the budget, including producing Historical Tables and Analytical Perspectives. One series of Historical Tables contains information on federal outlays for grants to state and local governments. According to OMB, the purpose of this series of Historical Tables is to identify federal government outlays that constitute income to state and local governments to help finance their services. Analytical Perspectives, according to OMB, is designed to highlight specific subject areas or provide other presentations of budget data that put the budget in context. USASpending.gov. In response to the Federal Funding Accountability and Transparency Act (FFATA), OMB established USASpending.gov in December 2007 to enhance the transparency of government expenditures. FFATA required that OMB establish a publicly available online database that would allow users to search for detailed information about entities that are awarded federal grants, loans, contracts, and other forms of financial assistance. The Congressional Research Service (CRS) reported that the premise of the law was that by making details of federal spending available to the public, government officials would be less likely to fund projects that might be perceived as wasteful. In addition, the new database required by the law would also help citizens better understand how the government distributes funds. For grant awards, federal agencies report the amount of obligations they incur and information on the recipients of those awards, starting in fiscal year 2007, in accordance with OMB guidance for agency data submissions. Roughly 490,000 grant-related transactions were reported by federal agencies for fiscal year 2011. Census Bureau Surveys. The Census Bureau collects data from state and local governments, including data on grants provided by the federal government. This census of governments is one component of the nation's economic census required by law (13 U.S.C. § 161) and provides, among other things, periodic and comprehensive statistics about governments and their financial activities. Consolidated Federal Funds Report.
Prior to fiscal year 2011, the annual Consolidated Federal Funds Report (CFFR) was prepared by the Census Bureau from data submitted by federal agencies to the Federal Assistance Awards Data System (FAADS) and other selected agency data. With the enactment of FFATA, which required agencies to report data elements in addition to those that were captured by FAADS, and due to funding issues, the Census Bureau stopped publishing the CFFR after the fiscal year 2010 report. The information is now available for the public to review on USASpending.gov. Table 2 summarizes the data elements included and not included in these data sources and provides more information about them. Cooperative agreements are another form of financial assistance similar to grants, but in which the federal agency is more involved with the recipient in implementing the program. An obligation is a definite commitment that creates a legal liability of the government for payment of goods and services ordered or received. An agency incurs an obligation, for example, when it awards a grant. OMB's guidance for submitting data to USASpending.gov states that, under the Recovery Act, agencies are required to report all transactions but can aggregate amounts under $25,000, and that agencies should begin to include aggregate information for all funding types. GSA officials told us agencies are not required by FFATA to include awards under $25,000 related to non-Recovery Act spending. Federal grants are typically subject to a wide range of substantive and other requirements under the particular program statutes as well as implementing agency regulations and other guidance that applies to them. They are also governed by many additional cross-cutting requirements that are common to most federal assistance programs. Figure 6 shows the relevant grant-related public laws that are discussed below. The Office of Management and Budget (OMB) has long been involved in grants management in the executive branch, dating to its reorganization within the Executive Office of the President in 1970. In 1971, OMB published standards for establishing consistency and uniformity in the administration of grants and other types of financial assistance to state and local governments and certain Indian tribal governments. However, even with the publication of OMB's circular for grant administration, the Commission on Government Procurement, studying federal spending practices in the early 1970s, found that "federal grant-type activities are a vast and complex collection of assistance programs, functioning with little central guidance in a variety of ways that are often inconsistent even for similar programs and projects." The Commission also found that because there were no statutory guidelines for executive agencies to distinguish between assistance relationships, such as grants, and procurement relationships with nonfederal entities, agencies were inappropriately using grants to avoid competition and certain requirements that apply to the procurement system. Thereafter, Congress enacted the Federal Grant and Cooperative Agreement Act of 1977 to establish standards for executive agencies in selecting the most appropriate funding vehicle. The act directed OMB to provide guidance to executive agencies to promote consistent and efficient use of funding vehicles, and in 1978, OMB issued supplementary interpretive guidelines to help agencies distinguish between assistance programs and procurement relationships.
In 1984, the Administration created the President's Council on Management Improvement, assigning the Deputy Director of OMB as Chairman of the Council. While the Council's role was to review overall management of government programs, several interagency task forces were created under the Council to review various aspects of grants management. Based on recommendations of one task force, the President issued Executive Order No. 12549 in 1986 (51 Fed. Reg. 6370 (Feb. 21, 1986)), requiring agencies to participate in a government-wide nonprocurement debarment and suspension system. Thereafter, OMB issued guidelines prescribing the program coverage, government-wide criteria, minimum due process procedures, and other guidance for the system. Another interagency task force explored streamlining the existing guidance for managing federal aid programs, and based on that review, in 1987, the President directed OMB to revise Circular No. A-102, "Grants and Cooperative Agreements with State and Local Governments," to specify uniform, government-wide terms and conditions for grants to state and local governments. The President further directed executive agencies to propose and issue common regulations adopting the terms and conditions set out by OMB verbatim, modified where necessary to reflect inconsistent statutory requirements. These common rules are largely identical regulations that are binding on the agencies' grantees. Several grant-related laws enacted during the 1980s focused on promoting accountability and transparency, and preventing abuse, within federal assistance programs. The Single Audit Act, as amended, provides uniform requirements for annual audits of nonfederal entities that expend more than $500,000 in federal awards annually. Prior to this act's enactment, there were no uniform audit requirements for state and local government grantees, and these grantees were often subject to overlapping and conflicting audit requirements associated with each of the assistance programs in which they participated. Congress enacted other federal statutory provisions applicable to all recipients of federal funds, including the prohibition against lobbying with grant funds under the "Byrd Amendment" and the requirement to maintain a drug-free workplace as a precondition of receiving grant funding. Subsequent to the enactment of each of these acts, OMB issued guidance for agencies to implement the requirements of the acts. One of the key efforts to make government operations more efficient and effective and to prevent waste, fraud, abuse, and financial mismanagement came with the passage of the Chief Financial Officers Act of 1990. The act builds off other legislative initiatives, such as the Single Audit Act, to improve financial management practices in the federal government. The Chief Financial Officers Act created within OMB the Office of Federal Financial Management, with specific statutory responsibility for financial management policy, including grants management, for the federal government. While OMB had long taken the lead role in financial management, no entity had been statutorily vested with the responsibility to coordinate financial management practices in the federal government.
Along with the executive branch's efforts to streamline and simplify grants management in the 1980s and 1990s, Congress enacted the Federal Financial Assistance Management Improvement Act of 1999, commonly known as "Public Law 106-107" (Pub. L. No. 106-107, 113 Stat. 1486 (Nov. 20, 1999)), which required each federal grant-making agency to develop and implement a plan that simplifies the application, administration, and reporting procedures for financial assistance programs, including coming up with a common application and reporting system. Following Public Law 106-107 and the President's announcement of the E-government initiative in his fiscal year 2002 Management Agenda, OMB established Grants.gov as a central storehouse for information on thousands of grant programs. To further improve transparency and provide the public with information on federal spending, Congress enacted the Federal Funding Accountability and Transparency Act of 2006. The act directed OMB to ensure the existence and operation of a single searchable website to be used by the public that shows the name of the entity receiving a federal award, the amount of the award, information on the award, and other information. OMB established USASpending.gov in December 2007 to fulfill the act's requirements. OMB consolidated its grants-related circulars as well as the agency common rules into Title 2 of the Code of Federal Regulations. Currently, OMB is in the process of re-issuing guidance for each of the common rules under Title 2, allowing federal grant-making agencies to simply adopt the regulations and thereby create a central point where all grantees can locate government-wide grant requirements. Concurrent with the streamlining effort, OMB is also working with other stakeholders to evaluate potential reforms in federal grant policies. In an effort to reduce improper payments, OMB created the Single Audit Workgroup with federal and state members who studied a variety of options for improving the effectiveness of single audits. In February 2012, OMB published an advance notice of proposed guidance detailing a series of reform ideas that would standardize information collection across agencies, adopt a risk-based model for single audits, and provide new administrative approaches for determining and monitoring the allocation of federal funds. The comment period closed at the end of March 2012; OMB has not yet issued proposed guidance based on the comments received. Until the fall of 2011, OMB coordinated grants management policy through two federal boards: the Grants Policy Committee, which was established in 1999, and the Grants Executive Board, which was established in 2004. The Grants Executive Board oversaw the implementation work groups and the Grants.gov initiative, while the Grants Policy Committee was composed of grants policy experts from across the federal government whose mission was to simplify and streamline grant administration policies. In October 2011, OMB announced the creation of the Council on Financial Assistance Reform (COFAR), which replaced these two federal grant bodies. The COFAR is charged with identifying emerging issues, challenges, and opportunities in grants management and policy and providing recommendations to OMB on policies and actions to improve grants administration. According to OMB officials, the COFAR is also expected to serve as a clearinghouse of information on innovations and best practices in grants management.
In contrast to the Grants Policy Committee and the Grants Executive Board, which together included members from 26 agencies, the COFAR is made up of the OMB Controller, representatives from the eight largest grant-making agencies, and a representative from one of the smaller federal grant-making agencies. The latter serves a rotating two-year term. Also unlike the Grants Policy Committee, which largely consisted of program-level grants staff, the membership of the COFAR is at a higher level, consisting of the Chief Financial Officers of participating agencies. OMB officials told us that the COFAR is now working toward identifying priorities in grants management that may include various initiatives that were started by the defunct Grants Policy Committee and Grants Executive Board. Details on these have yet to be decided. In addition to the individuals named above, Peter Del Toro, Assistant Director; Kimberly A. McGatlin, Assistant Director; Laura M. Bednar; Maria C. Belaval; Anthony M. Bova; Amy R. Bowser; Melissa L. King; and Diane N. Morris were the major contributors to this report. Additionally, Virginia A. Chanley, Jason Kelly, and Robert Robinson made key contributions.
Related GAO Products
Green Building: Federal Initiatives for the Nonfederal Sector Could Benefit from More Interagency Collaboration. GAO-12-79. Washington, D.C.: November 2, 2011.
Recovery Act: Energy Efficiency and Conservation Block Grant Recipients Face Challenges Meeting Legislative and Program Goals and Requirements. GAO-11-379. Washington, D.C.: April 7, 2011.
School Improvement Grants: Education Should Take Additional Steps to Enhance Accountability for Schools and Contractors. GAO-12-373. Washington, D.C.: April 11, 2012.
School Improvement Grants: Early Implementation Under Way, but Reforms Affected by Short Time Frames. GAO-11-741. Washington, D.C.: July 25, 2011.
Recovery Act: Head Start Grantees Expand Services, but More Consistent Communication Could Improve Accountability and Decisions about Spending. GAO-11-166. Washington, D.C.: December 15, 2010.
District of Columbia Public Education: Agencies Have Enhanced Internal Controls Over Federal Payments for School Improvement, But More Consistent Monitoring Needed. GAO-11-16. Washington, D.C.: November 18, 2010.
University Research: Policies for the Reimbursement of Indirect Costs Need to Be Updated. GAO-10-937. Washington, D.C.: September 8, 2010.
Grant Monitoring: Department of Education Could Improve Its Processes with Greater Focus on Assessing Risks, Acquiring Financial Skills, and Sharing Information. GAO-10-57. Washington, D.C.: November 19, 2009.
Discretionary Grants: Further Tightening of Education's Procedures for Making Awards Could Improve Transparency and Accountability. GAO-06-268. Washington, D.C.: February 21, 2006.
Medicaid: Federal Oversight of Payments and Program Integrity Needs Improvement. GAO-12-674T. Washington, D.C.: April 25, 2012.
National Institutes of Health: Awarding Process, Awarding Criteria, and Characteristics of Extramural Grants Made with Recovery Act Funding. GAO-10-848. Washington, D.C.: August 6, 2010.
National Institutes of Health: Completion of Comprehensive Risk Management Program Essential to Effective Oversight. GAO-09-687. Washington, D.C.: September 11, 2009.
Justice Grant Programs: DOJ Should Do More to Reduce the Risk of Unnecessary Duplication and Enhance Program Assessment. GAO-12-517. Washington, D.C.: July 12, 2012.
Managing Preparedness Grants and Assessing National Capabilities: Continuing Challenges Impede FEMA's Progress. GAO-12-526T. Washington, D.C.: March 20, 2012.
Homeland Security: DHS Needs Better Project Information and Coordination among Four Overlapping Grant Programs. GAO-12-303. Washington, D.C.: February 28, 2012.
Recovery Act: Department of Justice Could Better Assess Justice Assistance Grant Program Impact. GAO-11-87. Washington, D.C.: October 15, 2010.
Hurricane Recovery: Federal Government Provided a Range of Assistance to Nonprofits following Hurricanes Katrina and Rita. GAO-10-800. Washington, D.C.: July 30, 2010.
Disaster Recovery: FEMA's Long-term Assistance Was Helpful to State and Local Governments but Had Some Limitations. GAO-10-404. Washington, D.C.: March 30, 2010.
Disaster Assistance: Greater Coordination and an Evaluation of Programs' Outcomes Could Improve Disaster Case Management. GAO-09-561. Washington, D.C.: July 8, 2009.
Disaster Recovery: FEMA's Public Assistance Grant Program Experienced Challenges with Gulf Coast Rebuilding. GAO-09-129. Washington, D.C.: December 18, 2008.
Transportation-Disadvantaged Populations: Federal Coordination Efforts Could Be Further Strengthened. GAO-12-647. Washington, D.C.: June 20, 2012.
Surface Transportation: Competitive Grant Programs Could Benefit from Increased Performance Focus and Better Documentation of Key Decisions. GAO-11-234. Washington, D.C.: March 30, 2011.
Intercity Passenger Rail: Recording Clearer Reasons for Awards Decisions Would Improve Otherwise Good Grant-making Practices. GAO-11-283. Washington, D.C.: March 10, 2011.
Metropolitan Planning Organizations: Options Exist to Enhance Transportation Planning Capacity and Federal Oversight. GAO-09-868. Washington, D.C.: September 9, 2009.
Transportation-Disadvantaged Populations: Some Coordination Efforts Among Programs Providing Transportation Services, but Obstacles Persist. GAO-03-697. Washington, D.C.: June 30, 2003.
Grants Management: Action Needed to Improve the Timeliness of Grant Closeouts by Federal Agencies. GAO-12-360. Washington, D.C.: April 16, 2012.
Improper Payments: Remaining Challenges and Strategies for Government-wide Reduction Efforts. GAO-12-573T. Washington, D.C.: March 28, 2012.
2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.
Federal Grants: Improvements Needed in Oversight and Accountability Processes. GAO-11-773T. Washington, D.C.: June 23, 2011.
Grants.gov: Additional Action Needed to Address Persistent Governance and Funding Challenges. GAO-11-478. Washington, D.C.: May 6, 2011.
Government Performance: GPRA Modernization Act Provides Opportunities to Help Address Fiscal, Performance, and Management Challenges. GAO-11-466T. Washington, D.C.: March 16, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Recovery Act: Opportunities to Improve Management and Strengthen Accountability over States' and Localities' Uses of Funds. GAO-10-999. Washington, D.C.: September 20, 2010.
Recovery Act: Further Opportunities Exist to Strengthen Oversight of Broadband Stimulus Programs. GAO-10-823. Washington, D.C.: August 4, 2010.
State and Local Governments: Fiscal Pressures Could Have Implications for Future Delivery of Intergovernmental Programs. GAO-10-899. Washington, D.C.: July 30, 2010.
Legal Services Corporation: Improvements Needed in Controls over Grant Awards and Grantee Effectiveness. GAO-10-540. Washington, D.C.: June 11, 2010.
Nonprofit Sector: Treatment and Reimbursement of Indirect Costs Vary among Grants, and Depend Significantly on Federal, State, and Local Government Practices. GAO-10-477. Washington, D.C.: May 18, 2010.
Streamlining Government: Opportunities Exist to Strengthen OMB's Approach to Improving Efficiency. GAO-10-394. Washington, D.C.: May 7, 2010.
Electronic Government: Implementation of the Federal Funding Accountability and Transparency Act of 2006. GAO-10-365. Washington, D.C.: March 12, 2010.
Recovery Act: Status of States' and Localities' Use of Funds and Efforts to Ensure Accountability. GAO-10-231. Washington, D.C.: December 10, 2009.
Grants Management: Grants.gov Has Systemic Weaknesses That Require Attention. GAO-09-589. Washington, D.C.: July 15, 2009.
Recovery Act: Consistent Policies Needed to Ensure Equal Consideration of Grant Applications. GAO-09-590R. Washington, D.C.: April 29, 2009.
Single Audit: Opportunities Exist to Improve the Single Audit Process and Oversight. GAO-09-307R. Washington, D.C.: March 13, 2009.
Nonprofit Sector: Significant Federal Funds Reach the Sector through Various Mechanisms, but More Complete and Reliable Funding Data Are Needed. GAO-09-193. Washington, D.C.: February 26, 2009.
Grants Management: Attention Needed to Address Undisbursed Balances in Expired Grant Accounts. GAO-08-432. Washington, D.C.: August 29, 2008.
Grants Management: Enhancing Performance Accountability Provisions Could Lead to Better Results. GAO-06-1046. Washington, D.C.: September 29, 2006.
Grants Management: Grantees' Concerns with Efforts to Streamline and Simplify Processes. GAO-06-566. Washington, D.C.: July 28, 2006.
Principles of Federal Appropriations Law: Third Edition, Volume II. GAO-06-382SP. Washington, D.C.: February 2006.
Grants Management: Additional Actions Needed to Streamline and Simplify Processes. GAO-05-335. Washington, D.C.: April 18, 2005.
Federal Assistance: Grant System Continues to Be Highly Fragmented. GAO-03-718T. Washington, D.C.: April 29, 2003.
Grant Programs: Design Features Shape Flexibility, Accountability, and Performance Information. GAO/GGD-98-137. Washington, D.C.: June 22, 1998.
Federal Grants: Design Improvement Could Help Federal Resources Go Further. GAO/AIMD-97-7. Washington, D.C.: December 18, 1996.
Block Grants: Issues in Designing Accountability Provisions. GAO/AIMD-95-226. Washington, D.C.: September 1, 1995.
Block Grants: Characteristics, Experience, and Lessons Learned. GAO/HEHS-95-74. Washington, D.C.: February 9, 1995.
Grants are a form of federal assistance consisting of payments in cash or in kind for a specified purpose, and they represent an important tool for achieving national objectives. They vary greatly, including in the types of programs they fund, the methods they use to allocate funds to recipients, and the amount of discretion they give to the grant recipient on how the funds are spent. The Office of Management and Budget (OMB) has previously estimated that grants to state and local governments represent roughly 80 percent of all federal grant funding, with the balance going to recipients such as nonprofit organizations, research institutions, or individuals. In a time of fiscal constraint, continuing to support the current scope and breadth of federal grants to state and local governments will be a challenge. In response to a request, this report (1) provides information regarding the amount of grant funding to state and local governments for fiscal year 2011, how such funding has changed over the last three decades, and difficulties related to identifying the exact number of such grant programs; and (2) identifies selected grants management challenges that have been identified in previous work by GAO and federal IGs over the last several years. Toward this end, GAO analyzed data from OMB and the Catalog of Federal Domestic Assistance and conducted a review of previous reports from GAO and federal IGs. Federal outlays for grants to state and local governments totaled more than $606 billion in fiscal year 2011. Over the last three decades, these grants have consistently been a significant component of federal spending, but the focus of this spending has changed over time. For example, during this period the proportion of federal outlays dedicated to Medicaid grants more than tripled, rising from 2.4 percent of total federal government outlays in 1980 to 7.6 percent in 2011. The increase in federal outlays for Medicaid and other health-related grant programs was offset by an approximately equivalent decrease in grants to state and local governments targeted for other areas such as transportation, education, and regional development. GAO's prior work and the work of others have also shown that the number of federal grant programs directed to state and local governments has generally increased over the last three decades. However, definitively determining the number of such grant programs presents difficulties: there is no consensus on a methodology for defining and counting grant programs, and data limitations in the Catalog of Federal Domestic Assistance further complicate the effort. GAO and federal inspectors general (IG) have previously reported on a variety of management challenges involving federal grants to state and local governments, many of which can be grouped into the following five topic areas:
Challenges related to effectively measuring grant performance: A lack of appropriate performance measures and accurate data can limit agencies' ability to effectively measure grant program performance. This can affect the ability of federal agencies to ensure that grant funds are effectively spent.
Uncoordinated grant program creation: Numerous federal grant programs have been created over time without coordinated purposes or scope. This can result in grants management challenges such as unnecessary duplication across grant programs and overlap in funding.
Need for better collaboration: A lack of collaboration among grant program participants can impede effective grant implementation in areas such as knowledge sharing and defining clear leadership roles.
Internal control weaknesses: When internal controls in grants management and oversight are weak, federal grant-making agencies face challenges in achieving program goals and assuring the proper and effective use of federal funds. Effective controls can help avoid improper grant payments.
Lack of agency or recipient capacity: Capacity reflects the organizational, financial, and human capital resources available for the implementation of grant programs. A lack of capacity can adversely affect an agency's or recipient's ability to manage and implement grant programs.
GAO is not making any recommendations in this report.
As reliance on our nation’s critical infrastructures grows, so do the potential threats and attacks that could disrupt critical systems and operations. In response to the potential consequences, federal awareness of the importance of securing our nation’s critical infrastructures, which underpin our society, economy, and national security, has been evolving since the mid-1990s. For example, Presidential Decision Directive 63 (PDD 63), issued in 1998, described the federal government’s strategy for cooperative efforts with state and local governments and the private sector to protect the systems that are essential to the minimum operations of the economy and the government from physical and cyber attack. In 2002, the Homeland Security Act created the Department of Homeland Security, which was given responsibility for developing a national plan; recommending measures to protect the critical infrastructure; and collecting, analyzing, and disseminating information to government and private-sector entities to deter, prevent and respond to terrorist attacks. More recently, HSPD-7, issued in December 2003, defined federal responsibilities for critical infrastructure protection, superseding PDD 63. Federal awareness of the importance of securing our nation’s critical infrastructures has continued to evolve since the mid-1990s. Over the years, a variety of working groups has been formed, special reports written, federal policies issued, and organizations created to address the issues that have been raised. Key documents that have shaped the development of the federal government’s CIP policy include: Presidential Decision Directive 63 (PDD 63), The Homeland Security Act of 2002, The National Strategies for Homeland Security, to Secure Cyberspace and for the Physical Protection of Critical Infrastructures and Key Assets, and Homeland Security Presidential Directives 7 (HSPD-7) and 9 (HSPD-9). Presidential Decision Directive 63 Established an Initial CIP Strategy In 1998, the President issued PDD 63, which described a strategy for cooperative efforts by government and the private sector to protect the physical and cyber-based systems essential to the minimum operations of the economy and the government. PDD 63 called for a range of actions that were intended to improve federal agency security programs, improve the nation’s ability to detect and respond to serious computer-based and physical attacks, and establish a partnership between the government and the private sector. Although it was superseded in December 2003 by HSPD-7, PDD 63 provided the foundation for the development of the current sector-based CIP approach. To accomplish its goals, PDD 63 established and designated organizations to provide central coordination and support, including the National Infrastructure Protection Center (NIPC), an organization within the FBI, which was expanded to address national-level threat assessment, warning, vulnerability, and law enforcement investigation and response. To ensure the coverage of critical sectors, PDD 63 identified eight infrastructures and five functions. For each of the infrastructures and functions, the directive designated lead federal agencies, referred to as sector liaisons, to work with their counterparts in the private sector, referred to as sector coordinators. Among other responsibilities, PDD 63 stated that sector liaisons should identify and access economic incentives to encourage sector information sharing and other desired behavior. 
To facilitate private-sector participation, PDD 63 also encouraged the voluntary creation of information sharing and analysis centers (ISACs) to serve as mechanisms for gathering, analyzing, and appropriately sanitizing and disseminating information to and from infrastructure sectors and the federal government through NIPC. PDD 63 also suggested several key ISAC activities to effectively gather, analyze, and disseminate information, activities that could improve the security postures of the individual sectors and provide an improved level of communication within and across sectors and all levels of government. These activities are establishing baseline statistics and patterns on the various infrastructures; serving as a clearinghouse for information within and among the various sectors; providing a library of historical data for use by the private sector and government; and reporting private-sector incidents to NIPC. The Homeland Security Act of 2002, signed by the President on November 25, 2002, established DHS. To help accomplish its mission, the act established five undersecretaries, among other entities, with responsibility over directorates for management, science and technology, information analysis and infrastructure protection, border and transportation security, and emergency preparedness and response. The act made the Information Analysis and Infrastructure Protection (IAIP) Directorate within the department responsible for CIP functions and transferred to it the functions, personnel, assets, and liabilities of several existing organizations with CIP responsibilities, including NIPC (other than the Computer Investigations and Operations Section). IAIP is responsible for accessing, receiving, and analyzing law enforcement information, intelligence information, and other threat and incident information from respective agencies of federal, state, and local governments and the private sector, and for combining and analyzing such information to identify and assess the nature and scope of terrorist threats. IAIP is also tasked with coordinating with other federal agencies to administer the Homeland Security Advisory System to provide specific warning information along with advice on appropriate protective measures and countermeasures. Further, IAIP is responsible for disseminating, as appropriate, information analyzed by DHS, within the department, to other federal agencies, state and local government agencies, and private-sector entities. Moreover, as stated in the Homeland Security Act of 2002, IAIP is responsible for (1) developing a comprehensive national plan for securing the key resources and critical infrastructure of the United States and (2) recommending measures to protect the key resources and critical infrastructure of the United States in coordination with other federal agencies and in cooperation with state and local government agencies and authorities, the private sector, and other entities. The National Strategy for Homeland Security identifies information sharing and systems as one foundation for evaluating homeland security investments across the federal government. It also identifies initiatives to enable critical infrastructure information sharing and to integrate sharing across state and local government, private industry, and citizens. Consistent with the original intent of PDD 63, the National Strategy for Homeland Security states that, in many cases, sufficient incentives exist in the private market for addressing the problems of CIP.
However, the strategy also discusses the need to use all available policy tools to protect the health, safety, or well-being of the American people. It mentions federal grant programs to assist state and local efforts, legislation to create incentives for the private sector, and, in some cases, regulation.

The National Strategy to Secure Cyberspace provides an initial framework for both organizing and prioritizing efforts to protect our nation's cyberspace. It also provides direction to federal departments and agencies that have roles in cyberspace security and identifies steps that state and local governments, private companies and organizations, and individual Americans can take to improve our collective cybersecurity. The strategy warns that the nation's private-sector networks are increasingly targeted and that private-sector organizations will likely be the first to detect attacks with potential national significance. According to the cyberspace strategy, ISACs, which possess unique operational insight into their industries' core functions and will help provide the necessary analysis to support national efforts, are expected to play an increasingly important role in the National Cyberspace Security Response System and the overall missions of homeland security. In addition, the cyberspace strategy identifies DHS as the central coordinator for cyberspace efforts and requires it to work closely with the ISACs to ensure that they receive timely threat and vulnerability data that can be acted on and to coordinate voluntary contingency planning efforts. The strategy reemphasizes that the federal government encourages the private sector to continue to establish ISACs and, further, to enhance the analytical capabilities of existing ISACs. Moreover, the strategy stresses the need to improve and enhance public/private information sharing about cyber attacks, threats, and vulnerabilities and to encourage broader information sharing on cybersecurity among nongovernmental organizations with significant computing resources. The National Strategy to Secure Cyberspace also states that the market is to provide the major impetus to improve cybersecurity and that regulation will not become a primary means of securing cyberspace.

The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets provides a statement of national policy to remain committed to protecting critical infrastructures and key assets from physical attacks. It outlines three key objectives to focus the national protection effort: (1) identifying and assuring the protection of the most critical assets, systems, and functions; (2) assuring the protection of infrastructures that face an imminent threat; and (3) pursuing collaborative measures and initiatives to assure the protection of other potential targets. The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets also states that further government leadership and intense collaboration between public- and private-sector stakeholders are needed to create a more effective and efficient information-sharing process to enable our core protective missions.
Some of the specific initiatives include defining protection-related information requirements and establishing effective, efficient information-sharing processes; promoting the development and operation of critical sector ISACs, including developing advanced analytical capabilities; improving processes for domestic threat data collection, analysis, and dissemination to state and local governments and private industry; and completing implementation of the Homeland Security Advisory System. The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets reiterates that additional regulatory directives and mandates should be necessary only in instances where market forces are insufficient to prompt the necessary investments to protect critical infrastructures and key assets.

In December 2003, the President issued HSPD-7, which established a national policy for federal departments and agencies to identify and prioritize critical infrastructure and key resources and to protect them from terrorist attack. It superseded PDD 63. HSPD-7 defines responsibilities for DHS, sector-specific agencies (formerly referred to as lead agencies) that are responsible for addressing specific critical infrastructure sectors, and other departments and agencies. It instructs federal departments and agencies to identify, prioritize, and coordinate the protection of critical infrastructure to prevent, deter, and mitigate the effects of attacks. The Secretary of Homeland Security is assigned several responsibilities, including coordinating the national effort to enhance critical infrastructure protection; identifying, prioritizing, and coordinating the protection of critical infrastructure, emphasizing protection against catastrophic health effects or mass casualties; establishing uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors; and serving as the focal point for cyberspace security activities, including analysis, warning, information sharing, vulnerability reduction, mitigation, and recovery efforts for critical infrastructure information systems. To ensure the coverage of critical sectors, HSPD-7 designated sector-specific agencies for the critical infrastructure sectors identified in the National Strategy for Homeland Security (see table 1). These agencies are responsible for infrastructure protection activities in their assigned sectors, which include coordinating and collaborating with relevant federal agencies, state and local governments, and the private sector to carry out their responsibilities; conducting or facilitating vulnerability assessments of the sector; encouraging the use of risk management strategies to protect against and mitigate the effects of attacks against the critical infrastructure; identifying, prioritizing, and coordinating the protection of critical infrastructure; facilitating the sharing of information about physical and cyber threats, vulnerabilities, incidents, potential protective measures, and best practices; and reporting to DHS on an annual basis on their activities to meet these responsibilities. Further, the sector-specific agencies are to continue to encourage the development of information-sharing and analysis mechanisms and to support sector-coordinating mechanisms. HSPD-7 does not suggest any specific ISAC activities.
In January 2004, the President issued HSPD-9, which established a national policy to defend the agriculture and food system against terrorist attacks, major disasters, and other emergencies. HSPD-9 defines responsibilities for DHS; for lead federal agencies, or sector-specific agencies, that are responsible for addressing specific critical infrastructure sectors; and for other departments and agencies. It instructs federal departments and agencies to protect the agriculture and food system from terrorist attacks, major disasters, and other emergencies by identifying and prioritizing sector-critical infrastructure and key resources for establishing protection requirements, developing awareness and early warning capabilities to recognize threats, mitigating vulnerabilities at critical production and processing nodes, enhancing screening procedures for domestic and imported products, and enhancing response and recovery procedures. In addition, the Secretary of Homeland Security, in coordination with the Secretaries of Agriculture and Health and Human Services and other appropriate federal departments and agencies, is assigned responsibilities that include expanding and continuing vulnerability assessments of the agriculture and food sectors and working with appropriate private-sector entities to establish an effective information-sharing and analysis mechanism for agriculture and food.

We have made numerous recommendations over the last several years related to information-sharing functions that have been transferred to DHS. One significant area of our work concerns the federal government's CIP efforts, which are focused on sharing information on incidents, threats, and vulnerabilities and providing warnings related to critical infrastructures both within the federal government and between the federal government and state and local governments and the private sector. Although improvements have been made in protecting our nation's critical infrastructures and continuing efforts are in progress, further efforts are needed to address the following critical CIP challenges that we have identified: developing a comprehensive and coordinated national plan to facilitate CIP information sharing that clearly delineates the roles and responsibilities of federal and nonfederal CIP entities, defines interim objectives and milestones, sets timeframes for achieving objectives, and establishes performance measures; developing fully productive information-sharing relationships within the federal government and among the federal government and state and local governments and the private sector; improving the federal government's capabilities to analyze incident, threat, and vulnerability information obtained from numerous sources and share appropriate timely, useful warnings and other information concerning both cyber and physical threats to federal entities, state and local governments, and the private sector; and providing appropriate incentives for nonfederal entities to increase information sharing with the federal government.

PDD 63 encouraged the voluntary creation of ISACs and suggested some possible activities, as discussed earlier; however, their actual design and functions were left to the private sector, along with their relationships with the federal government. HSPD-7 continues to encourage the development of information-sharing mechanisms and does not suggest specific ISAC activities.
As a result, the ISACs have been designed to perform their missions based on the unique characteristics and needs of their individual sectors and, although their overall missions are similar, they have different characteristics. They were created to provide an information sharing and analysis capability for members of their respective infrastructure sectors in order to support efforts to mitigate risk and provide effective response to adverse events, including cyber, physical, and natural events. In addition, the ISACs have taken several steps to improve their capabilities and the services they provide to their respective sectors.

The ISACs have developed diverse management structures and operations to meet the requirements of their respective critical infrastructure sectors. To fulfill their missions, they have been established using various business models, diverse funding mechanisms, and multiple communication methods.

Business model—ISACs use different business models to accomplish their missions. Most are managed or operated as private entities, including the Financial Services, Chemical, Electricity Sector, Food, Information Technology, Public Transit, Real Estate, Surface Transportation, Highway, and Water ISACs. Many are established as part of an association that represents a segment of or an entire critical infrastructure sector. For example, the Association of Metropolitan Water Agencies manages the contract for the Water ISAC, and the American Chemistry Council manages and operates the Chemical ISAC through its CHEMTREC. In addition, the North American Electric Reliability Council (NERC), a nonprofit corporation that promotes electric system reliability and security, operates the Electricity Sector ISAC using internal expertise. The legal structure of the ISACs continues to evolve. The Financial Services ISAC has evolved from a limited liability corporation in 1999 to a 501(c)(6) non-stock corporation and is managed by a board of directors composed of representatives of the Financial Services ISAC's members. According to the Financial Services ISAC Board, the change to a 501(c)(6) non-stock corporation was made to simplify the membership agreement and to make the process for obtaining public funding easier. The Energy ISAC also changed from a limited liability corporation to a 501(c)(3) nonprofit charitable organization to eliminate membership barriers. Also, government agencies have partnered with the private sector to operate certain ISACs. For example, DHS's National Communications System/National Coordinating Center (NCC) for Telecommunications sponsors the Telecommunications ISAC, which is a government/industry operational and collaborative body. DHS provides the Telecommunications ISAC's facilities, tools, and systems; the NCC manager; and the 24x7 watch operations staff. The private sector provides representatives who have access to key corporate personnel and other resources. In addition, DHS's United States Fire Administration operates the Emergency Management and Response ISAC. New York State, through its Office of Cyber Security and Critical Infrastructure Coordination, is coordinating the efforts of the Multi-state ISAC. The New York State Office of Cyber Security and Critical Infrastructure Coordination is currently studying best practices and lessons learned to assist in developing a structure that will include representation by member states.
Six of the ISACs included in our study use contractors to perform their day-to-day operations. According to an Association of Metropolitan Water Agencies (AMWA) official, they chose a contractor to operate the Water ISAC because the contractor had the appropriate expertise. In addition, the contractor's personnel had government clearances and the ability to operate a secure communication system and facility. ISACs also use contractors to supplement their operations. For example, a formal contract provides for the daily staffing and performance of the Emergency Management and Response ISAC's tasks. That ISAC chose this model because of federal requirements and the shortage of positions for federal full-time employees at the United States Fire Administration. The Telecommunications ISAC contracted for analysts to operate its 24x7 watch operations under the management of a government official.

ISACs also differ in the nature of the hazards that they consider: cyber, physical, or all hazards (including natural events such as hurricanes). For example, during the August 2003 power outage and Hurricane Isabel in September 2003, the Financial Services ISAC was contacted by DHS to determine the Banking and Finance sector's preparedness and the impact of those events. In contrast, the Multi-state ISAC will remain focused on cyber threats because other state organizations are in place to address physical and natural disaster events.

Funding—ISACs fund their activities using a variety of methods—fee-for-service arrangements, association sponsorship, federal grants, and voluntary or in-kind operations by existing participants. For example, the Financial Services, Information Technology, and Water ISACs use a tiered fee-for-service model for members. This model establishes different tiers of membership based on the level of service provided. These tiers typically include some basic level of service that is provided at minimal or no cost to the member and additional tiers that provide—for a fee—more personalized service and access to additional resources. To help ensure that cost is not a deterrent to membership and that the ISAC's coverage of its sector is extensive, the Financial Services ISAC recently, as part of its next-generation ISAC effort, shifted to a tiered fee-for-service approach. It offers five levels of service that vary in cost—Basic (no charge), Core ($750 per year), Premier ($10,000 per year), Gold ($25,000 per year), and Platinum ($50,000)—for ascending levels of information and analytical capabilities. In addition, there is a partner-level license agreement for select industry associations ($10,000) that provides for distribution of Urgent and Crisis Alerts to eligible association members. Similarly, the Information Technology ISAC recently began working on a tiered basis, with annual fees set at $40,000; $25,000; $5,000; $1,000; and free. The Water ISAC also uses a tiered approach, with membership fees ranging from $750 to $7,500 annually. The Surface Transportation ISAC assesses its Class I railroad members an annual fee of approximately $7,500. Some industry associations that operate ISACs fund them from their own budgets. For example, the North American Electric Reliability Council (NERC) funds the Electricity Sector ISAC, and the American Trucking Association funds the Highway ISAC, from their respective budgets. The American Chemistry Council fully funds the Chemical ISAC through the previously existing Chemical Transportation Emergency Center, known as CHEMTREC.
The Real Estate ISAC is funded by the ten trade associations that are its members. In addition, some ISACs receive funding from the federal government for such purposes as helping to start operations, funding memberships, and providing expanded capabilities. Examples include the following:

The Public Transit ISAC initially received a $1.2 million grant from the Federal Transit Administration (FTA) to begin operations. Members pay no annual fee, and there are no membership requirements from the association that started the ISAC—the American Public Transportation Association.
For FY 2004, the Water ISAC received a $2 million grant from EPA to cover annual operating costs, including the expansion of memberships to smaller utilities.
The Financial Services ISAC received $2 million from the Department of the Treasury to enhance its capabilities, including technology to broaden membership service.
The Highway ISAC received initial funding from DHS's Transportation Security Administration (TSA) to start the ISAC.
The Energy ISAC received federal grants to assist entities within its separate sectors in becoming members.
DHS provides funding for the operation of the Telecommunications ISAC, which is combined with in-kind services provided by the corporate participants. DHS also fully operates the Emergency Management and Response ISAC.

States also provide funding for ISACs. For example, the Multi-state ISAC is funded by and functions as part of the New York State Cyber Security Analysis Center. In addition, the Research and Education Networking ISAC is supported by Indiana University.

Sharing mechanisms—ISACs use various methods to share information with their members, other ISACs, and the federal government. They generally provide their members access to electronic information via e-mail and Web sites. For example, Chemical ISAC members receive e-mail alerts and warnings in addition to the information that is posted to the ISAC's Web site, and the Highway ISAC's Web site provides members with links to IT resources. Some ISACs also provide secure members-only access to information on their Web sites. For example, the Financial Services ISAC's Web site offers multiple capabilities for members at the premier level and above, including, among other things, access to news, white papers, best practices, and contacts. The Energy ISAC offers its members access to a secure Web site. In addition, some ISACs hold conference calls for their members. For example, the Chemical ISAC holds biweekly conference calls with DHS. The Financial Services ISAC also conducts threat intelligence conference calls every two weeks for premier members and above, with input from Science Applications International Corporation (SAIC) and DHS. These calls discuss physical and cyber threats, vulnerabilities, and incidents that have occurred during the previous two weeks, and they provide suggestions on what may be coming. The Financial Services ISAC is capable of organizing crisis conference calls within an hour of the notification of a Crisis Alert, and it hosts regular biweekly threat conference calls for remediation of vulnerabilities (viruses, patches). ISACs also use other methods to communicate, such as pagers, phone calls, and faxes. In addition, the Telecommunications ISAC uses the Critical Infrastructure Warning Information Network (CWIN). The Financial Services ISAC also sponsors twice-yearly members-only conferences to learn and share information.
According to the ISAC Council, its membership possesses an outreach and connectivity capability to approximately 65 percent of the U.S. private critical infrastructure. However, the ISACs use various metrics to gauge their respective sectors' participation in their activities. For example, the Banking and Finance sector has estimated that there are more than 25,000 financial services firms in the United States. Of those, according to the Financial Services ISAC Board, roughly 33 percent receive Urgent and Crisis Alerts through license agreements with sector associations; these firms account for the vast majority of total commercial bank assets, the majority of assets under management, and the majority of securities/investment bank transactions that are handled by the sector, but less than half the sector's insurance assets. According to an American Public Transportation Association official, the Public Transit ISAC covers a little less than 5 percent of the public transit agencies; however, those agencies handle about 60 to 70 percent of the total public transit ridership. Further, according to NERC officials, virtually all members of NERC are members of the Electricity Sector ISAC. As for the Energy ISAC, officials stated that its 80-plus members represent approximately 85 percent of the energy industry. Membership in the Information Technology ISAC also represents 85 to 90 percent of the industry, including assets of Internet equipment hardware, software, and security providers. For other ISACs, such as Chemical and Real Estate, officials stated that it is difficult to determine the percentage of the sector that is included. Table 2 provides a summary of the characteristics of the ISACs that we included in our review. In addition to these ISACs, the Healthcare sector is continuing to organize, including efforts to establish an ISAC. According to DHS officials, the Emergency Law Enforcement ISAC that was formerly operated by the NIPC and transferred to IAIP is not currently staffed and will be considered in current efforts to organize the Emergency Services sector.

As discussed earlier, federal CIP policy establishes the position of sector coordinator for identified critical infrastructure sectors to initiate and build cooperative relationships across an entire infrastructure sector. In most cases, sector coordinators have played an important role in the development of their respective infrastructure sectors' ISACs. In many cases the sector coordinator also manages or operates the ISAC. The North American Electric Reliability Council, as sector coordinator for the electricity segment of the energy sector, operates the Electricity Sector ISAC. The Association of American Railroads, as a sector coordinator for the transportation sector, manages the Surface Transportation ISAC. The Association of Metropolitan Water Agencies, as the sector coordinator for the water and wastewater sector, manages the Water ISAC. In addition, in the case of the Telecommunications ISAC, sector coordinators participate as members of the ISAC. For example, the Cellular Telecommunications and Internet Association, the United States Telecom Association, and the Telecommunications Industry Association are all members of the NCC, which operates the Telecommunications ISAC.
In the case of the Financial Services ISAC, no formal relationship exists between the Banking and Finance Sector Coordinator, the Financial Services Sector Coordinating Council, and the ISAC; however, according to Financial Services ISAC officials, there is a good relationship between them. Other ISACs were created and are operated without a formal sector coordinator in place, including the Chemical, Emergency Management and Response, and Food ISACs.

Eleven ISACs created an ISAC Council to work on various operational, process, and other common issues to effectively analyze and disseminate information and, where possible, to leverage the work of the entire ISAC community. The ISACs initiated this effort without federal sponsorship. Currently, the participating ISACs include Chemical, Electricity, Energy, Financial Services, Information Technology, Public Transit, Surface Transportation, Telecommunications, Highway, and Water. In addition, the Multi-state and Research and Education Networking ISACs are participants. In February 2004, the council issued eight white papers to reflect the collective analysis of its members and to cover a broad set of issues and challenges, including the following:

Government/Private-Sector Relations. Explains the need for DHS to clarify its expectations and to develop roles and responsibilities for the ISACs.
HSPD-7 Issues and Metrics. Describes specific issues related to the private sector that DHS should address when responding to HSPD-7.
Information Sharing and Analysis. Identifies future goals that the ISACs may want to work on achieving, including developing an implementation plan.
Integration of ISACs into Exercises. Discusses the importance of the ISACs and the private infrastructure sectors being involved in government exercises that demonstrate responses to possible incidents.
ISAC Analytical Efforts. Describes the various levels of capabilities that individual ISACs may want to consider supporting, including cyber and physical analysis.
Policy and Framework for the ISAC Community. Identifies common policy areas that need to be addressed to provide effective, efficient, and scalable information sharing among ISACs and between ISACs and the federal government.
Reach of Major ISACs. Describes and identifies the degree of outreach that the ISACs have achieved into the U.S. economy. As of September 2003, the ISAC Council estimated that the ISACs had reached approximately 65 percent of the critical infrastructures they represent.
Vetting and Trust. Discusses the processes for sharing information and the need to develop trust relationships among individual ISAC members and among the various ISACs.

As outlined in HSPD-7 and presented in table 1, DHS and other federal agencies are designated as sector-specific agencies for the critical infrastructure sectors identified. In addition, DHS is responsible for coordinating the overall national effort to enhance the protection of the critical infrastructure and key resources of the United States and has established organizational structures to address its CIP and information-sharing responsibilities. DHS and the sector-specific agencies have undertaken a number of efforts to address the public/private partnership that is called for by federal CIP policy, and they continue to work on their cooperation and interaction with the ISACs and with each other. The functions DHS provides to each ISAC differ, and coordination and levels of participation vary for each sector-specific agency.
However, the department has undertaken a number of efforts with the ISACs and sector-specific agencies to implement the public/private partnership called for by federal CIP policy. DHS has established functions within the department to support the ISACs and other CIP efforts. IAIP, as the DHS component directly responsible for CIP activities, carries out many of these functions. The Infrastructure Coordination Division within IAIP plays a key role in coordinating with the ISACs concerning information sharing. Nonetheless, ISACs may interact with multiple components of the department. For example, the ISACs may discuss cyber issues with the National Cyber Security Division. According to a DHS official, the department does not intend to establish a single point of contact for ISACs within the department. Rather, the department plans to develop policies and procedures to ensure effective coordination and sharing of ISAC contact information among the appropriate DHS components. In addition, the Infrastructure Coordination Division is in the process of staffing analysts who will be responsible for working with each critical infrastructure sector. The analysts would serve as the primary point of contact for the sectors and would address information sharing, coordination, information protection, and other issues raised by the sectors. Further, according to DHS officials, TSA, within the department's Border and Transportation Security Directorate, is working with organizations in the private sector to establish information-sharing relationships. For example, Surface Transportation ISAC analysts stated that they have a good working relationship with TSA, and TSA's Operations Center has office space designated for them. In addition, other DHS actions include the following:

In the summer of 2003, DHS, the Department of Agriculture (USDA), and the Department of Health and Human Services' (HHS) Food and Drug Administration (FDA) initiated efforts to organize the agriculture and food critical infrastructure sectors to raise awareness and improve security efforts. An introductory conference was held with about 100 leading sector corporations and associations to make the business case for participating in CIP efforts, including the importance of enhancing security and sharing information within the sectors.
In December 2003, DHS hosted a 2-day CIP retreat with ISAC representatives, sector coordinators, and high-level DHS and White House Homeland Security Council officials. Participants discussed the needs, roles, and responsibilities of public- and private-sector entities related to information sharing and analysis, incident coordination and response activities, critical infrastructure information requests, and the level of DHS funding. During this retreat, DHS participated in the first meeting of the Operational Clarity and Improvement Task Group, which was formed by the ISAC Council and sector coordinators to address the need for a common conceptual framework and to clarify current and future efforts to protect the nation's critical infrastructure.
In January 2004, DHS's IAIP Directorate held a 2-day conference to describe the information it is analyzing and how that information is used in the partnership with the private sector, and to discuss information sharing between the federal government and the private sector.
In February 2004, the department established the Protected Critical Infrastructure Information (PCII) Program, which enables the private sector to voluntarily submit infrastructure information to the government.
DHS's IAIP Directorate is responsible for receiving submissions, determining whether the information qualifies for protection, and, if it is validated, sharing it with authorized entities for use as specified in the Critical Infrastructure Information Act of 2002. In addition to the efforts listed above, DHS officials stated that they provide funding to some of the ISACs. For example, DHS has agreed to fund tabletop exercises for the Financial Services, Telecommunications, and Electricity Sector ISACs. DHS anticipates that the tabletop exercises will be completed by August 2004. Also, DHS expects to fund a cross-sector tabletop exercise. According to the Financial Services ISAC, funding for its tabletop exercise is $250,000. Another effort that DHS has undertaken is to maintain regular contact with the ISACs. For example, a DHS analyst specializing in the chemical sector stated that the Chemical ISAC is in daily contact with DHS and that it participates in DHS-sponsored biweekly threat meetings. The department also conducts weekly conference calls with several ISACs, other DHS components, and private-sector organizations to discuss threats and viruses.

HSPD-7 designates federal departments and agencies to be sector-specific agencies. These federal agencies, among other things, are to collaborate with the private sector and continue to encourage the development of information-sharing and analysis mechanisms. In addition, sector-specific agencies are to facilitate the sharing of information about physical and cyber threats, vulnerabilities, incidents, potential protective measures, and best practices. Another directive, HSPD-9, establishes a national policy to defend the agriculture and food system against terrorist attacks, major disasters, and other emergencies. Some sector-specific agencies have taken steps to help the ISACs increase their memberships and breadth of impact within their respective sectors and to improve their analytical and communications capabilities.

Environmental Protection Agency (EPA). As noted earlier, EPA is the sector-specific agency for the water sector. According to EPA officials, its Office of Water (Water Security Division), which has been designated as the lead for drinking water and wastewater CIP efforts, is currently revising EPA's Office of Homeland Security's Strategic Plan. In addition, the division is working on a General Strategic Plan to identify measurable goals and objectives and determine how the division will accomplish that work. Further, these officials stated that for fiscal year 2004, EPA issued a $2 million grant to the Water ISAC to enhance its capabilities, for example, to fund 24x7 operations and to increase and support ISAC membership. They also stated that EPA issued $50 million in grants to assist the largest drinking water utilities in conducting vulnerability assessments. There are also state grants to build communications networks for disseminating information, particularly to smaller utility companies. EPA's Water Security Division also makes publicly available various resources related to water security, including, among other things, emergency response guidelines, risk assessment and vulnerability assessment methodologies, and a security product guide. The division has also developed a "Vulnerability Assessment Factsheet" that gives utility companies additional guidance on vulnerability assessments.
Moreover, the Water Security Division holds biweekly conference calls with water associations to promote communications between EPA and the private sector, and it provides EPA publications and other information to the Water ISAC through e-mail distribution lists. In addition, the division has 10 regional offices that work with the states.

Department of the Treasury (Treasury). As the sector-specific agency for the Banking and Finance sector, Treasury's Office of CIP and Compliance Policy is responsible for CIP-related efforts. It has developed policy for its role as a sector-specific agency. The policy includes steps to identify vulnerabilities with the assistance of the institutions, identify actions for remediation, and evaluate progress in reducing vulnerabilities. A major effort by Treasury was having consultants work with the Financial Services ISAC's board of directors to evaluate ways to improve the overall reach and operations of the ISAC. According to Treasury officials, this effort, in part, led to a $2 million grant from Treasury to the ISAC for developing the "next generation" Financial Services ISAC. The one-time grant was earmarked for enhancing the ISAC's capabilities. Regarding interaction with the Financial Services ISAC, Treasury informally shares high-level threat and incident information with the sector through the ISAC. The department also chairs the Financial and Banking Information Infrastructure Committee (FBIIC), a group of regulators who coordinate regulatory efforts to improve the reliability and security of financial systems. This group has taken a number of steps to raise awareness and improve the reliability of the institutions. For example, under the sponsorship of the Federal Deposit Insurance Corporation, there are regional outreach briefings that address why the private sector needs to partner with the federal government to improve its security. Moreover, FBIIC has sponsored 3,600 priority telecommunications circuits for financial institutions under the National Communications System's Telecommunications Service Priority and Government Emergency Telecommunications Service programs.

Department of Energy (DOE). As the sector-specific agency for the Energy and Electricity sectors, DOE's Office of Energy Assurance is responsible for fulfilling the roles of critical infrastructure identification, prioritization, and protection for the energy sector, which includes the production, refining, and distribution of oil and gas, and electric power—except for commercial nuclear power facilities. However, DOE does not address situational threats such as natural disasters or power outages with its ISACs, in part because the ISACs are still determining whether it is their role to address these types of threats. Information sharing with the ISACs is an informal process, and no written policy exists. For example, DOE is collecting threat information related to hackers and computer security, but the department is not disseminating it to the ISACs or to private industry. The Office of Energy Assurance hopes to clarify and expand on this subject in its International Program Plan, which is currently in draft form.

Department of Health and Human Services (HHS). As mentioned earlier, HHS is the sector-specific agency for the public health and healthcare sector, and it shares that role with USDA for the food sector. Currently, there is no ISAC for the healthcare sector. Efforts to organize the healthcare sector have been ongoing.
In July 2002, HHS officials and other government and industry participants were invited to the White House conference center to discuss how they wanted to organize the sector. A Healthcare Sector Coordinating Council (HSCC) was formed, and HHS requested that MITRE, its contractor, lend technical support to the new group as it continues to organize the sector and establish an ISAC. In addition, HHS officials stated that the department provided $500,000 for ISAC efforts in fiscal year 2003 and budgeted $1 million for fiscal year 2004. HHS officials stated that the department would likely be agreeable to continuing to provide funding for an ISAC. They also stated that an ISAC could be operational within the next year. In the meantime, HHS is sharing information with the industry through an e-Community group that MITRE has set up on a secure Web site.

Agriculture and Food were only recently designated as critical infrastructure sectors and, as with the healthcare sector, efforts to organize the sectors are in the beginning stages. HHS has worked with the Food Marketing Institute-operated Food ISAC since it was established, but the department has focused more of its efforts on organizing the agriculture and food sectors. As we mentioned earlier, HHS helped initiate efforts to organize the sector by holding an introductory conference in the summer of 2003 for about 100 leading sector corporations and associations to make the business case for participating in CIP efforts. Recently, the department cohosted a meeting with DHS and USDA in which industry participants were asked how they wished to organize into an infrastructure sector, including addressing the existence and expansion of the current Food ISAC. As a result of this meeting, participants agreed to establish a council of about 10 to 15 private-sector food and agriculture organizations to represent the sector. A federal government council will be created to interact with the private sector and with state and local governments. The government council will initially include several federal government agencies and state and local entities. According to HHS officials, the timeframe for organizing the sector and setting up an expanded Food ISAC has not been determined, but officials anticipated this occurring by the fall of 2004.

Department of Agriculture (USDA). As mentioned above, USDA shares with HHS the sector-specific agency designation for the food sector. USDA participated in the conference held in the summer of 2003 and in the recent meeting with the industry. In addition to those events, USDA's Homeland Security Council Working Group is involved in enhancing the agriculture sector's information-sharing and analysis efforts, which may include replacing or improving the current Food ISAC. Another USDA effort uses training to reach out to the industry and raise awareness. For example, USDA is providing training to private-sector veterinarians and animal hospitals on recognizing possible signs of bioterrorism activity. Although no longer a sector-specific agency for the transportation sector, DOT, through its Federal Transit Administration, has provided a grant to the Public Transit ISAC to provide memberships at no cost.

Our discussions with the ISACs and the series of ISAC Council white papers confirmed that a number of challenges remain to the successful establishment and operation of ISACs and their partnership with DHS and other federal agencies.
Highlighted below are some of the more significant challenges identified, along with any successful ISAC practices and related actions that have been taken or planned by DHS or others.

Many of the ISACs report that they represent significant percentages of their industry sectors; at least one—the Electricity Sector ISAC—reports participation approaching 100 percent. The ISAC Council estimates that the overall ISAC community possesses an outreach and connectivity capability to reach approximately 65 percent of the private critical infrastructure. The Council also recognizes the challenge of increasing sector participation, particularly to reach smaller entities that need security support but have insufficient resources to actively contribute and pay for such support. Officials in DHS's IAIP acknowledge the importance of reaching out to critical infrastructure entities and are considering alternatives to address this issue.

The Financial Services ISAC provides a notable example of efforts to respond to this challenge. Specifically, officials for this organization reported that, as of March 2003, its members represented a large portion of the sector's assets but only 0.2 percent of the number of entities; small financial services firms and insurance companies, in particular, were underrepresented. To increase its industry membership, this organization established its next-generation ISAC, which provides different levels of service—ranging from a free level of basic service to fees for value-added services—to help ensure that no entity is excluded because of cost. Further, it has set goals of delivering urgent and crisis alerts to 80 percent of the Banking and Finance sector by the end of 2004 and to 99 percent of the sector by the end of 2005. To help achieve these goals, the Financial Services ISAC has several other initiatives under way, including obtaining the commitment of the Financial Services Sector Coordinating Council (FSSCC—the sector coordinator and primary marketing arm for this ISAC) to drive the marketing campaign to sign up its members for the appropriate tier of service; encouraging membership through outreach programs sponsored by the Federal Deposit Insurance Corporation and the FSSCC in 24 cities; and working with individual sector regulators to include in their audit checklists whether a firm is a member of the ISAC. The Financial Services ISAC believes that its goals are attainable and points to its industry coverage, which it says had already increased to 30 percent in March 2004—only three months after its new membership approach began in December 2003.

Other issues identified that were related to increasing sector participation and reach included the following:

Officials at two of the ISACs we contacted considered it important that the federal government voice its support for the ISACs as the principal tool for communicating threats.
The ISAC Council has suggested that a General Business ISAC may need to be established to provide baseline security information to those general businesses that are not currently supported by an ISAC.
Many of the industries that make up our nation's critical infrastructures are international in scope. Events that happen to a private infrastructure or public-sector organization in another country can have a direct effect in the United States, just as events here could have effects in other countries. Therefore, an ISAC may need to increase its reach to include the reporting and trust of international companies and organizations.
A key element in both establishing an ISAC and developing an effective public/private partnership for CIP is to build trusted relationships and processes. From the ISAC perspective, sharing information requires a trusted relationship between the ISAC and its membership, such that companies and organizations know their sensitive data is protected from others, including competitors and regulatory agencies. According to the ISAC Council, the ISACs believe that they provide a trusted information-sharing and analysis mechanism for private industry in that they manage their membership; scrutinize, establish, and authenticate member identities; and ensure the security of their membership, as well as the security of their own data and processes. Other steps taken by ISACs to safeguard private companies' information, which may help to foster trusted relationships, included sharing information with other entities only when given permission to do so by the reporting entity and providing other protections, such as distributing sensitive information to subscribers through encrypted e-mail and a secure Web portal. Building trusted relationships between government agencies and the ISACs is also important to facilitating information sharing. In some cases, establishing such relationships may be difficult because sector-specific agencies may also have a regulatory role; for example, the Environmental Protection Agency has such a role for the Water sector, and HHS's Food and Drug Administration has it for portions of the Food and Agriculture sectors.

Sharing information between the federal government and the private sector on incidents, threats, and vulnerabilities continues to be a challenge. As we reported last year, much of the reluctance by ISACs to share information has centered on concerns over potential government release of that information under the Freedom of Information Act, antitrust issues resulting from information sharing within an industry, and liability for the entity that discloses the information. However, our recent discussions with the ISACs—as well as the consensus of the ISAC Council—identified additional factors that may affect information sharing by both the ISACs and the government. The ISACs we contacted all described efforts to work with their sector-specific agencies, as well as with other federal agencies, ISACs, and organizations. For example, the Public Transit ISAC said that it provides a critical link between the transit industry, DOT, TSA, DHS, and other ISACs for critical infrastructures and that it collects, analyzes, and distributes cyber and physical threat information from a variety of sources, including law enforcement, government operations centers, the intelligence community, the U.S. military, academia, IT vendors, the International Computer Emergency Response Community, and others. Most ISACs reported that they believed they were providing appropriate information to the government but, while noting improvements, still had concerns with the information being provided to them by DHS and/or their sector-specific agencies. These concerns included the limited quantity of information and the need for more specific, timely, and actionable information. In particular, one ISAC noted that it receives information from DHS simultaneously with or even after news reports, and that sometimes the news reports provide more details. In its recent white papers, the ISAC Council also has identified a number of barriers to information sharing between the private sector and government.
These included the sensitivity of the information (such as law enforcement information), legal limits on disclosure (such as Privacy Act limitations on disclosure of personally identifiable information), and contractual and business limits on how and when information is disclosed (e.g., the Financial Services ISAC does not allow any governmental or law enforcement access to its database). But the Council also emphasized that perhaps the greatest barriers to information sharing stem from practical and business considerations in that, although important, the benefits of sharing information are often difficult to discern, while the risks and costs of sharing are direct and foreseeable. Thus, to make information sharing real, it is essential to lower the practical risks of sharing information through both technical means and policies, and to develop internal systems that are capable of supporting operational requirements without interfering with core business. Consequently, the technical means used must be simple, inexpensive, secure, and easily built into business processes. According to the Council, the policy framework must reduce perceived risks and build trust among participants. Further, the Council identified three general areas that must be addressed in policy for the information-sharing network to assure network participants that there is good reason to participate and that their information will be dealt with appropriately. These areas concern policies related to what information is shared within ISACs, across ISACs, and to and from government; actions to be performed at each node in the information-sharing network, including the kinds of analysis to be performed; and the protection of shared information and analysis in terms of both limitations on disclosure and use and information security controls.

The white papers also described the processes that are believed to be needed to ensure that critical infrastructure and/or security information is made available to the appropriate people with reasonable assurance that it cannot be used for malicious purposes or indiscriminately redistributed so as to become essentially public information. These processes and other information-sharing considerations and tasks identified by the Council included the following:

The ISAC information-sharing process needs to recognize two types of information categories—classified and sensitive but unclassified. However, the majority of information sharing must focus on the unclassified "actionable element" that points the recipient to a problem and to remediation action.
Each ISAC is responsible for initially validating the trust relationship with its member organizations and for periodically reassessing that trust relationship. The security structure must understand and continually be in dialogue with its vetted members and must manage this trusted relationship.
Each individual who receives shared information must have a background check completed by and at a level of comprehensiveness specified by the sponsoring organization.
Consequences and remediation must be developed and understood to address situations in which information is disclosed improperly—either intentionally or unintentionally.
The government's data and information requirements for the sectors and the sectors' requirements for the government need to be defined.
The government should establish a standing and formal trusted information-sharing and analysis process with the ISACs and sector coordinators as the trusted nodes for this dissemination. This body should be brought in at the beginning of any effort, and DHS products should be released to this group for primary and priority dissemination to their respective sectors. Building this trusted information-sharing and analysis process is also dependent on the protections the government provides for the sensitive data shared by ISACs and private companies. As discussed earlier, DHS recently issued the interim rule for submitting protected critical infrastructure information, which provides restrictions on the use of this information and exempts it from release under the Freedom of Information Act. However, it remains to be seen whether these protections will encourage greater private-sector trust and information sharing with the federal government. Federal CIP law and policies, including the Homeland Security Act of 2002, the National Strategy to Secure Cyberspace, and HSPD-7, establish CIP responsibilities for federal agencies, including DHS and others identified as sector-specific agencies for the critical infrastructure sectors. However, the ISACs believe that the roles of the various government and private- sector entities involved in protecting critical infrastructures must continue to be identified and defined. In particular, officials for several ISACs wanted a better definition of the role of DHS with respect to them. Further, officials for two ISACs thought other agencies might more appropriately be their sector-specific agencies. Specifically, the Energy ISAC would like its sector-specific agency to be DHS and not the Department of Energy, which is also the regulatory agency for this sector. On the other hand, the Highway ISAC thought its sector-specific agency should be the Department of Transportation—the regulatory agency for its sector—and not DHS. The ISAC Council also identified the need for DHS to establish the goals of its directorates and the relationships of these directorates with the private sector. The Council also wants clarification of the roles of other federal agencies, state agencies, and other entities—such as the National Infrastructure Advisory Council. Ten of the ISACs we contacted, plus the Healthcare sector, emphasized the importance of government funding for purposes including creating the ISAC, supporting operations, increasing membership, developing metrics, and providing for additional capabilities. According to ISAC officials, some have already received federal funding: the Public Transit ISAC initially received a $1.2 million grant from the Federal Transit Administration to begin operations, and the Water ISAC received a $2 million grant from EPA for fiscal year 2004 to cover annual operating costs and expand memberships to smaller utilities. In addition, the Financial Services ISAC received $2 million from the Department of the Treasury to help establish its next-generation ISAC and its new capabilities, including adding information about physical threats to the cyber threat information it disseminates. Despite such instances, funding continues to be an issue, even for those that have already received government funds. For example, the Healthcare Sector Coordinating Council, which is the sector coordinator for the healthcare industry, is currently looking to the federal government to help fund the creation of a Healthcare ISAC. 
Also, officials at the Public Transit ISAC noted that funding is an ongoing issue that is being pursued with DHS. Officials at the Financial Services ISAC, who note that the ISAC's goal is to become totally self-funded through membership fees by 2005, are also seeking additional government funding for other projects. The ISAC Council has also suggested that baseline funding is needed to support core ISAC functionalities and analytical efforts within each sector. The Council's suggestions include that the government procure a bulk license for the ISACs to receive data directly from some vulnerability and threat sources, along with access to analytical or modeling tools, and that the government fund an ISAC analyst to work at DHS to support analysis of sector-specific information or intelligence requirements. According to the Financial Services ISAC, DHS has agreed to fund tabletop exercises for some ISACs. For example, according to DHS officials, exercises are occurring this week involving the Banking and Finance sector, and exercises for other sectors are currently being explored. In addition, energy sector-related exercises were held earlier in the year. DHS officials also stated that funding considerations for the critical infrastructure sectors and the ISACs would be based on their needs.

In our discussions with ISAC officials, several, such as officials from the Surface Transportation and the Telecommunications ISACs, highlighted their analysis capabilities and, in particular, their analysts' sector-specific knowledge and expertise and ability to work with DHS and other federal agencies. The ISAC Council also emphasized that analysis by sector-specific subject matter experts is a critical capability for the ISACs, intended to help identify and categorize threats and vulnerabilities and then identify emerging trends before they can affect critical infrastructures. Sector-specific analysis can add critical value to the information being disseminated, with products such as 24/7 immediate, sector-specific physical, cyber, and all-threat incident reporting and warning; sector-specific information and intelligence requirements; forecasts of and mitigation strategies for emerging threats; and analyses of cross-sector interdependencies, vulnerabilities, and threats. The Council also emphasized that although government analytical efforts are critical, private-sector analytical efforts should not be overlooked and must be integrated into the federal processes for a more complete understanding. The private sector understands its processes, assets, and operations best and can be relied upon to provide the required private-sector subject matter expertise. In a few cases, the integration of private-sector analytical capabilities with DHS does occur. For example, the Telecommunications ISAC, as part of DHS's National Communications System, has watch standers who are part of the DHS operations center and who share information, when the information owner allows it and when it is appropriate and relevant, with the other analysts. In addition, a Surface Transportation ISAC analyst also participates in the DHS operations center on a part-time basis to offer expertise and connection to experts in the field in order to clarify the impact of possible threats.
The ISAC Council highlighted the need for ISAC participation in the national-level homeland security exercises that are conducted by the federal government, such as DHS’s May 2003 national terrorism exercise (TOPOFF 2), which was designed to identify vulnerabilities in the nation’s domestic incident management capability. However, according to the Council, there has been little or no integration of active private industry and infrastructure into such exercises. For example, private industry participation in TOPOFF 2 was simulated. The Council believes that with such participation, both national and private-sector goals could be established during the creation of the exercise and then addressed during the exercise. The Council did identify examples where the private sector is being included in exercises, such as efforts by the Electronics Crime Unit of the U.S. Secret Service to reach out to the private sector and support tabletop exercises to address the security of private infrastructures. Further, according to a DHS official, the department has agreed to fund tabletop exercises for members of several ISACs, including Financial Services, Chemical, and Electricity, as well as a cross-sector tabletop exercise.

Additional challenges identified by our work and/or emphasized by the ISAC Council included the following.

Obtaining Security Clearances to Share Classified Information. As we reported last year, several ISACs identified obtaining security clearances as a challenge to government information sharing with the ISACs. Seven of the 15 ISACs with which we discussed this issue indicated either that some of their security clearances were pending or that additional clearances would be needed.

Identifying Sector Interdependencies. Federal CIP policy has emphasized the need to identify and understand interdependencies between infrastructure sectors. The ISAC Council also highlighted the importance of identifying interdependencies and emphasized that they require partnerships between the sectors and the government and could only be modeled, simulated, or “practiced” once the individual sectors’ dynamics are understood sufficiently. The current short-term focus for the ISACs is to review the work done by the government and the sectors regarding interdependencies. Similarly, a DHS official acknowledged the importance of identifying interdependencies but noted that it is a longer-term issue.

Establishing Communications Networks. Another issue raised through the ISAC Council’s white papers was the need for a government-provided communications network for secure information sharing and analysis. Specifically, the Council suggested that although additional functionality would be needed to satisfy the ISACs’ requirements, DHS’s Critical Infrastructure Warning Information Network (CWIN) could be used as an interim, first-phase communications capability. According to the Council, some of the ISACs are conducting routine communications checks at the analytical level in anticipation of expanded use of CWIN. When we discussed this issue with a DHS official, he said that ISAC access to a secure communications network would be provided as part of the planned Homeland Security Data Network (HSDN). DHS recently announced a contract to initiate the implementation of HSDN, which is to be a private, certified, and accredited network that provides DHS officials with a modern IT infrastructure for securely communicating classified information. 
According to DHS, this network will be designed to be scalable in order to respond to increasing demands for the secure transmission of classified information among government, industry, and academia to help defend against terrorist attacks.

At the time of our study, the relationships and interactions among DHS, the ISACs, sector coordinators, and other sector-specific agencies were still evolving, and DHS had not yet developed any documented policies or procedures. As we discussed earlier, HSPD-7 requires the Secretary of Homeland Security to establish uniform policies for integrating federal infrastructure protection and risk management activities within and across sectors. According to a DHS official, the department is developing a plan (referred to as a “roadmap”) that documents the current information-sharing relationships among DHS, the ISACs, and other agencies; goals for improving those information-sharing relationships; and methods for measuring the progress in the improvement. According to this official, the plan is to define the roles and responsibilities of DHS, the ISACs, and other entities, including a potential overlap of ISAC-related responsibilities between IAIP and the Transportation Security Administration. Further, the official indicated that, in developing the plan, DHS would consider issues raised by the ISAC Council.

In summary, since first encouraged by federal CIP policy almost 6 years ago, private-sector ISACs have developed and evolved into an important facet of our nation’s efforts to protect its critical infrastructures. They face challenges in increasing their sector representation and, for some, ensuring their long-term viability. But they have developed important trust relationships with and between their sectors—trust relationships that the federal government could take advantage of to help establish a strong public/private partnership. Federal agencies have provided assistance to help establish the ISACs, and more may be needed. However, at this time, the ISACs and other stakeholders, including sector-specific agencies and sector coordinators, would benefit from an overall strategy, as well as specific guidance, that clearly describes their roles, responsibilities, relationships, and expectations. DHS is beginning to develop a strategy, and in doing so, it will be important to consider input from all stakeholders to help ensure that a comprehensive and trusted information-sharing process is established.

Messrs. Chairmen, this concludes my statement. I would be happy to answer any questions that you or members of the subcommittees may have at this time.

If you should have any questions about this testimony, please contact me at (202) 512-3317 or Ben Ritt, Assistant Director, at (202) 512-6443. We can also be reached by e-mail at daceyr@gao.gov and rittw@gao.gov, respectively. Other individuals making key contributions to this testimony included William Cook, Joanne Fiorino, Michael Gilmore, Barbarol James, Lori Martinez, and Kevin Secrest.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Critical infrastructure protection (CIP) activities that are called for in federal policy and law are intended to enhance the security of the cyber and physical public and private infrastructures that are essential to our nation's security, economic security, and public health and safety. As our reliance on these infrastructures increases, so do the potential threats and attacks that could disrupt critical systems and operations. Effective information-sharing partnerships between industry sectors and government can contribute to CIP efforts. Federal policy has encouraged the voluntary creation of Information Sharing and Analysis Centers (ISACs) to facilitate the private sector's participation in CIP by serving as mechanisms for gathering and analyzing information and sharing it among the infrastructure sectors and between the private sector and government. This testimony discusses the management and operational structures used by ISACs, federal efforts to interact with and support the ISACs, and challenges to and successful practices for ISACs' establishment, operation, and partnerships with the federal government.

Federal awareness of the importance of securing the nation's critical infrastructures—and the federal government's strategy to encourage cooperative efforts among state and local governments and the private sector to protect these infrastructures—have been evolving since the mid-1990s. Federal policy continues to emphasize the importance of the ISACs and their information-sharing functions. In addition, federal policy established specific responsibilities for the Department of Homeland Security (DHS) and other federal agencies involved with the private sector in CIP. The ISACs themselves, although they have similar missions, were developed to serve the unique needs of the sectors they represent, and they operate under different business models and funding mechanisms.

According to ISAC representatives and a council that represents many of them, a number of challenges to their successful establishment, operation, and partnership with DHS and other federal agencies remain. These challenges include increasing the percentage of entities within each sector that are members of its ISAC; building trusted relationships and processes to facilitate information sharing; overcoming barriers to information sharing; clarifying the roles and responsibilities of the various government and private sector entities that are involved in protecting critical infrastructures; and funding ISAC operations and activities. According to a DHS official, these issues are being considered, and the department is developing a plan that will document the current information-sharing relationships among DHS, the ISACs, and other agencies; goals for improving those information-sharing relationships; and methods for measuring progress toward these goals.
MDA’s BMDS is being designed to counter ballistic missiles of all ranges—short, medium, intermediate, and long. Since ballistic missiles have different ranges, speeds, sizes, and performance characteristics, MDA is employing an integrated and layered architecture to provide multiple opportunities to destroy ballistic missiles before they can reach their targets. The system’s architecture includes networked space-based sensors as well as ground- and sea-based radars, ground- and sea-based interceptor missiles, and a command and control, battle management, and communications network providing the warfighter with the necessary communication links to the sensors and interceptor missiles.

A possible engagement scenario to defend against an intercontinental ballistic missile would occur as follows: Infrared sensors aboard early-warning satellites detect the hot plume of a missile launch and alert the command authority of a possible attack. Upon receiving the alert, land- or sea-based radars are directed to track the various objects released from the missile and, if so designed, to identify the warhead from among spent rocket motors, countermeasures, and debris. When the trajectory of the missile’s warhead has been adequately established, an interceptor—consisting of a kill vehicle mounted atop a booster—is launched to engage the threat. The interceptor boosts itself toward a predicted intercept point and releases the kill vehicle. The kill vehicle uses its onboard sensors and divert thrusters to detect, identify, and steer itself into the warhead. With a combined closing speed of approximately 10 kilometers per second (22,000 miles per hour), the warhead is destroyed above the atmosphere through a "hit to kill" collision with the kill vehicle. Some interceptors use sensors to steer themselves into the inbound ballistic missile. Inside the atmosphere, weapon systems kill the ballistic missile using a range of mechanisms, such as direct collision between the interceptor missile and the inbound ballistic missile, or using the combined effects of a blast fragmentation warhead (heat, pressure, and shrapnel) in cases where a direct hit does not occur.

In the August 2009 BMDS Accountability Report, MDA presents BMDS performance from the perspectives of homeland defense and regional/theater capabilities. Homeland defense uses the capabilities of Ground-based Interceptors (GBI), Aegis BMD assets, and BMDS radars against the threat from intercontinental and intermediate-range ballistic missiles, while regional and theater defense use Aegis BMD Standard Missile-3 (SM-3) and THAAD interceptors with mobile radars against threats from medium-range and short-range ballistic missiles. Table 1 provides a brief description of eight BMDS elements that are currently under development by MDA.

The new administration proposed significant changes to the BMDS program in 2009, including program terminations and changes to some of the BMDS elements we reported on in the past, as well as changes to plans for missile defense in Europe. Administration proposals culminated in reductions of approximately $1 billion from MDA’s budget request for fiscal year 2010. In the spring of 2009, the Secretary of Defense recommended termination of the Multiple Kill Vehicle element. 
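The two closing-speed figures quoted in the engagement scenario above are consistent with each other; a simple unit conversion (our arithmetic check, not a figure taken from MDA documentation) shows that

\[
10~\text{km/s} \times 3{,}600~\text{s/hr} = 36{,}000~\text{km/hr} \approx \frac{36{,}000}{1.609}~\text{mi/hr} \approx 22{,}400~\text{mi/hr},
\]

which rounds to the approximately 22,000 miles per hour cited in the report.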
Originally designed as an optional warhead for all midcourse interceptors, MDA terminated the Multiple Kill Vehicle element because of feasibility issues raised about this technology, which was still in its early stages of development, as well as a decision to refocus MDA’s resources on new technologies aimed at early intercept of ballistic missiles. MDA also terminated its Kinetic Energy Interceptor element because of technical issues, its incompatibility with operational infrastructures, and delays during development. It was originally designed as a mobile land-based missile defense system to destroy medium, intermediate, and intercontinental ballistic missiles during the boost and midcourse phases of their flight. The ABL program was also significantly affected by the Secretary of Defense’s proposal to designate it as a technology program and cancel the plans for the purchase of a second aircraft that would have provided an operational capability. In addition, MDA requested increased funding for the Aegis BMD and THAAD programs for fiscal year 2010 following administration recommendations. MDA plans to use these funds to move both elements toward meeting full funding policies, to increase production for Aegis BMD and THAAD interceptors, to increase the interceptor production rate and number of THAAD batteries, and to increase the number of Aegis BMD ships. MDA is also responding to the new administration’s shift in its approach to European missile defense. In September 2009, DOD altered its approach to European defense, which originally focused on GBIs from the GMD element and a large fixed radar as well as transportable X-Band radars, and is now focusing on providing defenses against long-range threats to the United States and short-, medium-, and intermediate-range Iranian threats to Europe. This new “Phased, Adaptive Approach” consists primarily of Aegis BMD sea-based and land-based systems and interceptors, as well as various sensors to be deployed over time as the various capabilities are matured. According to DOD, this new approach offers a number of improvements over the previous architecture, such as providing missile defenses sooner with greater flexibility to meet evolving threats, providing more opportunities to involve close allies, and delivering greater capability to defend against a large number of threat missiles. In addition, during fiscal year 2009, MDA transitioned to a new Director and the agency’s development effort was rebalanced to focus more on regional/theater missile defense. This rebalancing included shifting technology development efforts from boost-phase intercept technologies to early intercept technologies (or ascent phase). MDA officials state that because early intercept technology initiates intercept as early as possible to execute a shoot-look-shoot tactic and defeat a threat before countermeasures are deployed, it will ultimately reduce the number of interceptors required to defeat a raid of threat missiles and save on the costs of maintaining a significant number of expensive interceptors to destroy advanced countermeasures in a later phase of a threat missile’s flight. According to the MDA Director, this technology will force the deployment of countermeasures early in flight where they are less effective. In June 2009, MDA also began to change its acquisition management strategy. 
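The interceptor savings that MDA attributes to early intercept and the shoot-look-shoot tactic described above can be illustrated with a simple expected-value sketch. This is an illustrative calculation under assumed, independent single-shot kill probabilities, not an MDA analysis: suppose each interceptor destroys the threat with probability p, and compare a two-interceptor salvo with a shoot-look-shoot engagement in which the second interceptor is fired only if the first is observed to miss. Both achieve the same overall kill probability, but the expected number of interceptors expended differs:

\[
P_{\text{kill}} = 1 - (1 - p)^2, \qquad
E[\text{interceptors}]_{\text{salvo}} = 2, \qquad
E[\text{interceptors}]_{\text{shoot-look-shoot}} = p \cdot 1 + (1 - p) \cdot 2 = 2 - p.
\]

For p = 0.8, for example, shoot-look-shoot expends an average of 1.2 interceptors per threat rather than 2 for the same kill probability, which is the sense in which intercepting early enough to observe the outcome of the first shot before committing another interceptor can reduce the required interceptor inventory.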
From its inception in 2002 to December 2007, MDA managed the acquisition of missile defense capabilities by organizing the development effort into 2-year increments known as blocks. Each block was intended to provide the BMDS with capabilities that enhanced the development and overall performance of the system. The first 2-year block—Block 2004—fielded a limited initial capability that included early versions of the GMD, Aegis BMD, Patriot Advanced Capability-3, and C2BMC elements as well as various sensors. The agency’s second 2-year block—Block 2006—culminated on December 31, 2007, and fielded additional BMDS assets. On December 7, 2007—according to MDA, in response to recommendations from GAO—MDA’s Director announced a new acquisition management strategy to better communicate its plans and goals to Congress. The agency’s new approach was based on fielding capabilities that address particular threats as opposed to a biennial time period. This approach divided fielding capabilities into five blocks. The capabilities-based five-block approach included several positive changes, including a commitment by DOD to establish total acquisition costs and unit costs for selected block assets, the inclusion in a block of only those elements or components to be fielded during that block, and the abandonment of the practice of deferring work from one block to another. MDA was still transitioning to this new capabilities-based block approach when the MDA Director terminated it in June 2009—a year and a half after it was created. According to MDA, the agency terminated the capability-based block structure to address the explanatory statement accompanying the Department of Defense Appropriations Act, 2009, which stated that MDA’s “justification materials should no longer be presented in the Block format, but rather by fiscal year for each activity within the program element.” The agency has decided that it will manage the BMDS as a single integrated program and is in the process of determining how it will implement changes to its acquisition management strategy.

In fiscal year 2009, MDA achieved several noteworthy accomplishments. For example, MDA revised its testing approach to better align tests with modeling and simulation needs and is undertaking a new targets development effort to resolve long-standing problems supplying sufficient and reliable targets. The agency also demonstrated increased levels of performance for some of its BMDS elements through flight and ground testing. MDA testing achievements during the year indicate an increased level of interoperability among multiple elements, improving both system-level performance and advancing the validation of BMDS models and simulations needed to predict performance. In addition, the agency delivered most of the assets as planned by the end of fiscal year 2009.

In fiscal year 2009, MDA revised its testing approach in response to GAO and DOD concerns. In March 2009 we reported that MDA’s Integrated Master Test Plan—its test baseline—was not effective for management and oversight because it was revised frequently, extended only through the following fiscal year, and was not well integrated with other key aspects of testing, such as target acquisitions. Most of the annual revisions to the test baseline occurred because MDA changed the substance of tests, changed the timing of tests, or added tests to the baseline. In other instances, MDA canceled planned tests, which also affected the test baseline. 
In addition, the BMDS Operational Test Agency identified several limitations in the previous BMDS test program, including unaccredited models and simulations, flight test artificialities, and inadequate modeling of some environmental conditions. Members of Congress also expressed concern with MDA’s test approach. For example, in the fiscal year 2008 National Defense Authorization Act conference report, conferees noted that MDA failed to ensure an adequate testing program and that its test and targets program needed to be managed in a way that fully supported high-priority near-term programs. MDA extensively revised its test plan in fiscal year 2009 to address many of these concerns. For example, the new Integrated Master Test Plan bases test scenarios on modeling and simulation needs and extends the test baseline through 2015, which allows for better estimation of target needs, range requirements, and test assets. As part of the revised test plan, MDA scheduled dedicated periods of developmental and operational testing, during which the system configuration will remain fixed to allow the warfighter to carry out training, tactics, techniques, and procedures for developmental and operational evaluation. Additionally, the new test plan will provide sufficient time after test events to conduct a full post-test analysis. These improvements are important because BMDS performance cannot be fully assessed until models and simulations are accredited and validated, and the test program cannot be executed without meeting its target needs.

In July 2009, MDA also initiated a new target acquisition strategy to address recurring target performance issues and increases in target costs. According to the Director of MDA, this new target approach is based on streamlining a set of classes of targets to increase quality control of an inventory of identical targets that represent general threat characteristics to account for intelligence uncertainties. He further stated that a goal of the new target acquisition strategy is to minimize the number of targets needed to emulate specific threats and establish backup targets, which will be available in 2012. Targets have been a recurring cause of flight test delays, cancellations, and failures since 2006. In the past, we reported that the THAAD program was unable to achieve its first intercept attempt in 2006 because the target malfunctioned. The program also experienced target anomalies in 2007 that precluded the completion of two radar characterization tests. During the same year, the GMD program experienced long-term effects on its flight test schedule when it was unable to achieve all primary test objectives because of a target failure. We also reported in March 2009 that the Aegis BMD program was unable to conduct an intercept because the target was not available. In addition, in its January 2009 report to the defense committees, MDA acknowledged target availability and reliability problems and reported its plan for a new target acquisition strategy to address these issues and improve costs, quality, and reliability. In revising its target acquisition strategy, MDA solicited input from industry in an effort to better understand possible new target solutions that might be available to improve cost, quality, and performance. 
To leverage industry capability and promote a more competitive contract environment, MDA decided to use multiple contractors with multiple contracts instead of a single prime contract, increasing its flexibility to respond to changing program requirements. The agency plans to award a new contract for each class of target needed to execute the BMDS test plan. MDA will begin making decisions on contract awards and new target designs over the next year. According to program officials, MDA originally planned to issue five requests for proposals for new contracts in fiscal year 2010 and one additional request in fiscal year 2011. However, to reflect changes in the test plan and subsequent changes to the acquisition strategy, the program now plans to issue two requests for proposals in fiscal year 2010 and one in fiscal year 2011. The Targets and Countermeasures program anticipates that the first targets will be delivered under the new strategy in fiscal year 2012, and the first intercontinental ballistic missile target is expected to be delivered in fiscal year 2013. MDA also made progress in several ongoing target development efforts that could enhance the ability to test the BMDS. During fiscal year 2009, the Targets and Countermeasures program made progress in developing four new targets—the LV-2 target, Aegis Readiness Assessment Vehicle-C target, a new medium range target, and the Extended-Long Range Air Launched Target. Each target adds a new capability to MDA’s target portfolio. For example, the LV-2 target provides the potential for significantly expanding the intermediate range payload and range performance over current inventory capabilities. The Aegis Readiness Assessment Vehicle-C target provides a new, low-cost capability as it is designed to contribute additional separating and maneuvering capabilities in short- and medium-range targets. MDA’s new Medium Range Target provides improved kill assessment capability at this range. In addition, the new Extended-Long Range Air Launched Target is a medium-range target that provides a greater range capability than previous air-launched targets and adds the ability to deploy associated objects—a capability not currently available in other similar target types. MDA expects each of these targets to be ready for use in flight tests in fiscal year 2010. In fiscal year 2009, MDA conducted several ground tests and flight tests demonstrating improved performance in several areas of the BMDS including element-level functionality, theater and regional performance, and interoperability. Table 2 identifies key test events achieved in fiscal year 2009 for each element. In June 2009, the ABL program successfully completed its first two tracking tests against boosting missile targets. These tests marked the first time ABL demonstrated a complete low-power engagement sequence against a boosting target. In addition, the ABL was able to demonstrate its ability to fire its high energy laser in an airborne environment during a flight test in August 2009. During this test, the laser was fired into a calorimeter on board the aircraft to capture the laser’s energy and measure performance characteristics of the laser’s beam. The Aegis BMD program also demonstrated increased levels of element performance through Navy fleet exercises and developmental tests. 
For example, Aegis BMD demonstrated, for the first time, its capability to destroy a ballistic missile in the terminal phase of flight using Standard Missile-2 Block IV missiles while simultaneously conducting a mission using the Standard Missile-2 Block IIIA missile against a cruise missile target. The program also conducted successful developmental component tests for the next generation of the Aegis BMD interceptor—the SM-3 Block IB. Developmental testing will continue into 2010. In addition, the program successfully demonstrated that the latest software release of the Aegis BMD system had the capability to support the program’s next generation interceptor during simulated SM-3 Block IB engagements.

The C2BMC program also satisfied multiple test objectives and increased its capability in fiscal year 2009. The program participated in many system-level tests during the year that enabled it to demonstrate multiple capabilities, including improved situational awareness and sensor management. During testing, C2BMC used multi-sensor correlation and provided integrated situational awareness for weapons release decisions.

GMD, for the first time, used information from multiple sensors to develop and successfully conduct an intercept of a live target during a flight test. In December 2008, target information from four different sensors and satellite data were input into the GMD fire control system to develop an intercept plan. The involvement of multiple sensors provides better information to develop an engagement. In addition, GMD made progress in addressing BMDS Operational Test Agency concerns regarding the formatting, tracking, and accounting of messages from GMD sensors. For example, MDA added test instrumentation to collect data for regional/theater test communications. However, the agency still faces ongoing challenges assessing the timeliness of message exchanges at the strategic level. According to BMDS Operational Test Agency officials, they continue to work with MDA to resolve this issue. Key to the integration and functionality of the BMDS are communications and message traffic. The timely delivery of messages from sensors to weapon systems is key to supporting decisions and achieving effective intercepts. In March 2009, we reported that these data management problems prevented the analysis of the timeliness of message data, according to BMDS Operational Test Agency officials.

The STSS program successfully completed the ground testing and integration of components to support the launch of its two demonstration satellites in September 2009. These satellites will use onboard infrared sensors to detect, track, and discriminate ballistic missiles throughout their trajectories. THAAD also demonstrated improved element-level functionality when it successfully launched a salvo of two THAAD interceptors to intercept a separating target inside the earth’s atmosphere. The primary interceptor hit the target and the second interceptor hit the largest remaining piece of target debris seconds later.

Regional and theater BMDS assets—Aegis BMD and THAAD—succeeded in demonstrating improved interoperability in fiscal year 2009. For example, during a THAAD intercept test, Aegis BMD tracked a target and provided the information to THAAD’s fire control. As a result, the missile was successfully engaged by THAAD. 
Additionally, during this test, the forward-based radar supporting THAAD was also able to discriminate the threat reentry vehicle from other objects and provide the information to support the engagement. According to program officials, the THAAD element reported that C2BMC provided accurate and timely status information for the BMDS as well as situational awareness of the test to the warfighter. MDA also demonstrated interoperability for BMDS elements during several ground tests in fiscal year 2009. For example, during one ground test—GTD-03—MDA successfully demonstrated simultaneous theater and regional capabilities using operational BMDS hardware and actual communications between them. In addition, MDA demonstrated simultaneous BMDS capabilities to conduct training while the BMDS network remained operational during this test. This capability allows MDA to conduct development activities while maintaining readiness to engage in missile defense operations. This ground test also allowed several BMDS elements to demonstrate that they could successfully exchange data with other elements. Additionally, in December 2008 numerous elements worked together to support system-level post-flight reconstruction needed to validate BMDS models and simulations. This system-level post-flight reconstruction for flight test FTX-03 was the first ever and was highly successful because different MDA groups achieved the same results, according to MDA officials. MDA took significant steps forward in fiscal year 2009 in developing the modeling and simulation tools necessary to understand BMDS performance against strategic and theater/regional threats. Because the potential combinations of BMDS configurations, intercept scenarios, and missile threats are too numerous for ground and flight testing, assessing overall BMDS performance depends upon the use of models and simulations to understand the capabilities and limitations of the system. Such an end-to-end system-level simulation brings together the capabilities of various element models in order to analyze how the BMDS integrated and fielded radars, communication networks, and interceptors perform during scenarios. However, to work effectively these models and simulations need to be anchored to data from ground and flight tests and validated by independent evaluators—the BMDS Operational Test Agency—in order to have confidence in their results. Moreover, the system-level simulation itself is expected to change over time as additional models become available to represent the evolving BMDS configuration. In March 2009 we reported that MDA experienced several problems in its overall modeling and simulation program, which negatively affected the 2007 performance assessment and led to the cancellation of the 2008 performance assessment. Performance Assessment 2007 was unsuccessful primarily because of inadequate flight and ground test data for verification and validation to support accreditation and a lack of common threat and environment input data among element models. MDA officials canceled their 2008 performance assessment efforts in April 2008 because of developmental risks associated with modeling and simulations, focusing instead on testing and models for Performance Assessment 2009. In fiscal year 2009, MDA made some progress integrating the individual element models and simulations for Performance Assessment 2009. 
A leading accomplishment was the development of a system-level simulation for regional and theater scenarios in addition to existing strategic scenarios for a more complete analysis of BMDS performance. Performance Assessment 2007 only included homeland defense scenarios against strategic threats. One of MDA’s goals for the performance assessment is the integration of models that communicate like the networked BMDS. As of October 2009, Performance Assessment 2009 achieved interactive communications among the element models and simulations. In addition, MDA achieved consistency in representing the threat missile and post-intercept data among all models and scenarios, which was also a weakness of Performance Assessment 2007. Finally, the BMDS Operational Test Agency observed that conducting Performance Assessment 2009 is helping to build confidence in BMDS-level simulation capability for the subsequent Performance Assessment 2010.

In fiscal year 2009, MDA met many of its delivery goals. Four MDA elements—Aegis BMD, GMD, Sensors, and C2BMC—were scheduled to deliver a total of 41 assets and capabilities in fiscal year 2009. MDA delivered 34 of these assets, or 83 percent. Table 3 outlines BMDS asset deliveries in fiscal year 2009. Aegis BMD planned to install the Aegis Weapons System 3.6.1 software on 20 ships and deliver 10 SM-3 missiles in fiscal year 2009. The program met its goal to deliver the 10 missiles and began to deliver additional rounds, initially designated for 2010, ahead of schedule. However, the program fell behind on its goal of installing the 3.6.1 software on 20 ships, delivering 18 by the end of fiscal year 2009. Aegis BMD officials pointed out that all ship sets were available, but because of real-world national security situations, the ships themselves were not available for installations in fiscal year 2009. Nonetheless, one of the remaining ships was completed in December 2009 and another will be completed by March 2010. In fiscal year 2009, Aegis BMD also delivered an additional ship set with the next generation Aegis BMD Weapon System, 4.0.1, for a total of 19 ship deliveries. The GMD program also partially met its delivery goals in fiscal year 2009. The program delivered an additional silo at Vandenberg Air Force Base as planned, but lagged in its GBI deliveries. For example, in fiscal year 2009, GMD emplaced three interceptors that were initially planned for fiscal year 2008 and only one of the three interceptors planned for fiscal year 2009. The Sensors program met most of its delivery goals, successfully fielding a new near-term discrimination algorithm, activating an additional AN/TPY-2 radar site, and delivering an additional AN/TPY-2 radar. However, it fell short of meeting all of its delivery goals for the fiscal year. Although the program completed the construction for the Thule radar site ahead of schedule in fiscal year 2008, it was unable to deliver Thule radar communications and upgrades as planned in fiscal year 2009. These activities have been delayed until fiscal year 2010. Finally, C2BMC delivered four additional C2BMC Web browsers, five work stations, and an additional combatant command suite. Additionally, the program office rolled out the Global Engagement Manager suite and added four work stations that support it. However, it was unable to meet its schedule baseline goal of an additional fielding and site activation to declare its next spiral operational. 
This was due to major program restructures needed to accelerate C2BMC capabilities for other BMDS elements as well as programmatic changes to fulfill warfighter requests and meet new administration direction.

While there was progress in addressing concerns about test planning and target development as well as in delivering assets, all BMDS elements experienced delays in conducting tests, were unable to accomplish all planned objectives, and experienced performance challenges. Poor target performance continued to be a problem, causing several test delays and leaving several test objectives unfulfilled. The test problems also precluded the agency from gathering key knowledge through tests specified by the MDA Director that were originally planned to be completed in fiscal year 2008. MDA’s efforts to develop advanced algorithms and its efforts to demonstrate homeland defense were also affected by target issues. These shortfalls in testing continued to delay validation of the models and simulations used to assess the overall performance of the BMDS. Consequently, comprehensive assessments of the capabilities and limitations of the BMDS are still not possible. MDA also redefined its schedule baseline, eliminating goals for delivering integrated capabilities, so we were not able to assess MDA’s progress in this key area.

During fiscal year 2009, although several tests showed progress in individual elements and some system-level capabilities, all BMDS elements experienced test delays and shortfalls in part because of problems with the availability and performance of target missiles. None of the elements conducted all planned tests as scheduled and none achieved all planned objectives. Table 4 outlines BMDS test and target issues in fiscal year 2009. Two BMDS elements—ABL and C2BMC—experienced delays in achieving fiscal year 2009 test events. For example, ABL experienced delays in development and ground testing that resulted in the delay of its first full flight test demonstration until fiscal year 2010. Additionally, C2BMC was unable to conduct testing needed to further develop its next spiral capability because of BMDS-level delays in developing the models and simulations needed to conduct this testing. Major program restructures needed to accelerate C2BMC capabilities for other BMDS elements and programmatic changes to fulfill warfighter requests and meet new administration direction also contributed to C2BMC’s inability to conduct planned fiscal year 2009 testing.

As noted in table 4, targets affected the BMDS test program for four elements in fiscal year 2009. The Aegis BMD, GMD, Sensors, and THAAD test programs were affected by either target availability or target reliability and performance issues. In fiscal year 2009, targets contributed to a test cancellation and test delays and prevented elements from completing tests or achieving all test objectives. One test for Aegis BMD—FTM-15—was originally projected to use the new Flexible Target Family’s LV-2 target in fiscal year 2008, but because of qualification difficulties, the target was unavailable and the test was not conducted. This test was planned as the first Aegis BMD SM-3 engagement against an intermediate-range target. It was also expected to verify interoperability of Aegis BMD, a Sensors radar, and C2BMC. As of December 2009, MDA had canceled the test and planned to combine several of the FTM-15 objectives with those in a future flight test in 2013—FTM-23. 
However, as of February 2010, the Director of MDA stated that the test is being rescheduled for 2011. Test documentation was not provided for our review, so it remains unclear whether the test will include the original test objectives, target, and BMDS hardware and software configurations.

The GMD and Sensors programs were also unable to complete all planned objectives because of a target failure during an intercept test. During a December 2008 flight test—FTG-05—the target failed to release planned countermeasures. A similar target failure was experienced in a prior 2008 test—FTX-03—and MDA’s risk assessments leading up to the FTG-05 test could not determine the root cause of the failure. These risk assessments determined that a similar failure would be “likely” and the consequences “severe” if MDA proceeded with the test in December 2008, even after taking mitigation steps. According to the Defense Contract Management Agency, the cost to execute FTG-05 exceeded $210 million. This was the last planned flight test using this type of target. As a result of the target failure, GMD was unable to assess the Capability Enhancement-I kill vehicle against countermeasures. According to the July 2009 Integrated Master Test Plan, this test is now planned to be conducted in the third quarter of fiscal year 2011—nearly 4 years after this configuration completed fielding. The GMD program had to delay its second planned fiscal year 2009 intercept test—FTG-06—to fiscal year 2010 because pretest analysis raised concerns that the target may not perform as required. This test was important because it was planned as the first test of GMD’s enhanced version of the kill vehicle called the Capability Enhancement II exoatmospheric kill vehicle. This test was also designed to demonstrate a long flight time for the GBI and GMD’s capability against countermeasures. In early 2009, MDA altered the target to present a more representative threat. Since MDA did not have modeling data to represent the new characteristics of the target, MDA officials were concerned about the target’s expected performance and decided to delay the test. In January 2010, MDA conducted FTG-06. However, not all test objectives were met, as the GBI failed to intercept the target as planned. According to an MDA official, a Failure Review Board was convened to investigate the test results, but its investigation is expected to take months to complete.

As we reported in March 2009, THAAD program officials had to reschedule the planned fiscal year 2008 BMDS-level event, FTT-10, into fiscal year 2009 because of a target malfunction. THAAD successfully completed this test event in fiscal year 2009. In addition, a Short Range Air Launch Target planned for use in a third quarter fiscal year 2009 THAAD flight test, FTT-11, had a component failure and subsequently needed to be requalified. This failure caused the THAAD program to modify its planned flight test objectives and move the test into fiscal year 2010, also resulting in delays to a subsequent test—FTT-12. FTT-11 was conducted in December 2009 but could not be completed due to failure of the target missile. The air-launched target was successfully deployed from a transport aircraft, but the target’s rocket motor did not ignite. The THAAD interceptor was not launched and test objectives were not achieved. According to the Director of MDA, the Failure Review Board was concluding its investigation of the root cause of this failure. The board’s report was not available during our audit. 
Target reliability and failures in fiscal year 2009 also prevented several elements from achieving all planned objectives. In March 2009, Aegis BMD experienced target difficulties when two refurbished lower-cost Army targets for a short-range mission fell short of their expected trajectory. One target was outside the intercept control area and Aegis BMD was not able to fire the interceptor because of safety limitations. In the second test, the target, while short of its expected trajectory, fell in the intercept control area and was successfully intercepted. It will be several years before MDA’s new approach to target development and acquisitions is fully implemented because most targets needed through fiscal year 2011 are already under contract and will not be affected by the new strategy. The activities under existing contracts will not be complete until 2013. Moreover, MDA’s implementation of a new acquisition management strategy does not necessarily mean that any particular target currently being used, such as the LV-2, will be phased out of the test program. MDA could decide to continue to use an existing target under the new strategy, and as a result, some existing target missiles could continue to be procured under new contracts.

MDA has not presented a complete business case for proceeding with a new target acquisition management strategy. A complete business case includes establishing top-level cost, schedule, and performance baselines available internally and externally for oversight. It is the essential first step in any acquisition program because it sets the stage for acquisition and execution. Program officials told us that they would have cost, schedule, and performance baselines finalized and documented as part of the decision to proceed with new contract awards. These baselines, however, will be very detailed and spread across multiple documents and therefore are unsuitable for internal and external oversight. The officials further stated that they do not intend to establish top-level cost, schedule, and performance baseline measures similar to approved program baselines that are established for DOD’s major defense acquisitions to provide accountability. In September 2008, we reported that MDA had difficulty in developing and supplying new targets in part because a sound business case was not developed before significant decisions were made. In that report we recommended that MDA develop cost, schedule, and performance baselines as part of an effort to establish a sound business case for each new class of target under development. As part of the new target development efforts, MDA also developed a new cost model. However, because the cost model and test baseline are continually updated, the Targets and Countermeasures program continues to lack solid cost baselines against which progress can be measured. According to the Director of MDA, the agency will continue to update its cost model as the Integrated Master Test Plan changes, noting that where the technical content of the test plan remains constant, cost, schedule, and performance baselines can be measured from year to year. However, as we reported in March 2009, the Integrated Master Test Plan changes frequently. In fact, the latest approved version is dated July 2009, and according to MDA’s Director, a revised version of the Integrated Master Test Plan is expected in March 2010, which limits the baseline’s stability to approximately 8 months and limits our ability to measure MDA’s progress against a cost baseline. 
MDA’s ability to develop an accurate cost baseline is also affected by the lack of historical data available for targets or for other similar missiles. Program officials said that they are now collecting more useful cost data for new contracts by requiring more detailed cost reporting from their contractors. This approach will allow program officials to gather more complete and accurate data over time to make the new cost model a more powerful cost estimating tool. The inability of MDA to successfully conduct its test plan precluded the agency from collecting critical information needed for key decisions and significantly affected development of advanced algorithms and homeland defense capabilities. In fiscal year 2009, MDA was unable to accomplish any of the Director’s knowledge points that were to be achieved through tests. Several of these tests were originally planned for fiscal year 2008, but were delayed into 2009 and then again delayed into fiscal years 2010 and 2011. Table 5 shows the original test date and MDA’s current estimate for obtaining the necessary knowledge. Target issues continued to affect MDA’s ability to fully develop algorithms needed for discrimination capability. In March 2009, we reported that multiple elements experienced test failures which caused delays in collecting data needed to develop discrimination capability. For example, in 2007, two THAAD radar characterization tests were unsuccessful because of target anomalies. These tests were designed with characteristics needed for radar observation in support of advanced discrimination algorithm development. However, target problems prevented an opportunity for the radar to exercise all of the planned algorithms, causing a loss of expected data. Similarly, in a 2008 sensor characterization test, the target failed to release its countermeasures, which prevented the sensors from collecting expected data. Consequently, MDA was unable to fully develop discrimination algorithms as planned. In fiscal year 2009, MDA continued to be unable to develop its advanced algorithms as planned as key tests that were designed to reduce the maturation risk were affected by targets. For example, the Sensors and GMD elements were unable to collect data to develop their advanced algorithms when the target failed to release countermeasures and present the expected scene complexity during FTG-05. The subsequent delay to the next intercept test—FTG-06—until January 2010 has also reduced the data MDA had expected in fiscal year 2009 for the development of discrimination capability. Additionally, target unavailability caused MDA to delay a THAAD test—FTT-11—from fiscal year 2009. This test was designed to provide data for the development of advanced algorithms for the THAAD radars. The test was conducted in fiscal year 2010 but could not be completed because the target malfunctioned during deployment. According to the Director of MDA, the Failure Review Board was concluding its investigation of the root cause of this failure. The board’s report was not available during our audit. Likewise, GMD continues to experience delays demonstrating increased interceptor performance for homeland defense as the two aforementioned tests—FTG-05 and FTG-06—were not conducted as planned. As we testified in February 2009, MDA had expected to conduct seven GMD interceptor flight tests from the start of fiscal year 2007 through the first quarter of fiscal year 2009. 
However, MDA was able to conduct only two, which, according to the Director of Operational Test and Evaluation, has limited the complete sets of information necessary for validating ground-based interceptor models. MDA also delayed the other planned flight test, FTG-06, beyond fiscal year 2009 because of target issues and an anomaly with a component of the Sea-Based X-band radar. As of June 2009, MDA estimated this test to cost over $236 million, while the Defense Contract Management Agency estimated the cost to exceed $310 million. These costs are likely understated because they do not include all of the cost increases of delaying the test first to September 2009, nor do they include any cost increases of further delaying the test until the second quarter of fiscal year 2010. Although the Aegis BMD missile—SM-3 Block IA—capability against an intermediate-range ballistic missile is not a requirement, MDA has for years planned and invested millions of dollars to test the Aegis BMD system and SM-3 Block IA interceptor against this type of threat. At the start of fiscal year 2009, Aegis BMD officials intended to conduct this test in the third quarter of fiscal year 2009. However, as of December 2009, MDA had canceled the test and planned to combine several objectives with those in a future flight test in 2013. As of February 2010, the Director of MDA stated that the test is being rescheduled for 2011. Test documentation was not provided for our review, so it remains unclear whether the test will include the original test objectives, target, and BMDS hardware and software configurations.

MDA’s new July 2009 test plan was intended to provide stability; however, program officials already anticipate major revisions and alterations. According to MDA officials, budget decisions and the presidential decision to implement a European phased, adaptive approach drove changes to the test and targets program. For example, the new strategy for European missile defense will primarily utilize Aegis BMD interceptors as opposed to GMD interceptors. Tests in support of developing this capability have not yet been added to the test plan. The Director of MDA stated that his agency is coordinating with the Office of the Director, Operational Test and Evaluation and with the BMDS Operational Test Agency to address these changes. According to the Director of MDA, flight and ground testing to support phases one through four of the Phased Adaptive Approach will be baselined in the March 2010 Integrated Master Test Plan, but the test plan was not available for our review during our audit. One way MDA’s new testing approach was intended to provide stability is that it was structured to slow the spiral development fielding process, allowing the warfighter to gain confidence in the BMDS before fielding decisions are made. However, BMDS Operational Test Agency officials told us that changes to hardware and software configurations need to follow the process jointly agreed to with MDA, noting that changes to the operational baseline should not occur until the appropriate developmental tests and operational tests have been completed. From the adoption of the new test plan through October 2009, MDA continued to incorporate software changes as updates to the operational baseline. According to Operational Test Agency officials, most of the proposed and approved software changes had not been through system-level testing and immediately made future test configurations in the Integrated Master Test Plan invalid. 
Changes made without full system-level testing could result in adverse effects to the BMDS and to the warfighter’s ability to use the system effectively. The BMDS Operational Test Agency continues to work with MDA on these issues. BMDS Operational Test Agency officials told us that they have seen improvements since October 2009, noting that there has been an increase in early coordination and presentation of data to support interim releases of software and hardware. According to these officials, these improvements, coupled with the new warfighter- and MDA-accepted approach for testing—allowing developmental testing to occur before operational testing and before new capabilities are delivered to the warfighter—will likely resolve the issues encountered with frequent changes to software and hardware.

We testified in February 2009 that the success of MDA’s new approach to testing hinges on providing sufficient resources, among other factors. However, these resource challenges continue to affect the test plan because MDA’s new test plan was not fully resourced when it was approved in July 2009. In addition, BMDS Operational Test Agency officials raised concerns that the Integrated Master Test Plan is not currently resourced to support the necessary personnel to analyze the tests or the performance assessment. Until the new development efforts are fully reflected in the test plan, MDA will also not be able to fully integrate that plan with other key aspects of testing and development, such as the acquisition of targets. The test plan is one of six management baselines MDA uses to track program progress. However, MDA determined that these baselines consist of a disparate set of non-integrated business processes. More importantly, MDA acknowledged that there is inconsistent management, configuration control, integration, and synchronization of existing manual processes. MDA is developing new business tools to automate the integration of these baselines and projects. While it will take several years for the agency to integrate these baselines using those tools and synchronize them with other key testing and development efforts, the initial capability to automatically integrate cost, schedule, and performance baselines will be available in early fiscal year 2011.

MDA models and simulations have not matured sufficiently to assess overall BMDS performance and may not fully mature until 2016, rather than 2011 as we reported last year. According to the BMDS Operational Test Agency, it could not project which models and simulations could be accredited for Performance Assessment 2009. It expects to make its determination in July 2010 at the earliest. Further, functionality shortfalls diminished the usable scope of Performance Assessment 2009, and integration issues have delayed its execution by at least 6 months. As a result, the BMDS Operational Test Agency did not use the Performance Assessment 2009 data in its 2009 annual operational assessment as it had once intended. According to these officials, because of the known limitations and the changes to the BMDS operational configuration that will occur in 2010, the BMDS Operational Test Agency also will not be able to use the results as part of its 2010 annual operational assessment. MDA officials acknowledged that their primary challenge for the next several years will be obtaining enough flight test data to anchor and accredit the models. 
Moreover, the BMDS Operational Test Agency is still concerned about the effect of artificialities in flight tests on the validation of models, particularly for GMD. The BMDS Operational Test Agency believes that the validation of models will improve as artificialities in flight tests are reduced. Another unresolved modeling and simulation weakness in the testing program is the representation of different weather conditions. MDA, in concert with the BMDS Operational Test Agency, is addressing modeling deficiencies with respect to weather conditions, but specific plans to resolve this weakness were not available during our audit. Finally, the BMDS Operational Test Agency anticipates that deficiencies in modeling the BMDS communications system at the regional and theater levels that exist in Performance Assessment 2009 will improve in the subsequent Performance Assessment 2010. In 2008, we assessed MDA's capability delivery progress against its integrated capability schedule goals and found that many had slipped to 2009. We are no longer able to assess MDA's progress in delivering integrated capabilities because, in fiscal year 2009, the agency eliminated integrated capability delivery goals from its schedule baseline. In its most recent BMDS Accountability Report, MDA redefined its schedule baseline to consist solely of hardware and software deliveries spread across fiscal years. MDA assigned schedule metrics to asset deliveries on an element level only and removed from its August 2009 BMDS Accountability Report the key schedule measures—engagement sequence groups—that tracked integrated block capability deliveries and provided a means for assessing the readiness of BMDS capabilities, integration, and functionality. Thus, MDA provided no information about its progress and plans to deliver integrated BMDS capabilities. MDA previously identified its capability delivery schedule goals and baselines within the block structure, in terms of assets and engagement sequence groups made available for fielding in a particular time frame. Under this capabilities-based five-block acquisition management strategy, some blocks contained schedule baselines for deliveries of significant increments of capabilities against particular threats, culminating in the full capability declaration at a projected date. According to MDA, engagement sequence groups created manageable combinations of system configurations and provided a structure to assess BMDS performance. Because MDA presented early, partial, and full capability delivery dates for individual engagement sequence groups, these groups served as a baseline for measuring the schedule of integrated capability deliveries. MDA officials told us that the agency eliminated engagement sequence groups as measures of integrated capability deliveries to address warfighter concerns. According to MDA officials, the warfighter did not assess engagement sequence groups since they were organized in a way that did not align with warfighter operations, tactics, and procedures. During our audit, MDA had not replaced these previously reported integrated capability delivery baselines with new metrics. However, according to the Director of MDA, the agency is working to develop new baselines and schedules from which progress can be measured. In addition, agency officials told us that MDA is transitioning to an incremental BMDS capability delivery concept.
However, MDA did not provide a definition of incremental BMDS capability deliveries or define them as schedule goals in the August 2009 BMDS Accountability Report. MDA also did not identify anticipated delivery dates for its performance metrics; however, the Director of MDA stated that developmental baselines are expected to be developed, reviewed, and approved by the third quarter of fiscal year 2010. Furthermore, major MDA documents designed to communicate MDA's BMDS schedule are not synchronized. Although MDA officials told us that they have recently synchronized the Integrated Master Schedule with the Integrated Master Test Plan, the schedules in the two documents still do not correspond to the BMDS Master Plan. The Integrated Master Test Plan will be revised in February 2010, rendering all three documents again unsynchronized with MDA's acquisition strategy and programmatic decisions. While it has eliminated its externally reported integrated capability declaration goals, MDA continues to internally track capability declarations for at least two of its assets—the Sea-Based X-band radar and the Shariki AN/TPY-2 radar—whose capability declarations slipped again in fiscal year 2009. The Sea-Based X-band radar partial capability declaration appears to have slipped from fiscal year 2009 to fiscal year 2010, while full capability will be declared with less knowledge than initially planned. According to MDA officials, the agency was planning for a partial capability declaration in June 2009, following successful execution of four test events—GTI-03, FTX-03, FTG-05, and GTD-03—and analysis. However, these events slipped over the course of the year, and according to MDA, the partial capability declaration was delayed to fiscal year 2010. According to the Director of MDA, the capability declaration is currently planned to occur after analysis can include both FTG-06 and a test—CD-03—planned for September 2010. It remains unclear what effect the problems encountered in FTG-06 will have on the declaration decision. MDA planned for the Shariki radar to reach a full capability declaration by December 2008, but that milestone was subsequently delayed to July 2009. The radar was to undergo the military mission capability assessment, in which the warfighter verifies the radar's readiness for full operational use by the services in the context of the present BMDS architecture. To date, the full capability declaration has not been made. Consequently, the date for the full mission capability has not been determined. Furthermore, as with the Sea-Based X-band radar, the decision has not been made as to whether the Shariki radar capability declaration process will continue under the original plan or migrate to the new approach. Despite testing delays, developmental problems, and the continued inability to complete the Director's test-related knowledge points, MDA proceeded with manufacturing, production, and fielding of BMDS assets prior to operational testing and evaluation. The Aegis BMD program intends to execute a contract modification in the second quarter of 2010 to acquire 18 operationally configured SM-3 Block IB missiles to be used for testing and fielding. These 18 SM-3 Block IB missiles were originally justified in the fiscal year 2010 budget request as needed for flight testing and for delivery to the fleet as operational assets.
According to MDA’s September 2009 SM-3 Block IB utilization plan, 2 missiles are to be used for flight tests, 10 are to be used for fleet deployment, and 6 are to be used for either fleet proficiency training or fleet deployment. However, MDA is proceeding with the contract modification even though flight testing of a fully integrated prototype for this missile type in an operational environment will not have occurred. The first flight test—FTM-16—that could demonstrate some performance of the missile is currently scheduled for the third quarter of fiscal year 2011. In addition, the program is still maturing several critical technologies, such as the throttleable divert and attitude control system, and developmental testing of these technologies will not be complete until after the manufacturing decision for these 18 missiles. The manufacturing decision is also scheduled to occur almost a year before the manufacturing readiness review—currently scheduled for the second quarter of fiscal year 2011. Consequently, approval for production of this missile is scheduled before the results of developmental testing to demonstrate that the technologies and design are fully mature, before the first flight test demonstrates the system functions as intended, and before the readiness to begin manufacturing has been assessed—all of which increases the risk of costly design changes while production is underway. The Director of MDA and the Assistant Secretary of the Navy for Research, Development and Acquisition approved a developmental baseline in January 2010 that set production criteria and projected an initial production decision for 74 SM-3 Block IB missiles in the third quarter of fiscal year 2011. GMD continues to manufacture and field the Capability Enhancement II exoatmospheric kill vehicle prior to having it verified through operationally realistic flight testing. In March 2009, we reported that MDA had planned to conduct an intercept test to assess Capability Enhancement II exoatmospheric kill vehicle in the first quarter of fiscal year 2008—months before emplacing any interceptors with this configuration. However, developmental problems with the new configuration’s inertial measurement unit and problems with the target delayed the first flight test with the Capability Enhancement II configuration—FTG-06—until the fourth quarter of fiscal year 2009. This test was again delayed because of modeling uncertainties with the target and failures experienced with the Sea-Based X-Band radar during testing. GMD officials stated that they do not plan to adjust deliveries of the Capability Enhancement II exoatmospheric kill vehicle because of the test delay. However, MDA officials told us that they will not add Capability Enhancement II to the operational baseline until after FTG-06 has been conducted. As previously noted, FTG-06 was conducted in January 2010 but was unsuccessful. According to the July 2009 revised Integrated Master Test Plan, the next planned intercept test with a similar configuration as FTG-06—a three-stage booster and a Capability Enhancement II exoatmospheric kill vehicle—is not scheduled to take place until at least fourth quarter fiscal year 2012. If MDA delivers Capability Enhancement II exoatmospheric kill vehicle units as currently scheduled, it will have delivered all of the Capability Enhancement II exoatmospheric kill vehicles that are currently under contract before the test is conducted. 
MDA’s concurrent approach to developing and fielding assets has led to concerns about the performance of some fielded assets. In March 2009, we reported that MDA had initiated a refurbishment program in 2007 to replace questionable parts and that some improvements had already been introduced into the manufacturing flow. However, according to program officials, they discovered additional problems during early refurbishments, causing the program to expand its effort. Additionally, as MDA continues to manufacture ground-based interceptors, it is discovering additional process and design issues, and the corrective actions are being incorporated into the refurbishment program. The program has three categories of refurbishment—minimal, moderate, and extensive—and the cost of refurbishment varies from vehicle to vehicle. MDA originally estimated that the cost for extensive refurbishment of an individual interceptor could reach as high as $24 million. MDA continues to face challenges with transparency, accountability, and oversight controls and mechanisms. In establishing MDA in 2002, the Secretary of Defense directed the agency to develop the BMDS as a single program using a capabilities-based, spiral upgrade approach to quickly deliver a set of integrated defensive capabilities. To accomplish this mission, MDA was granted exceptional flexibility in setting requirements and managing the acquisition. This flexibility allowed MDA to begin delivering an initial defensive capability in 2004, but at the expense of transparency and accountability. Since our first MDA report in 2004, we have repeatedly found that MDA’s approach for building its cost, schedule, and performance goals hindered transparency and limited accountability of the BMDS development effort. Specifically, in April 2004, we reported that MDA’s goals did not provide a reliable and complete baseline for accountability purposes and decision making because these goals varied year to year, did not include all associated costs, and were based on assumptions about performance that were not explicitly stated. These conclusions still hold true for several aspects of the BMDS acquisition strategy. For example, MDA’s goals change continuously, cost baselines have yet to be established, and some details regarding performance goals are still not explicitly stated. Since 2004, we have also made recommendations to develop baselines and report variances to those baselines to promote a higher level of transparency and accountability for the agency; to adjust its block strategy to ensure that it was knowledge-based and aligned with agency goals; and to strengthen oversight by, for example, having the Missile Defense Executive Board (MDEB) consider the extent to which MDA could adapt and adopt aspects of DOD’s standard acquisition policies to enhance oversight. Members of Congress have also expressed concerns regarding the block strategy, acquisition management strategy, accountability, and oversight of MDA.
For example, in 2007, the House Appropriations Committee directed MDA to “develop a system-wide plan to report according to the spirit of existing acquisition laws to improve accountability and transparency of its program.” More recently, in the National Defense Authorization Act for Fiscal Year 2008, Congress required MDA to establish acquisition cost, schedule, and performance baselines for each system element that has entered the equivalent of the systems development and demonstration phase of acquisition or is being produced or acquired for operational fielding. MDA is not yet fully compliant with this requirement. Officials indicated that they are working toward fulfilling it, but the expected date for full compliance was unknown at the time of our audit. While MDA has committed to taking actions to address concerns about accountability and transparency, it has made limited progress in implementation, as shown in table 6. MDA’s termination of its capabilities-based block approach in June 2009 marked the third acquisition management strategy for the BMDS in the last 3 years and effectively reduced transparency and accountability for the agency. As previously noted, MDA has organized the development of the BMDS using two different block approaches in the past—(1) sequential 2-year blocks of BMDS-wide integrated capabilities and (2) five capabilities-based blocks of different MDA elements against particular threats. Changing the block structure is problematic because each change obscures the connection between the scope and resources of the old structure and the rearranged scope and resources of the new one. This makes it difficult for decision makers to hold MDA accountable for expected outcomes and clouds transparency of the agency’s efforts. In March 2008, we reported that the agency’s capabilities-based block approach had begun to provide improvements to transparency and accountability, but as we recommended, transparency and accountability could have been further improved with MDA’s development and reporting of full acquisition cost estimates as well as independent verification of those costs. Although key controls and mechanisms needed to establish a sound acquisition process for MDA are still lacking, MDA has initiatives under way that could improve the transparency, accountability, and oversight of the acquisition of the BMDS. In June 2009, the MDA Director testified before the Senate Armed Services Committee that MDA is responding to the Weapon System Acquisition Reform Act of 2009 through the establishment of acquisition milestone decisions. These decisions are designed to ensure appropriate competitive acquisition strategies. He further noted that as the Acquisition Executive for the initial phases of missile defense, he is implementing milestone review and baseline reporting processes that are closely aligned with the principles of DOD’s acquisition policies, commonly referred to as the DOD 5000 series. He added that he recognized the need to incorporate the tenets of the DOD 5000 series to ensure that programs are affordable, are justified by the warfighter, and demonstrate acceptable risk through a milestone review process overseen by the MDEB. He also stated that MDA intends to separate the management of its technology and development programs.
The Director testified that under his authority, potential programs that may provide technological or material solutions for MDA will undergo a Milestone “A” decision to determine if they should become programs. These technology-based programs will be managed by knowledge points and incubated until maturity, at which time MDA, along with the service acquisition executive, will be able to make a Milestone “B” decision as to whether the program should be converted to a development program. He explained that the Under Secretary of Defense for Acquisition, Technology and Logistics will make Milestone “C” production decisions regarding the programs. We were able to obtain only limited insight into these initiatives because the agency did not determine how they would be implemented until the end of our audit and was just beginning to implement them. With regard to the milestone decisions, the Director of MDA indicated that the agency is undertaking a baseline phase review process. The agency is transitioning to managing the six developmental baselines at the project element level. These baselines will be approved in developmental baseline reviews and managed through quarterly performance element reviews. MDA has identified three phases of development in which baselines are approved—technology development, product development, and initial production—which may ensure that the appropriate level of knowledge is obtained before acquisitions move from one phase to the next. The product development and initial production baselines will be jointly reviewed and approved by the Director of MDA and the respective service acquisition executive. In addition, while our draft was being reviewed by MDA, the Director of MDA provided us with initial information regarding the definition of these new phases and the process for establishing cost, schedule, or performance baselines. Based on our initial briefing on MDA’s new process, it may include many of the necessary elements of a sound business case—such as establishing top-level cost, schedule, and performance measures that are available internally and externally for oversight. Although we were unable to fully evaluate MDA’s new initiatives, these initiatives do offer an opportunity for the agency to increase transparency and accountability if they are implemented in accordance with knowledge-based acquisition principles, leading to the establishment of sound business cases and realistic baselines. Over the past 10 years, we have conducted extensive research on successful programs and have found that successful defense programs ensure that their acquisitions begin with realistic plans and baselines prior to the start of development. We have previously reported that the key cause of poor weapon system outcomes, at the program level, is the consistent lack of disciplined analysis that would provide an understanding of what it would take to field a weapon system before system development begins. We have reported that there is a clear set of prerequisites that must be met by each program’s acquisition strategy to realize successful outcomes. These prerequisites include the following: Establishing a clear, knowledge-based, executable business case for the product.
An executable business case is one that provides demonstrated evidence that (1) the identified needs are real and necessary and can best be met with the chosen concept and (2) the chosen concept can be developed and produced within existing resources—including technologies, funding, time, and management capacity. Knowledge-based acquisition principles and business cases combined are necessary to establish realistic cost, schedule and performance baselines. Without documented realistic baselines there is no foundation to accurately measure program progress. Separating technology development activities from product development activities. As noted earlier, the Director of MDA plans to separate technology development and product development for the BMDS. The process of developing technology culminates in discovery—the gathering of knowledge—and must, by its nature, allow room for unexpected results and delays. When immature technologies are brought onto the critical path of product development programs too early, they often cause long delays in an environment where large workforces must be employed; complex tools, plants, and facilities must be operated; long and expensive supplier networks must be paid; and the product itself must sometimes be redesigned once the final form of the technologies is known. Ensuring that only mature technologies are brought into product development is a key step for successful programs. Employing early systems engineering discipline in order to develop realistic cost and schedule estimates prior to development start. Early systems engineering provides the knowledge a product developer needs to identify and resolve performance and resource gaps before product development begins, either by reducing requirements, deferring them to the future, or increasing the estimated cost for the weapon system’s development. Requirements that are too risky given the state of technology and design should not be allowed into this expensive environment. MDA’s Director noted that he has taken steps to enhance systems engineering by designating a senior executive position to establish engineering policy, ensure the disciplined practice of systems engineering fundamentals, and develop the systems engineering competencies of the missile defense workforce; creating knowledge centers; and increasing the number of recent engineering school graduates. While these initiatives hold promise for the future, they could provide further enhancements if they are used as the foundation to develop realistic cost and schedule estimates for the BMDS. These practices could address MDA’s past problems of initiating programs and beginning system development based on limited systems engineering knowledge. These programs depended on critical technologies that were immature and not ready for product development or production. The Director of MDA acknowledged the importance of changing MDA’s acquisition approach to adopt knowledge-based acquisition processes. In order to respond to a presidential directive to deliver a missile defense capability in a rapid manner, MDA has been given unprecedented funding and decision-making flexibility. This flexibility has allowed concurrent development, testing, manufacturing and fielding and enabled MDA to quickly develop and field the first increment of capability in 2005. However, while this approach has expedited the fielding of assets, it also resulted in less transparency and accountability than is normally present in a major weapon program. 
Since the program’s inception, MDA’s lack of baselines and its management of the BMDS with high levels of uncertainty about requirements and program cost estimates effectively set the missile defense program on a path to an undefined destination at an unknown cost. Across the agency, these practices left programs with limited knowledge and few opportunities for crucial management oversight and decision making concerning the agency’s investment and the warfighter’s continuing needs. At the program level, these practices contributed to quality problems affecting targets acquisitions, which in turn, hampered MDA’s ability to conduct tests as planned. As MDA transitions to new leadership, a new acquisition strategy, a new test strategy, and a shift in emphasis toward early intercept capabilities, the agency has an opportunity to chart a course that enables transparency and accountability as well as flexibility, and it appears committed to doing so. Importantly, the Director of MDA has begun new initiatives in accordance with guiding principles of DOD’s acquisition policies, which already embrace knowledge-based practices and sound management controls. The Director of MDA intends to apply these new policies to each element or appropriate portions of the elements, as is currently done across DOD, in order to provide a foundation for the Congress and others to assess progress and hold senior leadership accountable for outcomes. These initial steps are promising, but it will take time to fully implement them and once implemented they will need to be sustained and the tools consistently used in order to establish accountability. If this is done effectively, with baselines set at a program level, MDA can respond to strategic changes affecting the overall configuration of the system without losing basic knowledge about cost, schedule, and performance. Such actions do not have to result in a slower or more burdensome acquisition process. In the past, weapon programs often rushed into systems development before they were ready, in part because DOD’s acquisition process did not require early formal milestone reviews and programs would rarely be terminated once underway. Over time, in fact, these changes could help programs replace risk with knowledge, thereby increasing the chances of developing weapon systems within cost and schedule targets while meeting user needs. As MDA implements its initiatives to improve transparency, accountability, and oversight, and begins efforts to manage and oversee MDA at the element level, we recommend that the Secretary of Defense direct MDA to take the following eight actions: Establish cost, schedule, and performance baselines for the acquisition of each new class of target when it is approved by the Director prior to proceeding with acquisition and report those baselines to Congress. Obtain independent Cost Assessment and Program Evaluation cost estimates in support of these cost baselines. Ensure that program acquisition unit costs for BMDS assets are reported in the BMDS Accountability Report, to provide Congress with more complete and comprehensive information by including development costs. Update DOD’s Plan to Enhance the Accountability and Transparency of the Ballistic Missile Defense Program to reflect MDA’s current initiatives and include dates for fulfilling each commitment. Report top-level test goals for each element, or appropriate portions thereof, to Congress in the next BMDS Accountability Report. 
Develop and report to Congress in the annual BMDS Accountability Report a measure for schedule baseline goals that incorporates delivering integrated capabilities to the warfighter. Develop and report to Congress in the annual BMDS Accountability Report the dates at which performance baselines will be achieved. Report to Congress variances against all established baselines. Several of these actions, such as establishing cost, schedule, and performance baselines, have been recommended in prior GAO reports or addressed in legislation. This report, however, restates these recommendations in the context of changes made to the missile defense program, for example, the deletion of the block structure and increased focus on elements. We further recommend that the Secretary of Defense direct MDA to take the following two actions: Delay the manufacturing decision for SM-3 Block IB missiles intended for delivery to the fleet as operational assets until after (1) the critical technologies have completed developmental testing, (2) a successful first flight test demonstrates that the system functions as intended, and (3) the successful conclusion of the manufacturing readiness review. Ensure that developmental hardware and software changes are not made to the operational baseline that disrupt the assessments needed to understand the capabilities and limitations of new BMDS developments. DOD provided written comments on a draft of this report. These comments are reprinted in appendix I. DOD also provided technical comments, which were incorporated as appropriate. DOD fully concurred with 9 of our 10 recommendations, including our recommendation to establish cost, schedule, and performance baselines for the acquisition of each new class of target when it is approved by the MDA Director prior to proceeding with acquisition and report those baselines to Congress. In response to our recommendation, DOD commented that MDA has already established and the Director has approved cost, schedule, and performance baselines for the acquisition of each new class of target. The department noted that these baselines are contained in multiple documents and will be brought together in a Target Program Baseline prior to contract award. However, MDA should ensure that the Target Program Baseline establishes top-level cost, schedule, and performance baseline measures similar to approved program baselines that are established for DOD’s major defense acquisitions and available for internal and external oversight. It is unclear whether MDA will make its Target Program Baseline available internally for oversight and report it to Congress as we recommended. DOD partially concurred with our recommendation that the Secretary of Defense direct MDA to delay the manufacturing decision for SM-3 Block IB missiles intended for delivery to the fleet as operational assets until after (1) the critical technologies have completed developmental testing, (2) a successful first flight test demonstrates that the system functions as intended, and (3) the successful conclusion of the manufacturing readiness review. In response to this recommendation, DOD stated that manufacturing of SM-3 Block IB missiles to support testing is under way, but the production decision for SM-3 Block IB missiles used for fleet operation is planned to occur after criteria listed in our recommendation have been met. 
However, during our review, we found that the 18 SM-3 Block IB missiles in question were originally justified in the fiscal year 2010 budget request as needed for “flight testing and for delivery to the fleet as operational assets.” In addition, Aegis BMD Program Office responses related to this matter indicate that these missiles will be used operationally if a security situation requires it. Furthermore, according to MDA’s September 2009 SM-3 Block IB utilization plan briefed and approved by the MDA Acquisition Strategy Board, only 2 of these missiles are specifically designated for flight tests, while 10 are to be used for fleet deployment and 6 are to be used for either fleet proficiency training or fleet deployment. Based on this information, the contract modification to acquire these 18 SM-3 Block IB missiles will take place before the critical technologies are fully matured, which will not occur until the conclusion of FTM-16—the first SM-3 Block IB end-to-end flight test of a fully integrated, production-representative prototype. Thus, we maintain that approval for manufacturing of these 18 SM-3 Block IB missiles—the majority of which will be deployed to the fleet—is scheduled to occur before the results of developmental testing demonstrate that the technologies and design are fully mature, before the first flight test demonstrates the system functions as intended, and before the readiness to begin manufacturing has been assessed—all of which increase the risk of costly design changes and retrofit. We are sending copies of this report to the Secretary of Defense and to the Director of MDA. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To examine the progress that eight Missile Defense Agency (MDA) elements have made during fiscal year 2009 toward schedule, testing, and performance baselines, we developed data collection instruments that were completed by each element’s program office. These instruments collected detailed information on prime contracts, design reviews, test schedules and results, element performance, noteworthy progress, lessons learned, and challenges facing the elements during the fiscal year. In addition, we reviewed individual element Program Execution Reviews, test plans and reports, production plans, staffer day briefings, and other requirements documents. We held interviews with officials in each element’s program office and followed up on the information we received with MDA’s Agency Operations Office; the Department of Defense’s (DOD) Office of the Director, Operational Test and Evaluation; and MDA’s Ballistic Missile Defense System (BMDS) Operational Test Agency. To further review individual element and BMDS-level performance progress during the fiscal year, we met with officials in MDA’s Modeling and Simulation Directorate at the Missile Defense Integration and Operations Center, individual element program offices, and MDA’s BMDS Operational Test Agency to discuss modeling and simulation plans and procedures as well as other performance metrics. We also reviewed DOD and MDA policies, memos, and flight test plans related to modeling and simulations.
In addition, we reviewed various elements’ verification, validation, and accreditation plans; MDA performance briefings; and the verification, validation, and accreditation plans for MDA’s BMDS Performance Assessment 2009. We assessed MDA’s testing and target development progress by reviewing MDA’s Integrated Master Test Plans, Integrated Master Schedule, target acquisition plan, and target business case analysis. In addition, we met with officials in the Targets and Countermeasures Program Office to obtain information on MDA’s acquisition management strategy, including plans for cost, schedule, and testing. We also met with MDA’s testing directorate, MDA’s BMDS Operational Test Agency, and DOD’s Office of the Director, Operational Test and Evaluation to discuss the progress, challenges, and lessons learned during fiscal year 2009 testing. To analyze MDA’s changing acquisition approach and the agency’s progress in addressing issues related to transparency, accountability, and oversight, we interviewed officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; MDA’s Agency Operations Directorate; MDA’s Advanced Technology Directorate; and MDA’s Office of Quality, Safety, and Mission Assurance Directorate. We also reviewed various MDA statements and documents related to MDA’s block structure. We reviewed DOD acquisition system policy and various DOD directives to gain insight into other DOD systems’ accountability and oversight mechanisms. We also analyzed MDA’s acquisition directives and Missile Defense Executive Board briefings to examine MDA’s current level of oversight. In addition, we reviewed MDA budget estimate submission justifications, Integrated Master Test Plans, the Ballistic Missile Defense Master Plan, the BMDS Accountability Report, and prior reports that outlined the agency’s baselines and goals. Our work was performed both at MDA headquarters in Arlington, Virginia, and at various program offices located in Huntsville, Alabama. In Arlington, we met with officials from the Aegis Ballistic Missile Defense Program Office; Airborne Laser Program Office; Command, Control, Battle Management, and Communications (C2BMC) Program Office; MDA’s Agency Operations Office; DOD’s Office of the Director, Operational Test and Evaluation; and the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. In Huntsville, Alabama, we interviewed officials from the Ground-based Midcourse Defense (GMD) Program Office, the Sensors Program Office, the Terminal High Altitude Area Defense Project Office, the Targets and Countermeasures Program Office, the Advanced Technology Directorate, and the Office of the Director for BMDS Tests. We met with officials from the Missile Defense Integration and Operations Center at Schriever Air Force Base in Colorado Springs, Colorado, to discuss the C2BMC and Space Tracking and Surveillance System elements as well as to receive further information on MDA’s models and simulations. Additionally, we interviewed Raytheon officials in Tucson, Arizona, to discuss the status of the Kinetic Energy Interceptor, GMD, and Aegis BMD elements. In December 2007, the conference report accompanying the National Defense Authorization Act for Fiscal Year 2008 noted the importance of DOD and MDA providing information to GAO in a timely and responsive manner to facilitate the review of ballistic missile defense programs. During the course of this audit, we experienced significant delays in obtaining information from MDA.
During the audit, MDA did not always provide GAO staff with expeditious access to requested documents and articles of information, which delayed some audit analysis and contributed to extra staff hours. Of the documents and information we requested, we received approximately 24 percent within the 10- to 15-business-day protocols that were agreed upon with MDA. Pre-existing documentation took MDA, on average, about 28 business days to provide, and many pre-existing documents took 40 business days or more to be provided to GAO. Notwithstanding these delays, we were able to obtain the information needed to satisfy our objectives in accordance with generally accepted government auditing standards. We conducted this performance audit from April 2009 to February 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, David Best, Assistant Director; LaTonya Miller; Ivy Hübler; Tom Mahalek; Steven Stern; Meredith Allen Kimmett; Wiktor Niewiadomski; Kenneth E. Patton; Karen Richey; Robert Swierczek; and Alyssa Weir made key contributions to this report.
By law, GAO is directed to assess the annual progress the Missile Defense Agency (MDA) made in developing and fielding the Ballistic Missile Defense System (BMDS). GAO also assessed MDA's progress toward improving accountability and transparency in agency operations, management processes, and its acquisition strategy. To accomplish this, GAO reviewed asset fielding schedules, test plans and reports, and pertinent sections of Department of Defense (DOD) policy to compare MDA's current level of accountability with that of other DOD programs. GAO's fiscal year 2009 assessment of MDA's cost, schedule, and performance progress is more limited than previous assessments because MDA removed key components of schedule and performance goals from its annual report of goals. In addition, though it had committed to doing so, MDA did not report total cost estimates in 2009. Fiscal year 2009 was an unprecedented year of transition for MDA as it experienced its first change of administration, its third Director, and shifts in plans for missile defense in Europe, as well as a shift in focus for technology development from intercepting missiles during the boost phase to the early intercept phase. Such changes present new challenges for MDA but also opportunities to strengthen acquisition approaches. (1) Progress: MDA achieved several accomplishments. For example, MDA revised its testing approach to better align tests with modeling and simulation needs and undertook a new targets development effort to resolve longstanding problems supplying sufficient and reliable targets. The agency also demonstrated increased levels of performance for some elements through flight and ground testing. Fiscal year 2009 testing indicates an increased level of interoperability among multiple elements, both improving system-level performance and advancing the BMDS models and simulations needed to predict performance. In addition, the agency delivered 83 percent of the assets it planned to deliver by the end of fiscal year 2009. (2) Challenges: While there was progress, all BMDS elements had delays in conducting tests, were unable to accomplish all planned objectives, and experienced performance challenges. Poor target performance continued to be a problem, causing several test delays and leaving several test objectives unfulfilled. The test problems also precluded MDA from gathering key knowledge and affected development of advanced algorithms and homeland defense. These test problems continued to affect the models and simulations used to assess the overall performance of the BMDS. Consequently, comprehensive assessments of its capabilities and limitations are still not possible. MDA also redefined its schedule baseline, eliminating goals for delivering integrated capabilities, so GAO was not able to assess progress in this area. Despite these problems, MDA proceeded with production and fielding of assets. (3) Transparency, Accountability, and Oversight: In 2009, the significant adjustments MDA made to its acquisition approach--terminating the block structure; reducing, eliminating, or not reporting key baselines; and terminating its capability declaration process--and adjustments to the material reported to Congress reduced the transparency and accountability MDA had begun to build. However, MDA is beginning to implement several initiatives--including the adoption of key principles of DOD acquisition regulations--that could improve transparency and accountability and lay the foundation needed for oversight.
If these initiatives are implemented in accordance with knowledge-based acquisition principles, an opportunity exists to improve the BMDS acquisition by ensuring MDA programs begin with realistic, transparent plans and baselines. While these initial steps hold promise, they will take time to fully implement, and once implemented, they will need to be sustained and consistently applied.
Roughly half of all workers participate in an employer-sponsored retirement, or pension plan. Private sector pension plans are classified as either defined benefit or defined contribution plans. Defined benefit plans promise to provide, generally, a fixed level of monthly retirement income that is based on salary, years of service, and age at retirement regardless of how the plan’s investments perform. In contrast, benefits from defined contribution plans are based on the contributions to and the performance of the investments in individual accounts, which may fluctuate in value. Examples of defined contribution plans include 401(k) profit-sharing and thrift-savings plans, stock bonus plans, and annuity plans. Labor’s Employee Benefits Security Administration (EBSA) oversees 401(k) plans—including the fees associated with running the plans— because they are considered employee benefit plans under ERISA. Enacted before 401(k) plans came into wide use, ERISA establishes the responsibilities of employee benefit plan decision makers and the requirements for disclosing and reporting plan fees. Typically, the plan sponsor is a fiduciary. A plan fiduciary includes a person who has discretionary control or authority over the management or administration of the plan, including the plan’s assets. ERISA requires that plan sponsors responsible for managing employee benefit plans carry out their responsibilities prudently and do so solely in the interest of the plans’ participants and beneficiaries. The law also provides Labor with oversight authority over pension plans. However, the specific investment products commonly contained in pension plans—such as company stock, mutual funds, collective investment funds, and group annuity contracts—fall under the authority of the applicable securities, banking, or insurance regulators: The Securities and Exchange Commission (SEC), among other responsibilities, regulates registered securities including company stock and mutual funds under securities law. The federal agencies charged with oversight of banks—primarily the Federal Reserve Board, the Treasury Department’s Office of the Comptroller of the Currency, and the Federal Deposit Insurance Corporation—regulate bank investment products, such as collective investment funds. State agencies generally regulate insurance products, such as variable annuity contracts. Such investment products may also include one or more insurance elements, which are not present in other investment options. Generally, these elements include an annuity feature, interest and expense guarantees, and any death benefit provided during the term of the contract. The number of defined contribution plans has increased since 1985, while the number of defined benefit plans has declined dramatically. Figure 1 shows the growth of defined contribution plans relative to that of defined benefit plans from 1985 to 2005. In 2005, more workers were covered by defined contribution plans than by defined benefit plans. In 1985, defined benefit plans covered approximately 29 million active participants, compared to 33 million active participants in defined contribution plans. By 2005, the difference in the numbers had become more pronounced, with roughly 21 million active participants covered by defined benefit plans and approximately 55 million active participants in defined contribution plans. Figure 2 shows the shift in active participants from defined benefit to defined contribution plans since 1985. 
With the growth in plans and participants, the majority of private pension plan assets are now held in defined contribution plans. As shown in figure 3, defined benefit plan assets decreased from $2.0 trillion in constant 2006 dollars, or about 66 percent of total private pension assets, in 1985 to $1.5 trillion, or just over 40 percent of the total, in 2005. Meanwhile, the number of 401(k) plans grew from less than 30,000 in 1985, or less than 7 percent of all defined contribution plans, to an estimated 417,000 plans, or about 95 percent of all defined contribution plans in 2005. During this same time period, the number of active participants in 401(k) plans increased from 10 million to 47 million, and plan assets increased from $270 billion to about $2.5 trillion in constant 2006 dollars. Based on industry estimates, equity funds accounted for nearly half of the 401(k) plan assets at the close of 2005. Equity funds are investment options, such as mutual funds, bank collective funds, life insurance separate accounts, and certain pooled investment products, that invest primarily in stocks (see fig. 4). Other plan assets were invested in company stock; stable value funds, including guaranteed investment contracts; balanced funds; bond funds; and money funds. Several of these options can be held in mutual funds, which in total represent about 51 percent of 401(k) plan assets. Common plan features, like the type and number of investment options provided to participants in their 401(k) plans, are a topic being studied under other GAO work on plan sponsor practices requested by this committee. With the growth in 401(k) plans, more workers now bear greater responsibility for funding their retirement income. According to the most recent data from Labor, the majority of 401(k) plans are participant-directed, meaning that a participant makes investment decisions about his or her own retirement plan contributions. In 2003, about 88 percent of all 401(k) plans—covering 93 percent of all active 401(k) plan participants and 92 percent of all 401(k) plan assets—generally allowed participants to choose how much to invest, within federal limits, and to select from a menu of diversified investment options selected by the employer sponsoring the plan. While some participants have account balances greater than $100,000, most have much smaller balances. Based on industry estimates for 2005, 37 percent of participants had balances of less than $10,000, while 16 percent had balances greater than $100,000. The median account balance was $19,328, while the average account balance was $58,328. Participants’ account balances also include any contributions employers make on their behalf. Fees are charged by the various outside companies that the plan sponsor—often the employer offering the 401(k) plan—hires to provide a number of services necessary to operate the plan. Services can include investment management (i.e., selecting and managing the securities included in a mutual fund); consulting and providing financial advice (i.e., selecting vendors for investment options or other services); record keeping (i.e., tracking individual account contributions); custodial or trustee services for plan assets (i.e., holding the plan assets in a bank); and telephone or Web-based customer services for participants.
Generally, there are two ways to provide services: “bundled” (the sponsor hires one company that provides the full range of services directly or through subcontracts) and “unbundled” (the sponsor uses a combination of service providers). Fees are one of many factors—such as the historical performance and investment risk for each plan option—participants should consider when investing in a 401(k) plan because fees can significantly decrease retirement savings over the course of a career. As participants accrue earnings on their investments, they pay a number of fees, including expenses, commissions, or other charges associated with operating a 401(k) plan. For example, a 1 percentage point difference in fees, compounded over a career, can significantly reduce the amount of money saved for retirement. Figure 5 assumes an employee who is 45 years of age with 20 years until retirement changes employers and leaves $20,000 in a 401(k) account until retirement. If the average annual net return is 6.5 percent—a 7 percent investment return minus a 0.5 percent charge for fees—the $20,000 will grow to about $70,500 at retirement. However, if fees are instead 1.5 percent annually, the average net return is reduced to 5.5 percent, and the $20,000 will grow to only about $58,400. The additional 1 percent annual charge for fees would reduce the account balance at retirement by about 17 percent. Various fees are associated with 401(k) plans, but investment and record-keeping fees account for most 401(k) plan fees. However, inadequate disclosure and reporting requirements may leave participants and Labor without important information on these fees. The information on fees that plan sponsors are required to disclose to participants does not allow participants to easily compare the fees for the investment options in their 401(k) plan. In addition, Labor does not have the information it needs to oversee fees and identify questionable 401(k) business practices. Labor has several initiatives under way to improve the information it has on fees and the various business arrangements among service providers. Investment fees account for the largest portion of total fees regardless of plan size, as figure 6 illustrates. Investment fees are, for example, fees charged by companies that manage a mutual fund for all services related to operating the fund. These fees pay for selecting a mutual fund’s portfolio of securities and managing the fund; marketing the fund and compensating brokers who sell the fund; and providing other shareholder services, such as distributing the fund prospectus. Plan record-keeping fees generally constitute the second-largest portion of plan fees. Plan record-keeping fees are usually charged by the service provider to set up and maintain the 401(k) plan. These fees cover activities such as enrolling plan participants, processing participant fund selections, preparing and mailing account statements, and other related administrative activities. Unlike investment fees, plan record-keeping fees apply to the entire 401(k) plan rather than the individual investment options. As shown in figure 7, these fees make up a smaller proportion of total plan fees in larger plans, indicating economies of scale. There are a number of other fees associated with establishing and maintaining a plan, such as fees to communicate basic information about the plan to participants.
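The fee-drag arithmetic behind the Figure 5 example can be reproduced with a short compound-growth calculation. The sketch below is illustrative only; it assumes annual compounding of a single lump sum with no further contributions, and the starting balance, return, fee levels, and 20-year horizon are taken from the example above.

```python
# Illustrative sketch of the Figure 5 example: a $20,000 balance left to compound
# for 20 years, with fees modeled as a reduction in the net annual return.
# Assumes annual compounding and no further contributions.

def balance_at_retirement(principal, gross_return, annual_fee, years):
    """Grow a lump sum at the gross return minus the annual fee, compounded yearly."""
    net_return = gross_return - annual_fee
    return principal * (1 + net_return) ** years

low_fee = balance_at_retirement(20_000, 0.07, 0.005, 20)   # roughly $70,500
high_fee = balance_at_retirement(20_000, 0.07, 0.015, 20)  # roughly $58,400
reduction = (low_fee - high_fee) / low_fee                  # roughly 17 percent

print(f"0.5% annual fee: ${low_fee:,.0f}")
print(f"1.5% annual fee: ${high_fee:,.0f}")
print(f"Balance reduction from the extra 1 percentage point: {reduction:.0%}")
```

Treating the fee as a flat reduction in the annual return is a simplification; in practice, asset-based fees are deducted throughout the year, but the magnitude of the effect is similar.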
However, these fees generally constitute a much smaller percentage of total plan fees than investment and plan record-keeping fees. Whether and how participants or plan sponsors pay these fees varies by the type of fee and the size of the 401(k) plan. Investment fees, which are usually charged as a fixed percentage of assets and deducted from investment returns, are typically borne by participants. Plan record-keeping fees are charged as a percentage of a participant’s assets, a flat fee, or a combination of both. Although plan sponsors pay these fees in a considerable number of plans, they are increasingly being paid by participants. ERISA requires that plan sponsors provide all participants with a summary plan description, account statements, and the summary annual report, but these documents are not required to disclose information on fees borne by individual participants. Table 1 provides an overview of each of these disclosure documents and the type of fee information they may contain. ERISA also requires 401(k) plan sponsors that have elected liability protection from participants’ investment decisions to provide additional fee information. Most 401(k) plan sponsors elect this protection and therefore must provide, among other information, a description of the investment risk and historical performance of each investment option available in the plan and any associated transaction fees for buying or selling shares in these options. Upon request, these plans must also provide participants with the expense ratio—a fund’s operating fees as a percentage of its assets—for each investment option. Plan sponsors may voluntarily provide participants with more information on fees than ERISA requires. For example, plans may distribute prospectuses or fund profiles for individual investment options in the plan. Although not required, plan sponsors may provide record-keeping or other information on fees in participants’ account statements. Although participants are responsible for directing their investments in the plan, they may not be aware of the different fees that they pay. In a nationwide survey, more than 80 percent of 401(k) participants reported not knowing how much they pay in fees. Some industry professionals said that making participants who direct their investments more aware of fees would help them make more informed investment decisions. Participants may not have a clear picture of the total fees they pay because plan sponsors provide this information in a piecemeal fashion. Some documents that contain fee information are provided to participants automatically, such as annually or within 90 days of joining the plan, while others, such as prospectuses, may require that participants seek them out. Furthermore, the documents that participants receive do not provide a simple way to compare fees among the investment options in their 401(k) plan. Industry professionals suggested that comparing the expense ratio across investment options is the most effective way to compare fees within a 401(k) plan. The expense ratio is useful because it includes investment fees, which account for most of the fees participants pay, and is generally the only fee measure that varies by option. However, as noted above, not all plan sponsors are required to provide expense ratios to participants. Labor has authority under ERISA to oversee 401(k) plan fees and certain types of business arrangements involving service providers, but lacks the information it needs to provide effective oversight.
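To illustrate the expense ratio comparison described above, the sketch below converts an expense ratio into an annual dollar cost for a given account balance. The fund names and ratios are hypothetical and are not taken from this testimony; the account balance used is the median 401(k) balance cited earlier.

```python
# Hypothetical illustration of comparing investment options by expense ratio.
# The expense ratio is a fund's annual operating fees as a percentage of its assets,
# so the approximate annual dollar cost is simply balance * expense_ratio.

account_balance = 19_328  # median 401(k) account balance cited above

options = {  # fund names and ratios are invented for illustration only
    "Index equity fund": 0.0020,              # 0.20% expense ratio
    "Actively managed equity fund": 0.0110,   # 1.10% expense ratio
}

for name, ratio in options.items():
    annual_cost = account_balance * ratio
    print(f"{name}: {ratio:.2%} expense ratio -> about ${annual_cost:,.2f} per year")
```

A comparison like this only captures asset-based investment fees; plan-level charges such as record-keeping fees, which may be billed as flat amounts, would need to be considered separately.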
Under ERISA, Labor is responsible for enforcing the requirements that plan sponsors (1) ensure that fees paid with plan assets are reasonable and for necessary services; (2) be prudent and diversify the plan’s investments or, if plan sponsors elect liability protection, provide a broad range of investment choices for participants; and (3) report information known on certain business arrangements involving service providers. Labor does this in a number of ways, including collecting some information on fees from plan sponsors, investigating participants’ complaints or referrals from other agencies on questionable 401(k) plan practices, and conducting outreach to educate plan sponsors about their responsibilities. However, the information plan sponsors are required to report to Labor is limited, and the lack of information hinders the agency’s ability to effectively oversee fees. Many of the fees are associated with the individual investment options in the 401(k) plan, such as a mutual fund; they are deducted from investment returns and not included on the annual reporting form plan sponsors submit to Labor, Form 5500. As a result, the Form 5500 does not include the largest type of fee, even though plan sponsors receive this information from the mutual fund companies in the form of a prospectus. In 2004, the ERISA Advisory Council concluded that Form 5500s are of little use to policy makers, government enforcement personnel, and participants in terms of understanding the cost of a plan and recommended that Labor modify the form and its accompanying schedules so that all fees incurred directly or indirectly can be reported or estimated. Without information on all fees, Labor’s oversight is limited because it is unable to identify fees that may be questionable. Labor and plan sponsors also may not have information on arrangements among service providers that could steer plan sponsors toward offering investment options that benefit service providers but may not be in the best interest of participants. For example, the SEC released a report in May 2005 that raised questions about whether some pension consultants are fully disclosing potential conflicts of interest that may affect the objectivity of the advice. Plan sponsors pay pension consultants to give them advice on matters such as selecting investment options for the plan and monitoring their performance and selecting other service providers, such as custodians, administrators, and broker-dealers. The report highlighted concerns that these arrangements may provide incentives for pension consultants to recommend certain mutual funds to a 401(k) plan sponsor and create conflicts of interest that are not adequately disclosed to plan sponsors. Plan sponsors may not be aware of these arrangements and thus could select mutual funds recommended by the pension consultant over lower-cost alternatives. As a result, participants may have more limited investment options and may pay higher fees for these options than they otherwise would. In addition, specific fees that are considered to be “hidden” may mask the existence of a conflict of interest. Hidden fees are usually related to business arrangements where one service provider to a 401(k) plan pays a third-party provider for services, such as record keeping, but does not disclose this compensation to the plan sponsor. For example, a mutual fund normally provides record-keeping services for its retail investors, i.e., those who invest outside of a 401(k) plan. 
The same mutual fund, when associated with a plan, might compensate the plan’s record keeper for performing the services that it would otherwise perform, such as maintaining individual participants’ account records and consolidating their requests to buy or sell shares. The problem with hidden fees is not how much is being paid to the service provider, but knowing what entity is receiving the compensation and whether the compensation fairly reflects the value of the service being rendered. Labor’s position is that plan sponsors must know about these fees in order to fulfill their fiduciary responsibilities. However, if plan sponsors do not know that a third party is receiving these fees, they cannot monitor them, evaluate whether the compensation is warranted by the services rendered, or take action as needed. Labor officials told us about three initiatives currently under way to improve the disclosure of fee information by plan sponsors to participants and to avoid conflicts of interest.

First, Labor is considering promulgating a rule regarding the fee information required to be furnished to participants in plans where sponsors have elected liability protection. According to Labor officials, they are attempting to define the critical information on fees that plan sponsors should provide to participants and what format would enable participants to easily compare the fees across the plan’s various investment options.

Second, Labor has proposed changes to the Form 5500 Schedule A and Schedule C to improve reporting of fees. Labor proposed to add a check box on Schedule A to improve the disclosure of insurance fees and commissions and to identify insurers who fail to supply information to plan sponsors. According to a 2004 ERISA Advisory Council report, many employers have difficulty obtaining timely Schedule A information from insurers. Consistent with recommendations made by the ERISA Advisory Council Working Groups and GAO, Labor proposed changes to the Schedule C to clarify that the plan sponsor must report any direct and indirect compensation (i.e., money or anything else of value) it pays to a service provider during the plan year. Plan sponsors also would be required to disclose the source and nature of compensation in excess of $1,000 that certain key service providers, including, among others, investment managers, consultants, brokers, and trustees as well as all other fiduciaries, receive from parties other than the plan or the plan sponsor, such as record keepers. Labor officials told us that the revision aims to improve the information plan sponsors receive from service providers. The officials acknowledge, however, that this requirement may be difficult for plan sponsors to fulfill without an explicit requirement in ERISA for service providers to give plan sponsors information on the fees they pay to other providers.

The third initiative involves amending Labor’s regulations under section 408(b)(2) of ERISA to define the information plan sponsors need in deciding whether to select or retain a service provider. According to Labor, plan sponsors need information to assess the reasonableness of the fees being paid by the plan for services rendered and to assess potential conflicts of interest that might affect the objectivity with which the service provider provides its services to the plan. This change to the regulation would be intended to make clear what plan sponsors need to know and, accordingly, what service providers need to provide to plan sponsors.
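The proposed Schedule C change described above is, at bottom, a reporting rule: identify compensation that key service providers receive from parties other than the plan or the plan sponsor and, when it exceeds $1,000, report its source and nature. The sketch below is a minimal illustration of that logic only; the record layout, provider and payer names, and the choice to aggregate amounts per provider are hypothetical assumptions, not Labor's actual Form 5500 format or instructions.

```python
# Illustrative sketch of the kind of check the proposed Schedule C change implies:
# flag key service providers that receive more than $1,000 in indirect compensation
# from parties other than the plan or the plan sponsor. The data and threshold
# handling are hypothetical, not an official reporting format.

from collections import defaultdict

payments = [
    # (provider, payer, amount) -- hypothetical records
    ("Acme Recordkeeping", "XYZ Mutual Fund", 1500.00),
    ("Acme Recordkeeping", "Plan Sponsor", 20000.00),
    ("Smith Pension Consulting", "ABC Mutual Fund", 800.00),
    ("Smith Pension Consulting", "DEF Broker-Dealer", 600.00),
]

PLAN_PARTIES = {"Plan", "Plan Sponsor"}  # payments from these parties are reported directly

indirect_totals = defaultdict(float)
for provider, payer, amount in payments:
    if payer not in PLAN_PARTIES:
        indirect_totals[provider] += amount

for provider, total in indirect_totals.items():
    if total > 1000:
        print(f"Report source and nature of indirect compensation: {provider} (${total:,.2f})")
```

Whether the $1,000 threshold would apply per payer or in aggregate is not specified here; the sketch simply aggregates per provider to show the general idea.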
To ensure that participants have a tool to make informed comparisons and decisions among plan investment options, we recommended in our previous report that Congress consider amending ERISA to require all sponsors of participant-directed plans to disclose fee information on 401(k) investment options to participants in a way that facilitates comparison among the options. To better enable the agency to effectively oversee 401(k) plan fees, we recommended that the Secretary of Labor require plan sponsors to report a summary of all fees that are paid out of plan assets or by participants. To allow plan sponsors, and ultimately Labor, to provide better oversight of fees and certain business arrangements among service providers, we also recommended that Congress consider amending ERISA to explicitly require that 401(k) service providers disclose to plan sponsors the compensation that providers receive from other service providers. In response to our draft report, Labor generally agreed with our findings and conclusions. Specifically, Labor stated that it will give careful consideration to GAO’s recommendation that plans be required to provide a summary of all fees that are paid out of plan assets or by participants. Labor and SEC also provided technical comments on the draft, which we incorporated as appropriate. The pension plan universe has changed: 401(k) plans have emerged to cover most plan participants and the majority of plan assets. With this shift, participants now bear more responsibility for ensuring they have adequate income in retirement, emphasizing the importance of having sufficient information to make informed 401(k) investment decisions. Information about investment options’ historical performance is useful, but alone is not enough. Thus, giving participants key information for each of the plan’s investment options in a simple format—including fees, historical performance, and risk—will help participants make informed investment decisions within their 401(k) plan. In choosing between investment options with similar performance and risk profiles but different fee structures, expense ratio data may help participants build their retirement savings over time by avoiding investments with relatively high fees. Regulators, too, will need to have better information to provide more effective oversight, especially of the fees associated with 401(k) plans. Amending ERISA and updating regulations to better reflect the impact of fees and undisclosed business arrangements among service providers will help put Labor in a better position to oversee 401(k) plan fees. Furthermore, requiring plan sponsors to report more complete information to Labor on fees—those paid out of plan assets or by participants—would put the agency in a better position to effectively oversee 401(k) plans. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, or Tamara Cross, Assistant Director, Education, Workforce, and Income Security Issues at (202) 512-7215 or bovbjergb@gao.gov. Individuals making key contributions to this testimony include Daniel Alspaugh, Monika Gomez, Michael P. Morris, Rachael Valliere, Walter Vance, and Craig H. Winslow.
Over the past two decades there has been a noticeable shift in the types of plans employers are offering employees. Employers are increasingly moving away from traditional defined benefit plans to what has become the most dominant and fastest growing type of defined contribution plan, the 401(k). As more workers participate in 401(k) plans, they bear more of the responsibility for funding their retirement. Given the choices facing participants, specific information about the plan and plan options becomes more relevant than under defined benefit plans because participants are responsible for ensuring that they have adequate income at retirement. While information on historical performance and investment risk for each plan option is important for participants to understand, so too is information on fees because fees can significantly decrease participants' retirement savings over the course of a career. Because employees bear more responsibility for funding their retirement under 401(k) plans, Congress asked us to discuss the prevalence of 401(k) plans today and to summarize our recent work on providing better information to 401(k) participants and the Department of Labor (Labor) on fees. GAO's remarks today will focus on (1) trends in the use of 401(k) plans and (2) the types of fees associated with these plans. More active participants are now in 401(k) plans than in other types of employer-sponsored pension plans, a trend that has accelerated since the 1980s. Now, 401(k) plans represent the majority of all private pension plans; they also cover the most participants and hold the most assets. These plans offer a range of investment options, but equity funds—those that invest primarily in stocks—accounted for nearly half of 401(k) assets at the close of 2005. Most 401(k) plans are participant-directed, meaning that a participant is responsible for making the investment decisions about his or her own retirement plan contributions. Inadequate disclosure and reporting requirements may leave participants without a simple way to compare fees among plan investment options, and Labor without the information it needs to oversee fees and identify questionable 401(k) business practices. The Employee Retirement Income Security Act (ERISA) of 1974 requires 401(k) plan sponsors to disclose only limited information on fees. Participants must collect various documents over time and may be required to seek out some documents in order to get a clear picture of the total fees that they pay. Furthermore, the documents that participants receive do not provide a simple way to compare fees—along with risk and historical performance—among the investment options in their 401(k) plan. The information reported to Labor does not identify all fees charged to 401(k) plans and therefore has limited use for effectively overseeing fees and identifying undisclosed business arrangements among consultants or service providers. As a result, participants may have more limited investment options and pay higher fees for these options than they otherwise would.
In 1992, DOD was the first federal agency authorized to offer buyouts to its employees, and it has been using buyouts since January 1993 to reduce the size of its workforce. On March 30, 1994, the FWRA authorized buyouts for other executive agencies and amended DOD’s authority. For both DOD and other executive agencies, employees generally were offered a buyout payment that was the lesser of $25,000 or their severance pay entitlement. According to OPM, in fiscal year 1996, $24,833 was the average buyout amount for regular optional retirements; $24,949 was the average for early retirements; and $14,499 was the average for resignations. The legislation granting DOD its initial buyout authority in 1992 did not impose any buyout-related conditions or repayment provision on buyout recipients who were reemployed by the federal government. However, DOD’s policy was that it would not rehire DOD buyout recipients within 1 year of their separation, unless an exception was approved by a high-level DOD official. In 1994, the FWRA required buyout recipients from federal agencies that were under the act’s authority, including DOD, to repay their buyouts if they returned to federal employment within 5 years of their separation. DOD buyout recipients had to repay their buyouts if they were reemployed as civil servants, but not if they were reemployed under personal services contracts. Non-DOD buyout recipients had to repay their buyouts if they returned directly to federal employment or if they were employed under a contract that was expressly identified or administered as a personal services contract. Also, under the FWRA, buyout recipients who were obligated to repay their buyouts could do so after an agency hired them. However, the agency rehiring the buyout recipient could seek a repayment provision waiver for the employee from OPM, in certain situations. Under new buyout authority enacted in 1996, the repayment provision was changed. Among other things, employees who accept buyouts under the 1996 authority must repay the entire buyout before their first day of federal reemployment, and there is no authority for waivers. Congress also passed other laws providing specific statutory authority for repaying buyouts for employees in selected agencies. The agency-specific buyout authorizations generally require that recipients repay their buyouts if they rejoin the federal workforce. In addition, under the time frames of the current buyout laws, agencies will need to verify buyout recipients’ compliance with repayment provisions through 2006. (See app. I for additional information on selected buyout laws enacted from 1992 to 1997.) 
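The repayment rules described above reduce to two simple tests: the buyout payment itself was generally the lesser of the employee's severance pay entitlement or $25,000, and under the FWRA a recipient who returns to federal employment within 5 years of separation must repay the buyout or obtain an OPM waiver. The sketch below illustrates those two tests only; the function names, sample dates, and the leap-year simplification are assumptions for illustration, not an official OPM or agency calculation.

```python
# Minimal sketch of the FWRA repayment test described above; dates and the
# helper functions are illustrative, not an official OPM or agency system.

from datetime import date

def buyout_amount(severance_entitlement: float, cap: float = 25000.0) -> float:
    """A buyout payment was generally the lesser of the severance entitlement or $25,000."""
    return min(severance_entitlement, cap)

def repayment_required(separation_date: date, reemployment_date: date) -> bool:
    """Under FWRA, repayment (or an OPM waiver) is required if the recipient
    returns to federal employment within 5 years of separation.
    (Uses a simple year replacement; ignores the Feb. 29 edge case.)"""
    five_years_later = separation_date.replace(year=separation_date.year + 5)
    return reemployment_date < five_years_later

# Example: an employee who separated in mid-1994 and is rehired in early 1997
print(buyout_amount(severance_entitlement=31000.0))             # 25000.0
print(repayment_required(date(1994, 6, 30), date(1997, 3, 1)))  # True
```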
The FAR lists the following descriptive elements to be used as a guide in assessing whether a proposed contract is a personal services contract: “(1) performance on site; (2) the principal tools and equipment are furnished by the government; (3) the services are applied directly to the integral effort of agencies or agency components to further their assigned function or mission; (4) the performance of comparable services and the meeting of comparable needs in the same or similar agencies using civil service personnel; (5) the need for the type of service provided can reasonably be expected to last beyond 1 year; (6) the requirement of government direction or supervision of contractor employees because of the inherent nature of the service, or the manner in which it is provided, in order to adequately protect the government’s interest, retain control of the function involved, or retain full personal responsibility for the function supported in a duly authorized federal officer or employee.” The FAR also states that the key question to consider in determining whether a personal services contract exists is the following: Will the government exercise relatively continuous supervision and control over the contract personnel performing the contract? From January 1993 to June 1995, OPM’s CPDF data showed that 87,743 federal employees took buyouts and that federal agencies reemployed 394 of the buyout recipients as civil servants. However, according to CPDF data as of September 1996, the number of employees who had accepted buyouts grew to 128,467, or an additional 40,724 employees. The number of buyout recipients who are working as contractors to federal agencies or as contractor employees is unknown because no governmentwide data are available. To gather information concerning the 23 buyout recipients who were discussed in our October 1996 letter, we sent letters on November 8, 1996, to the OIGs of the 9 federal agencies identified in OPM’s CPDF data as employing these individuals. We asked the OIGs whether the buyout recipients had returned to federal employment and, if so, whether they had repaid the buyout or met DOD’s reemployment policy. We limited the scope of our work to the buyout recipients who were reemployed as federal employees between January 1993 and June 1995. Among other things, our letters to the OIGs identified the buyout recipients by name and social security number and asked specific questions concerning their reemployment. We asked the OIGs to review the 23 buyout recipient cases, provide us with information clarifying conflicting data and apparent violations, and take any needed action, as appropriate. In cooperation with the OIG offices, we contacted selected federal agencies’ personnel officials about the status of some cases. We also followed up with the OIGs and agency personnel officials by providing them with additional information, such as detailed case information and OPM documentation. To determine whether the 9 federal agencies that were identified in the CPDF data as employing the 23 buyout recipients had internal control procedures in place to provide a reasonable assurance of compliance with buyout reemployment requirements, we asked the agencies to provide us with copies of these procedures. Specifically, we asked the agencies for copies of their internal control procedures on the reemployment of buyout recipients either as members of the civil service or as personal services contract employees.
To determine whether other selected agencies had internal control procedures for the reemployment of buyout recipients under personal services contracts, we first had to decide which agencies to select for the review. To do so, we reviewed OPM’s August 1996 interim report to Congress on the reemployment of buyout recipients. In that report, OPM said that none of the agencies in its review reported cases involving buyout recipients’ returning to work under personal services contracts, but that several agencies reported having completed or having begun reviews and follow-ups of their contracting arrangements. In its report, OPM identified DOD and DOT as having conducted reviews of their contracting arrangements and GSA as having a review under way. DOD reported to OPM that it had not reemployed any buyout recipients under personal services contracts. However, OPM asked DOD to provide an updated report because DOD had looked only at DOD buyout recipients and not recipients from other agencies. As a result, we excluded DOD from our review of this matter. We decided to review the OIG audit reports of DOT and GSA to (1) gain an understanding of the experiences agencies have had with buyout recipients’ being hired under personal services contracts and (2) learn what kind of internal control procedures may be applicable. Because governmentwide data on buyout recipients hired under personal services contracts do not exist and because of time constraints, we limited the part of our review concerning internal controls for personal services contracts to the agencies that had the two OIG audits. We also obtained OPM’s 1994 and 1996 written guidance as well as OPM’s list of possible options that agencies could take to help ensure compliance with the buyout repayment provision, and we discussed this guidance with OPM officials. We did our review in Washington, D.C., from October 1996 to October 1997 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Director of OPM and to the heads of the 9 agencies identified as employing the 23 buyout recipients in our review and requested their comments. These comments are discussed at the end of this report. Of the 23 cases that we asked the OIGs to examine, the agencies confirmed that 9 cases violated FWRA’s repayment provision and that 2 cases violated DOD’s reemployment policy. The remaining 12 cases were not violations, but they were identified in the CPDF because of inaccurate data. Also, while researching one of the cases we asked about, the U.S. Department of Agriculture’s (USDA) OIG found an additional case in which a buyout recipient had been reemployed without repayment. In addition, OPM found FWRA repayment provision violations in two other research efforts it undertook, but it is unclear whether these violations are in addition to the violations that we found. The 9 federal agencies identified in the CPDF data as employing the 23 buyout recipients provided information showing that a violation of FWRA’s repayment provision or DOD’s policy had occurred in 11 of the 23 cases. (See app. II for the employment status of the 23 buyout recipient cases, by federal agency.) The agencies reported that in 9 of the 11 cases, FWRA’s repayment provision was violated. These agencies also reported that they hired the nine buyout recipients without seeking repayment of the buyout. The remaining two cases were violations of DOD’s reemployment policy, which did not require buyout repayment.
After determining that a violation had occurred, the agencies varied in how they responded. In general, they based their actions on whether they still employed the buyout recipient. For example, of the nine cases with violations of FWRA’s repayment provision, six cases involved individuals who were still employed, and the hiring agencies arranged for the buyout recipients to make repayments. In the three cases where the buyout recipients were no longer in their employ, the hiring agencies billed the recipient in one case and took no action in the other two cases. Although the agencies did not seek repayment in these two cases, the law requires that a buyout recipient who accepts reemployment within 5 years of separation repay the buyout unless a waiver is granted by OPM. Therefore, if a buyout recipient is reemployed without repaying the buyout, the hiring agency is to seek recovery of the debt. The hiring agency has this obligation even though any money it recovers must go to the agency that originally paid the buyout, which may not be the agency rehiring the buyout recipient. OPM has instructed agencies that when a buyout recipient is reemployed by another agency, the two agencies should coordinate efforts to collect the buyout amount. In the two cases that violated DOD’s reemployment policy, DOD’s OIG reported that the Department had rehired both buyout recipients in violation of its policy not to rehire such individuals within 1 year of their separation unless a high-level DOD official grants an exception. DOD subsequently waived its policy for one buyout recipient; the other recipient resigned. Both of the buyout recipients received their buyouts before FWRA was enacted; therefore, the repayment provision did not apply. Agency officials told us that 12 of the cases we identified were not repayment violations. In 6 of the 12 cases, the federal agencies said that they mistakenly had submitted inaccurate data to OPM’s CPDF, which we used in our previous review to identify potential violators. The data that the agencies submitted to the CPDF showed that they had reemployed six buyout recipients. However, the agencies reported that, in a follow-up review of their records, they discovered that they had not reemployed these recipients. For example, DOD said that three buyout recipient cases it reported involved individuals who had retired after receiving the buyouts but that the recipients were identified in Department data as employees, even though they were never rehired. DOD explained that the positions the three individuals once held were part of a large transfer of positions within the Department and that an error was made in recording the transfer. For the remaining 6 of the 12 cases, the agency officials said that they had no record of ever employing the buyout recipients. At the time of our previous review, the CPDF erroneously listed these agencies as the recipients’ employers. While researching the case we inquired about, USDA’s OIG reported finding another buyout recipient whom USDA had employed who was required to repay the buyout as a condition of reemployment but who had not done so. According to the OIG, USDA planned to bill the buyout recipient to recover the buyout debt. USDA had hired this buyout recipient after the review period covered in our October 1996 letter (i.e., Jan. 1993 through June 1995). OPM conducted two research efforts on reemployed buyout recipients; one collected data using a survey, and the other used an analysis of CPDF data.
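A CPDF-style check of the kind OPM and GAO performed is, at bottom, a cross-match: separation records that carry a buyout incentive are compared with later hiring records for the same individual, and any rehire within the 5-year window without repayment is flagged for follow-up. The sketch below illustrates that matching logic under hypothetical record layouts; the use of nature-of-action code 825 as the separation-incentive marker follows OPM's guidance quoted later in this report, but the field names and sample records are illustrative only.

```python
# Simplified sketch of a CPDF-style cross-match: flag buyout recipients who
# reappear in later employment records within 5 years of separation.
# Record layouts are hypothetical; NOAC 825 as the separation-incentive marker
# follows OPM's guidance quoted later in this report.

from datetime import date

buyout_separations = [
    # (ssn, agency_that_paid_buyout, separation_date, nature_of_action_code)
    ("123-45-6789", "DOD", date(1994, 9, 30), "825"),
    ("987-65-4321", "GSA", date(1995, 1, 15), "825"),
]

accessions = [
    # (ssn, hiring_agency, accession_date, buyout_repaid)
    ("123-45-6789", "DOT", date(1996, 4, 1), False),
]

def within_five_years(separated: date, rehired: date) -> bool:
    # Simple year replacement; ignores the Feb. 29 edge case.
    return rehired < separated.replace(year=separated.year + 5)

separations_by_ssn = {rec[0]: rec for rec in buyout_separations if rec[3] == "825"}

for ssn, hiring_agency, rehire_date, repaid in accessions:
    sep = separations_by_ssn.get(ssn)
    if sep and within_five_years(sep[2], rehire_date) and not repaid:
        print(f"Possible FWRA repayment violation: {ssn} rehired by {hiring_agency}; "
              f"buyout paid by {sep[1]}; repayment or OPM waiver required.")
```

As the findings above show, a match of this kind only identifies possible violations; follow-up with the agencies is still needed to rule out inaccurate or outdated records.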
The results of the two research efforts are summarized in OPM’s 1996 interim report. OPM’s survey, which was conducted from March 30, 1994, through May 20, 1996, and included the heads of all of the cabinet-level agencies and most smaller independent agencies, reported that 46 of the 80 buyout recipients whom the agencies reemployed had possibly not repaid their buyouts. According to OPM’s survey, 40 of the possible 46 violations were in Defense agencies (see app. IV for details), and the remaining 6 possible violations were in non-Defense agencies. OPM said that 34 of the 80 cases were in compliance with the FWRA repayment provision. Although 58 agencies responded to the survey, according to OPM, 10 agencies that had used buyouts did not respond in time for OPM to include them in the interim report. OPM also did a study of possible buyout recipients who may have been reemployed during March 30, 1994, through June 30, 1995. This study, which used CPDF data, found a possible 49 reemployed buyout recipients, 9 of whom had violated the FWRA’s repayment provision. For the remaining 40 cases, OPM determined that 38 cases had complied with the provision, and that 2 cases needed further resolution. As of October 1997, OPM officials said that they were still verifying the number of possible buyout violations in both of its research efforts, and that there was no way to be certain of the differences between the number of reemployed buyout recipients identified under OPM’s two research efforts or with our review. This uncertainty is because the OPM survey effort did not collect names and social security numbers, which could have been compared with the CPDF data in either OPM’s effort or our review for verification. Our review and OPM’s study both were based on CPDF data, and the time frame of OPM’s study was encompassed in our review. However, the methodologies used to extract the data were not the same, which provided different results. For instance, although OPM’s study confirmed 10 FWRA repayment provision violations and we confirmed 9, only 5 of the violations we confirmed were also confirmed by OPM; consequently, 5 of the violations OPM confirmed were not confirmed by our review. Federal agencies have an obligation to ensure that the FWRA repayment provision is met when buyout recipients are reemployed as civil servants, or when they work under contract expressly identified or administered as personal services contracts for the government. Agency management is responsible for establishing effective internal controls to help ensure compliance with laws and regulations. Internal controls consist of policies and procedures used to provide reasonable assurance that (1) goals and objectives are met; (2) resources are adequately safeguarded, efficiently used, and reliably accounted for; and (3) laws and regulations are being followed. However, none of the 9 agencies that we asked to provide the status of the 23 buyout recipient cases had adequate internal control procedures in place to provide reasonable assurance that the FWRA repayment provision was met. This was the case despite OPM’s 1994 and 1996 guidance to agencies on the FWRA repayment provision as well as OPM’s list of options explaining steps agencies could take to identify returning buyout recipients, steps that OPM said were inexpensive to implement. 
In GSA and DOT, which entered into contracts involving buyout recipients, the OIGs reported that GSA and DOT’s Federal Aviation Administration (FAA) did not have adequate procedures to prevent violations of the FWRA repayment provision. According to both OIGs, the internal control procedures of those agencies could not be used to determine whether contracts were administered as personal services contracts and, therefore, whether the contract personnel who were buyout recipients were subject to the FWRA repayment provision. To help ensure compliance with the FWRA repayment provision, the Department of State created a form for job applicants to complete (and sign and date) indicating whether they had received a buyout within the previous 5 years. By having an applicant certify his or her buyout status in writing, the State Department would have documented evidence of the applicant’s response to the question of receiving a buyout should a question arise after the applicant was hired. However, according to a State Department official, if an applicant indicated that he or she was a buyout recipient, the agency had no personnel procedures in place to ensure that the appropriate buyout repayment provision was satisfied. Such certification was not required by the other 8 agencies from which we requested information on the 23 buyout recipient cases. According to OPM, a fundamental step for agencies to help ensure compliance with the repayment provisions of the various buyout authorities is for the agencies to identify whether job applicants are former federal employees and, if so, whether they had received a buyout. OPM’s instructional pamphlet for job applicants, which is entitled Applying for a Federal Job, states that individuals may apply for federal employment using either of two documents. Applicants may use a résumé or the Optional Application for Federal Employment - Optional Form 612, which asks individuals to provide information about their work history, including dates of employment, and to certify whether they had ever been a civilian employee with the federal government. According to OPM, when applicants indicate on any of these applications for employment that they have prior federal service, hiring officials are to ask the applicant whether he or she received a buyout. Job applicants who submit résumés are to provide the same information that is requested on the Optional Application for Employment—that is, work history and whether they had ever been federal employees. Of course, federal agencies must depend on job applicants’ truthfully reporting such information. However, the Optional Application for Employment and the instructional pamphlet for résumés state that providing false information is grounds for not hiring the applicant, for firing the applicant after he or she is employed, and for imposing a fine or prison sentence on the applicant. In our discussions with OPM officials, they said that having job applicants complete a certification form, like the one developed by the State Department, would help agencies identify applicants who were buyout recipients. The officials explained that information on whether an applicant had received a buyout may not be readily available to the hiring agency. For example, the information may be in the individual’s official personnel folder, which the hiring agency may not receive for several weeks, or in a computer system that is located at the agency that paid the buyout.
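Taken together, the State Department's certification form and OPM's guidance imply a simple screening sequence at the point of hire: determine whether the applicant has prior federal service, whether a buyout was received within the last 5 years, and, if so, which repayment rule applies. The sketch below illustrates that sequence only; the question structure, authority labels, and return messages are hypothetical, and the governing authority for a particular buyout would control in practice.

```python
# Illustrative intake-screening sketch based on the certification approach
# described above (State's form and OPM's guidance). Labels and messages are
# hypothetical; agencies' actual procedures vary.

from typing import Optional

def screen_applicant(prior_federal_service: bool,
                     received_buyout_within_5_years: bool,
                     buyout_authority: Optional[str]) -> str:
    """Return the hiring office's next step for a job applicant."""
    if not prior_federal_service or not received_buyout_within_5_years:
        return "No buyout repayment condition applies."
    if buyout_authority == "FWRA (1994)":
        return ("Repayment or an OPM waiver is required; coordinate collection "
                "with the agency that paid the buyout.")
    # The 1996 and later authorities generally require full repayment before
    # the first day of reemployment, with limited or no waiver authority.
    return "Full repayment must be verified before the applicant's first day of work."

print(screen_applicant(True, True, "FWRA (1994)"))
print(screen_applicant(True, True, "1996 authority"))
print(screen_applicant(False, False, None))
```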
In addition, such certification would assist agencies that were hiring individuals who had received buyouts from other agencies, especially those agencies that do not participate in the CPDF, such as those in the judicial and legislative branches of government. For agencies covered by 5 C.F.R. 7.2, it is mandatory that they provide OPM with personnel information for use in the CPDF, among other things, unless specifically exempted by statute. The need for hiring officials to readily know whether a job applicant is a former federal employee who took a buyout was made even more important by the enactment of the 1996 legislation. As previously mentioned, this legislation requires buyout recipients under its authority to repay the full buyout amount before the employee’s first day of work. Of the nine agencies, our review showed that only USDA and the Department of the Treasury had issued guidance notifying component heads of the FWRA repayment provision. In addition, as a result of our inquiry into apparent FWRA buyout repayment violations at these components, one component in each of these agencies—USDA’s Animal and Plant Health Inspection Service (APHIS) and Treasury’s Internal Revenue Service (IRS)—developed and issued procedures that could be used to help prevent future buyout violations, according to agency officials. USDA issued notification of the FWRA repayment provision on July 18, 1996, and Treasury issued its notification on February 10, 1997. Each agency’s notification primarily consisted of OPM’s guidance entitled Reemployment, Personal Services Contracts, and the Repayment of Voluntary Separation Incentives, which was dated March 1996. In addition, Treasury’s procedures included OPM’s list of possible options for agencies but did not establish procedures to implement the options. According to its officials, APHIS developed internal control procedures, which were adopted on November 21, 1996, that require its personnel officials to screen job applicants and check new employees to help ensure that APHIS and its employees are in compliance with FWRA’s and other buyout authorities’ repayment provisions. However, although the procedures APHIS officials provided to us require personnel officials to review job applications to identify whether individuals had previous federal service and, if so, took voluntary buyouts, APHIS did not have procedures that personnel officials should follow if they identified such service or receipt of a buyout. In addition, APHIS did not have the applicants certify their buyout status. IRS issued optional procedures on December 24, 1996, to help ensure that rehired buyout recipients comply with the repayment provision under a particular buyout authority. These procedures were based, in part, on OPM’s list of options. However, IRS’ procedures are not required and, therefore, cannot ensure that the IRS is in compliance with FWRA’s and other buyout authorities’ repayment provision. According to a DOD spokesperson, who also represented the Departments of the Army, Navy, and Air Force, DOD had programs in place that addressed some of OPM’s optional procedures to help ensure compliance with the FWRA repayment provision. One of these programs was “Operation Mongoose,” which was created to prevent and detect financial fraud in DOD. The program, which was implemented in June 1994, compares DOD’s automated data with those of other agencies to point out probable fraud and ensure that erroneous payments are not being made. 
In March 1995, DOD first used the program to detect DOD buyout takers who had returned to work in the federal government. In addition, DOD mailed two publications to its personnel directors, which are also available via the Internet and E-mail, to notify them of various personnel matters, including their responsibilities for buyout repayment provisions. Although DOD’s efforts to identify financial fraud and to notify personnel directors are useful steps, DOD did not have procedures in place during the employment application process to (1) identify whether job applicants were buyout recipients and (2) help ensure buyout repayment as required by the FWRA repayment provision and the more recent buyout authorities. Neither the Department of Justice nor Treasury had internal control procedures to help ensure that buyout recipients who return to federal employment comply with the FWRA repayment provision. A Department of Veterans Affairs (VA) official said that the Department had no written guidance that focused on buyout recipients, and that the only way that VA determines whether an individual is a buyout recipient is to review the individual’s Notification of Personnel Action (Standard Form 50) form, which was generated from his or her personnel office. A Justice official also said that the Department had no written procedures concerning buyout recipients who return to federal reemployment. At DOT, several former FAA employees who had received buyouts returned to work at FAA as employees of DOT contractors. Because of telephone “hot line” complaints relating to the legality of those employees’ return, the DOT OIG examined and reported on whether FAA and the rest of DOT were complying with the FWRA repayment provision. Partly as a result of the DOT OIG’s report, GSA’s OIG examined and reported on whether any former GSA employees who had received buyouts had returned to GSA as employees of contractors. The two audits found that (1) 27 former DOT and GSA employees were working under contracts that, although not identified as personal services contracts, were being administered as such and (2) these employees had not repaid or arranged to repay their buyouts, as required by the FWRA. The DOT OIG examined 260 cases of buyout recipients—20 former FAA employees and 240 former employees of other DOT agencies—who had returned to work for DOT contractors. The OIG reported violations in some of the FAA cases but did not find any problems with the other DOT cases. According to the OIG’s 1996 report, FAA allowed 17 of the 20 former employees to return to work under contracts, which were administered as personal services contracts, without meeting the FWRA buyout repayment provision. The OIG’s report attributed these 17 violations to inadequate internal control procedures at FAA and inadequate enforcement of FAA’s guidance by its contracting officers. The report also stated that buyout payments totaling $425,000 for the 17 employees should be recouped, and that the OIG had referred these violations to DOT’s Office of Investigations for coordination with the United States Attorneys to begin the process of seeking buyout repayments. The GSA OIG reviewed the cases of 39 former GSA employees who had received buyouts and were employed by contractors working for GSA. Of the 39 cases, the OIG determined that 10 employees were, in effect, working under personal services contracts without meeting the FWRA buyout repayment provision. 
As in the case of the 17 employees at FAA, these 10 employees were hired under contracts that were actually being administered as personal services contracts. The OIG attributed these violations to GSA’s lack of adequate policy guidance for defining a personal services contract. The OIG said that program managers, buyout recipients, and contracting officers did not fully understand what a personal services contract was or under what conditions a buyout recipient could return to federal employment without repaying the buyout. The GSA OIG also said the risk was increasing that more GSA buyout recipients may return to work for GSA under personal services contracts without repaying their buyouts. The OIG said that a number of additional buyout recipients had already returned to work under various contracts, some of which were being administered as personal services contracts. The OIG explained that GSA staffing had decreased 21 percent overall from its 1993 level, which would require GSA program offices to reduce program services, contract out work to maintain workload, or do both. Increased contracting, according to the GSA OIG, heightens the risk of buyout recipients’ returning under personal services contracts. However, the OIG did not believe that action should be taken against the 10 buyout recipients it found in violation because the OIG did not find that any of the instances appeared to be willful or deliberate attempts to circumvent the FWRA repayment provision. In fact, the OIG added, the buyout recipients took specific steps to try to comply with the FWRA, such as not performing the same functions, not working in their former offices, and not working as a contractor directly for the government. Although we did not attempt to determine whether any of the GSA contracts were, in effect, personal services contracts, if in fact they were, then the FWRA repayment provision would have been violated and the buyout debt would have to be recovered. The DOT and GSA OIG audits determined that violations of the FWRA repayment provision have occurred under agency contracts. However, the audits might have uncovered more violations if they had looked for all buyout recipients that were employed at the two agencies under service contracts and had not limited their search to their agency’s buyout recipients. As illustrated by the DOT and GSA audits, violations of the FWRA repayment provision may occur not only under contracts expressly identified as personal services contracts but also in connection with contracts that are administered as personal services contracts. As previously mentioned, OPM provided optional guidance to agencies on ways to help ensure compliance with the buyout repayment provision, and some of these suggestions pertained specifically to contracting. In its guidance, OPM suggested that agencies issue their own guidance to personnel involved in the oversight and management of contracts (e.g., contracting officers) and have them monitor compliance with the buyout repayment provisions, require contractors to identify and certify that contract employees who have received buyouts are not working in violation of the law, and require periodic spot checks of contracting personnel to help ensure compliance. These OPM suggestions were also recommended to some extent by the DOT and GSA OIGs in their reports. 
For example, the DOT OIG recommended that (1) FAA identify all of its employees who took buyouts and returned to work for FAA as employees of contractors and (2) the circumstances of each case be evaluated to determine whether FAA and the employees who took buyouts complied with the FWRA. The GSA OIG said that GSA’s policies and procedures for implementing the FWRA should be clarified, and that the clarification should include information explaining how a contract that is not intended to be a personal services contract can become one and what key actions to take if that happens. According to the GSA OIG, the clarified policies and procedures should be distributed to all employees who are scheduled to leave under the buyout program, all program managers, and all contracting officers. Our review found violations of the FWRA repayment provision and DOD’s reemployment policy as well as a lack of internal controls to help prevent such violations. OPM’s list of possible options that agencies could take to help ensure compliance with buyout repayment provisions generally was not implemented by the agencies we studied, even though OPM officials believe that doing so would not be costly to agencies. Because agency management is responsible for ensuring its compliance with laws and regulations, it is also responsible for establishing effective internal controls to avoid violations of such laws and regulations, including the FWRA repayment provision. Under the time frames of the current buyout laws, every federal agency will need to verify buyout recipients’ compliance with repayment provisions of the various buyout authorities through 2006. According to OPM, a fundamental step for agencies to help ensure compliance with the repayment provisions of the various buyout authorities is for the agencies to identify whether job applicants are former federal employees and, if so, whether they had received a buyout. Identifying buyout recipients who work under contracts with the government that are not expressly identified as personal services contracts, but are administered as such, appears to be more difficult than identifying buyout recipients who return directly to federal service. In the cases of DOT and GSA, their OIGs found that the agencies’ controls did not adequately identify contracts administered as personal services contracts. Thus, DOT and GSA found it difficult to identify buyout recipients who had returned under personal services contracts. For an agency to determine that a contract employee must comply with a repayment provision, it must first determine that the employee’s contract is expressly identified as, or is being administered as, a personal services contract. The need for agencies to be able to better recognize the administration of contracts as personal services contracts was pointed out by the audit report of GSA’s OIG. The audit report said that, as downsizing occurs, agencies are turning to contractors to accomplish tasks, and that some employees who leave agencies because of downsizing are working for those contractors. As a result, the DOT and GSA OIGs made recommendations in their reports to help ensure that their employees and contractors know what constitutes a personal services contract and how the identification of buyout recipients under such contracts could help prevent future repayment provision violations. 
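Determining whether a contract is being administered as a personal services contract is ultimately a judgment call against the FAR elements quoted earlier and the key question of relatively continuous government supervision. The sketch below shows one hypothetical way a contracting office might record that screening; the element list is paraphrased from the FAR quotation above, and the decision rule (flagging when continuous supervision or several elements are present) is an illustrative assumption, not a threshold stated in the FAR or by the OIGs.

```python
# Illustrative sketch of screening a contract against the FAR descriptive
# elements quoted earlier in this report. The flagging rule is an assumption
# for illustration, not a rule stated in the FAR or by the OIGs.

FAR_ELEMENTS = [
    "performance on site",
    "principal tools and equipment furnished by the government",
    "services applied directly to the integral effort of the agency's mission",
    "comparable services performed in similar agencies by civil service personnel",
    "need for the service expected to last beyond 1 year",
    "government direction or supervision of contractor employees required",
]

def review_contract(elements_present: set, continuous_supervision: bool) -> str:
    """elements_present holds indexes (0-5) of the FAR elements that apply."""
    if continuous_supervision or len(elements_present) >= 3:
        return ("Refer for review: contract may be administered as a personal "
                "services contract, so buyout repayment provisions may apply "
                "to contractor employees who received buyouts.")
    return "No strong personal-services indicators; document the assessment."

# Example: a contract performed on site under continuous government supervision
print(review_contract({0, 5}, continuous_supervision=True))
```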
To help ensure that agencies establish procedures to comply with the buyout repayment provisions of the FWRA and other buyout authorities, we recommend that the Director of the Office of Personnel Management (OPM) take the following two actions to identify potential violations of the provisions. First, promulgate regulations requiring agencies to identify buyout recipients who (1) are applying to return or have returned directly to federal employment or (2) are applying to work for or already work for the federal government under a contract that is, by its terms, a personal services contract or is administered as such, and require them to repay their buyouts. In doing so, the Director may want to consider OPM’s list of possible options (see app. III) that agencies could take to help ensure compliance with the buyout repayment provisions. Second, create a form that job applicants would be required to complete to certify whether they were buyout recipients and, if so, from which agency they received the buyout. The Director may want to consider requiring that the form (1) be attached to employment résumés or to the Optional Application for Employment or (2) be completed only by those applicants to whom agencies are considering making job offers. We provided a draft of this report for review and comment to the Director of OPM and the heads of the 9 agencies from which we had requested information on the 23 buyout recipient cases. In a letter dated August 15, 1997, the Director of OPM said that OPM does not oppose our recommendations but that it does question the need for these actions because of the extent of the cooperation it has with the federal agencies. The Director said agencies have been extremely cooperative in responding to OPM’s requests for information regarding reemployed buyout recipients, regardless of whether the recipient is in violation of the FWRA repayment provision. The Director also said that we overestimated the scale of the FWRA repayment provision problem because we double-counted the number of violations by adding the number of violations we identified to those of OPM’s research efforts, although both of us very likely identified the same violations. We also received cooperation from agencies in tracking down the status of the 23 cases we reviewed. However, we contacted these agencies after the buyout recipients were rehired. Although the information the agencies provided may serve to assist in identifying violations after they have occurred, it does not prevent violations from occurring. Preventing violations is especially important for the more recent buyout laws, which require buyout recipients to repay their buyouts before their first day of reemployment with the federal government or employment under a personal services contract. Therefore, we continue to believe that agencies are obligated to have internal controls that are adequate to reasonably ensure compliance with the buyout provisions. In addition, we agree that our draft report included some instances of apparent double-counting, and we have made the appropriate changes in this report. Our review and one of OPM’s studies in its interim report were based on CPDF data, and the time frame of OPM’s study was encompassed in the period we reviewed. However, the methodologies used to extract the data were not the same, which provided different results.
For instance, although OPM’s CPDF-based study confirmed 10 FWRA repayment provision violations and we confirmed 9, only 5 of the violations we confirmed were the same as those confirmed by OPM. Consequently, five of the violations that OPM confirmed were not confirmed by our review. In addition, the Director made a number of technical comments regarding accuracy or context in the draft report; we made these changes in this report where appropriate. See appendix V for a reprint of the OPM letter and our additional comments. On August 7, 1997, we met with the Director of Staffing and Career Development, Office of the Deputy Assistant Secretary of Defense (Civilian Personnel Policy), who provided oral comments on a draft of this report for the Department of Defense (DOD) and the Departments of the Army, Navy, and Air Force. The Director believed it would not be cost effective to comply with OPM’s suggested option to contact the approximately 95,000 buyout recipients who had left DOD and to remind them of the FWRA repayment provision, given the very small numbers of detected violations. However, she agreed that before prospective employees are hired, they should be required to certify whether they have received a buyout from a previous federal employer. Although we believe OPM’s suggested options are useful indicators of the steps that can be taken to help ensure compliance with buyout repayment provisions, we do not suggest that all of OPM’s options should be implemented by every agency. We believe that documenting whether prospective employees have received buyouts is a sound step to help ensure compliance, but it must be linked to procedures to help ensure that those who have received such payments repay them to satisfy the appropriate buyout provision. We received written comments on a draft of this report from the U.S. Department of Agriculture (USDA) in a letter dated August 8, 1997, from the Director of its Office of Human Resources Management. The Director provided no specific comments on our recommendations. However, he did express concern regarding what he perceived as an overemphasis on USDA in the draft report and an underemphasis on difficulties that the Department faced. Changes were made to this report to address these concerns as appropriate. See appendix VI for a reprint of USDA’s letter and our response to specific comments. We met with the Associate Deputy Assistant Secretary for Human Resources Management of the Department of Veterans Affairs (VA) on August 14, 1997, to obtain oral comments on the draft report. She said that VA was not against regulations as long as they are not prescriptive and inflexible. VA also agreed that a certification form could be useful. The Department of Justice’s Assistant Attorney General for Administration said, in a letter dated August 7, 1997, that Justice agreed with the recommendations in the draft report. The Assistant Attorney General added that Justice will continue to provide guidance to its organizational components on the need to exercise caution in rehiring buyout recipients. He said that Justice also intends to work closely with the Department’s Justice Management Division’s Procurement Services Staff to provide components with clear guidance on the definition of a “personal services contract.” See appendix VII for a reprint of Justice’s letter. In a letter dated August 13, 1997, the Department of the Treasury’s Assistant Director of the Office of Personnel Policy said that Treasury had no comment on the draft report. 
See appendix VIII for a reprint of the Treasury letter. We spoke with the Department of State’s GAO Liaison on August 22, 1997, to obtain oral comments on the draft report. She said that the State Department wanted us to define the “certain situations” we referred to in which agencies could seek a waiver of repayment from OPM. We resolved this comment by providing additional information. She said that the State Department had no other comments. As arranged with your office, unless you announce the contents of this report earlier, we plan no further distribution until 15 days after its issue date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of interested congressional committees, the Director of OPM, the heads of the nine agencies included in our review, and other interested parties. Upon request, we will also make copies available to others. The major contributors to this report are listed in appendix IX. Please call me on (202) 512-8676 if you have any questions.

Appendix I summarizes selected buyout laws enacted from 1992 to 1997. For each authority, the entries below show the maximum buyout payment, the period during which buyouts could be offered, and the reemployment and repayment requirements.

The lesser of severance pay or $25,000. Separations must be made by September 30, 2001. DOD’s initial buyout legislation contained no reemployment requirements. However, it was DOD policy that employees could not be reemployed by any DOD installation in any capacity for 12 months following their separation. No restrictions were placed on their ability to return to non-DOD agencies. FWRA amended DOD’s buyout authority so that DOD employees who received a buyout on or after March 30, 1994, must repay the buyout or obtain a waiver from OPM when they return to federal employment within 5 years. DOD employees do not have to repay the buyout if they return to federal employment under personal services contracts.

The lesser of severance pay or $25,000. March 30, 1994, through March 31, 1995. Delayed buyouts were permitted through March 31, 1997. Employees who received buyouts must repay the buyout or obtain a waiver from OPM when they return to federal employment (including employment under personal services contracts) within 5 years.

The lesser of severance pay or an amount determined by the agency head, not to exceed $25,000. October 1, 1996, through December 30, 1997. Employees who received buyouts must repay the buyout prior to the first day of federal government reemployment (including employment under personal services contracts) when they return to federal employment within 5 years.

The lesser of severance pay or $25,000. September 26, 1996, through September 30, 2000. Employees who received buyouts must repay the buyout prior to the first day of federal government reemployment (including employment under personal services contracts) when they return within 5 years. Repayment may be waived if the individual possesses unique abilities and is the only qualified applicant available for the position.

The lesser of severance pay or (1) $25,000 from enactment through FY 1997, (2) $20,000 in FY 1998, (3) $15,000 in FY 1999, or (4) $10,000 in FY 2000. October 1, 1996, through September 30, 2000. Employees who received buyouts must repay the buyout prior to the first day of federal government reemployment (including employment under personal services contracts) when they return within 5 years. No provision to waive repayment is provided.

The lesser of severance pay or an amount determined by the agency head, not to exceed $25,000. August 20, 1996, through January 31, 1997.
Employees who received buyouts must repay the buyout prior to the first day of federal government reemployment (including employment under personal services contracts) when they return within 5 years.

Determination to be made by the Secretary, but shall not exceed $25,000. April 26, 1996, through October 1, 1996. Employees who received buyouts must repay the buyout upon reemployment with the federal government within 5 years. Repayment may be waived by the Secretary of the Smithsonian. Repayment is not required if employee returns under a personal services contract.

Listed below are the possible options that the Office of Personnel Management (OPM) developed and encouraged agencies to use to help them comply with buyout repayment provisions. We have reordered and categorized the options on the basis of their application; however, the text of each option is quoted directly from OPM’s original list.

“Alert agency hiring officials. Some existing buyout authorities (i.e., Agriculture and NASA) provide for the payment of buyouts through as late as September 30, 2000. Thus, some buyout takers will be covered under the repayment requirement through at least September 30, 2005. Agency hiring officials are advised to judiciously review applicants for Federal jobs at least through September 30, 2005, to insure that employees covered by the repayment requirements are repaying the entire amount of the incentive or that they are not being reemployed.

“Scrub agency payroll and/or personnel records. Agencies may conduct periodic checks to identify employees who have received buyouts and who are now reemployed by a Federal agency. The Nature of Action Code (NOAC) for separation incentives is 825. OPM is also conducting these checks through the Central Personnel Data File.”

“Review agency’s contract agreements. Structure contractual agreements involving personal services contracting to address contractors’ use of former Federal employees who have received buyouts. Additional options include requiring contractors to identify and certify that contract employees who have received buyouts are not working in violation of the law.

“Alert agency contract management personnel. Issue guidance to personnel involved in contracting oversight and management for use in monitoring compliance.

“Require periodic spot checks of contracting personnel to ensure compliance.”

“Remind each agency manager and/or supervisor of the repayment requirement and provide guidelines for identifying violations.

“Post reminders in agency benefits or retirement office. This is a good location to reach employees who have retired with incentives.

“Contact buyout recipients and remind them of repayment requirements. Agencies may opt to send informational mailers to employees to remind them of applicable repayment rules.”

According to the Department of Defense (DOD), the Office of Personnel Management (OPM) identified potential DOD violations of the Federal Workforce Restructuring Act (FWRA) repayment provision in two lists. The March 15, 1996, list identified 51 possible violations that were based on a survey completed by DOD for OPM, and its January 3, 1997, list, which was based on OPM’s Central Personnel Data File research effort, identified 11. Of the 51 potential violations in the March 1996 list, OPM and DOD determined that 11 had either repaid their buyouts or were inappropriately identified as DOD personnel subject to the FWRA repayment provision. Table IV.1 shows the results of DOD’s investigation of the 40 remaining cases.
Some of these cases were violations of the repayment provision, but it is not clear exactly how many were violations. For instance, to the extent that the 17 "collections in progress" were initiated at the time the individual applied for the job, they may not represent violations. Of the 11 potential violations that DOD said OPM identified in its January 1997 list, DOD reemployed two buyout recipients. One had made the repayment; the other was making repayment.

The following are GAO's comments on the Office of Personnel Management's letter dated August 15, 1997.

1. OPM said that we should refer to 5 C.F.R. 576 in our report. This regulation on buyouts, repayments, and waivers of repayment was published on November 9, 1994. We had not specifically referred to OPM's regulation 5 C.F.R. 576 in our draft report because it was not pertinent to our focus on agencies' internal controls. Section 576.101 of the regulation provides guidance on who is covered by the buyout conditions, what is covered, what is required (the buyout recipient must repay the entire amount of the buyout to the agency that gave the buyout), and exceptions under the repayment provisions. However, the section is stated generally and does not address what agencies should do to help ensure that returning buyout recipients comply with the law. Section 576.102 deals with buyout recipients' requests for OPM's approval for waivers of the repayment provision, and, while it does not deal with what agencies should do to help ensure compliance with the provision, this section is an example of the instructional approach OPM could use in regulations requiring agencies to adopt internal control procedures. We have added a reference to 5 C.F.R. 576 to the report to provide additional information on waivers of the repayment provision in accordance with OPM's and another agency's suggestion.

2. OPM said that the draft report needed to more accurately reflect its analysis and findings regarding buyout recipients who were reemployed in violation of the repayment requirement, particularly the differences in the methodologies used in the two analyses OPM conducted and the most current data available from OPM. We had not distinguished between OPM's two research methodologies because it did not make that distinction in its interim report, which we cited in the draft. On the basis of information OPM provided in its comments, we made changes to make that distinction clear in the final report. In addition, our draft report had contained the most current data OPM had said were available prior to providing us its comments. OPM provided, subsequent to our receiving its comments on our draft report, a more current list of confirmed repayment provision violators, which we used in the final report. On the basis of clarifying information OPM provided in, and subsequent to, its comments, we also made changes to the report to recognize that the repayment violations found by OPM could overlap with those we found. In addition, we clarified in the report the time frames for our effort and OPM's two research efforts. Although the time frames overlapped, they were not identical.

The following are GAO's comments on the U.S. Department of Agriculture's letter dated August 8, 1997.

1. USDA did not believe that our draft report sufficiently recognized the difficulty agencies face in enforcing the buyout repayment requirements in cases where buyout recipients do not reveal that they have received buyout payments. 
We believe that our draft report did recognize the responsibility of buyout recipients to reveal their buyout status; our recommendation that a form be created on which job applicants would certify their buyout status explicitly recognizes this responsibility of buyout recipients. However, although any failure of buyout recipients to acknowledge their status when reapplying for federal employment can make enforcing the law more difficult, agencies nevertheless retain responsibility for ensuring compliance. We recommended that OPM promulgate regulations requiring that agencies take steps to identify buyout recipients who need to repay their buyout because the agencies we reviewed had not established procedures that provided a reasonable assurance of compliance with the repayment requirement.

2. USDA said it was important to note that discrepancies in OPM's CPDF data compared with data in agency reports can contribute to the difficulty that agencies may have in identifying repayment violations and observed that such discrepancies explained some of the possible violations we had found. We agree that discrepancies between the CPDF and agency reports can make use of the CPDF an imperfect mechanism for identifying possible buyout repayment violations. However, we did not recommend that agencies rely on the CPDF to identify possible violations. Use of the CPDF could be but one of several options for identifying possible violations. To the extent that the CPDF is used, discrepancies in CPDF data can, at least in part, be reduced by the agencies themselves—many of the inconsistencies between CPDF data and agency reports were due to agencies' not having provided updated, accurate data to OPM.

3. USDA was concerned that the description and placement of references to USDA violations at the beginning of the draft report implied that USDA was the first and most significant violator. Our use of the USDA example was intended to show the proactive response of this agency to the situation, which distinguished it from the other agencies, and to show its recognition of the importance of compliance with the law. However, due to USDA's concerns, we modified the report to lessen the emphasis on USDA's experiences.

4. USDA expressed concern that we had not mentioned that it had issued repayment provision procedures to the entire Department on July 18, 1996. Although we requested that agencies provide us with copies of their procedures, we only received a copy of APHIS' procedures from USDA officials and were told that they were not aware of any other USDA procedures. We have changed the report to reflect that USDA had issued notification of the FWRA repayment provision to its components and that APHIS subsequently developed internal control procedures.

Alan Belkin, Assistant General Counsel
Victor B. Goddard, Senior Attorney
Pursuant to a congressional request, GAO reviewed whether: (1) the 23 buyout recipients returned to federal employment and, if so, whether they repaid the buyout or met the Department of Defense (DOD) reemployment policy; and (2) the 9 agencies that were identified as employing these 23 buyout recipients and other selected agencies, which may have buyout recipients under contract, had internal procedures in place to help ensure that buyout recipients repay buyouts when required to do so. GAO noted that: (1) the information provided to GAO by the appropriate agencies' Office of Inspector General (OIG) and personnel office showed that a violation of the Federal Workforce Restructuring Act (FWRA) repayment provision or the DOD reemployment policy occurred in 11 of 23 cases; (2) the FWRA repayment provision was violated in 9 of the 11 cases, and the DOD reemployment policy was violated in the 2 other cases; (3) the remaining 12 cases were not violations, although they had originally appeared to be questionable because of discrepancies between agency reports and data in the Office of Personnel Management's (OPM) Central Personnel Data File (CPDF), which GAO used as a source of information; (4) in addition, while researching 1 of the 23 cases, an agency OIG found that the agency employed an additional buyout recipient who had not repaid the buyout; (5) regarding internal control procedures, none of the 9 agencies that GAO contacted for information on the 23 buyout recipient cases had adequate internal control procedures in place to provide reasonable assurance that the FWRA repayment provision was met; (6) two other agencies notified their personnel officers of the FWRA repayment provision; however, only one component of each agency developed additional procedures to help ensure compliance with the provision; (7) in addition to buyout recipients who return directly to federal employment, some buyout recipients work under contract for the federal government; (8) some of these contract personnel are employed under contracts that are expressly identified as personal services contracts and, thus, are subject to the FWRA repayment provision; and (9) in addition to these personnel, other contract personnel who are subject to relatively continuous supervision and control by agency officials are, in effect, working under personal services contracts and are subject to the FWRA repayment provision.
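One way to picture the payroll and CPDF scrub that OPM recommended, and that OPM and GAO relied on to surface the cases above, is as a match of buyout separations against later employment records, flagging recipients who have neither repaid nor obtained a waiver. The sketch below is illustrative only; the record layout, field names, and sample values are hypothetical and do not reflect the CPDF or any agency payroll system.

from datetime import date

# Hypothetical records; a real scrub would draw on agency payroll files or the CPDF.
# NOAC 825 marks a separation with a buyout (separation incentive).
buyouts = [
    {"emp_id": "A1", "separated": date(1994, 6, 30), "noac": "825",
     "repaid": False, "waiver": False},
    {"emp_id": "B2", "separated": date(1995, 1, 15), "noac": "825",
     "repaid": True, "waiver": False},
]

reemployments = [
    {"emp_id": "A1", "hired": date(1997, 3, 1)},
    {"emp_id": "B2", "hired": date(1996, 9, 1)},
]

def years_between(start, end):
    """Approximate elapsed years between two dates."""
    return (end - start).days / 365.25

def potential_violations(buyouts, reemployments, window_years=5):
    """Flag buyout recipients reemployed within the repayment window
    who have neither repaid the buyout nor received a waiver."""
    by_id = {b["emp_id"]: b for b in buyouts if b["noac"] == "825"}
    flagged = []
    for job in reemployments:
        b = by_id.get(job["emp_id"])
        if b is None:
            continue
        within_window = years_between(b["separated"], job["hired"]) <= window_years
        if within_window and not b["repaid"] and not b["waiver"]:
            flagged.append(job["emp_id"])
    return flagged

print(potential_violations(buyouts, reemployments))  # ['A1'] in this sample

A match of this kind only produces candidates for review; as the discrepancies between the CPDF and agency reports noted above show, a flagged case still has to be verified against agency records before it is counted as a violation.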
Over the last decade, the use of federal service contracting has increased and now accounts for over 60 percent of federal procurement dollars spent annually. A performance-based approach to federal service contracting was introduced during the 1990s, representing a shift from specifying the way in which contractors should perform work to specifying acquisition outcomes. Regardless of the contracting method, focusing on outcomes and collaboration among multiple stakeholders in the contracting process has been acknowledged as sound contract management. In 2000, federal procurement law established a performance-based approach as the preferred acquisition method for services. The Federal Acquisition Regulation requires all performance-based service acquisitions to include a performance work statement that describes outcome-oriented requirements in terms of results required rather than the methods of performance of the work; measurable performance standards describing how to measure contractor performance in terms of quality, timeliness, and quantity; and the method of assessing contract performance against performance standards, commonly accomplished through the use of a quality assurance surveillance plan. A 1998 Office of Federal Procurement Policy (OFPP) study on performance-based contracts—based largely on contracts for basic services, such as janitorial or maintenance services—showed that a number of anticipated benefits had been achieved, including reduced acquisition costs, increased competition for contracts, and improved contractor performance. However, implementing a performance-based approach is often more difficult for complex acquisitions, such as information technology, than it is for basic services, because agencies begin with requirements that are less stable, making it difficult to establish measurable outcomes. Such complex acquisitions may need to have requirements and performance standards continually refined throughout the life-cycle of the acquisition for a contractor to deliver a valuable service over an extended period of time. OFPP also has noted in policy that certain types of services, such as research and development, may not lend themselves to outcome-oriented requirements.

To encourage agencies to apply a performance-based approach to service acquisitions, the Office of Management and Budget (OMB) established governmentwide performance targets, which increased to 50 percent of eligible service contract dollars for the current fiscal year. In January 2007, the congressionally mandated Acquisition Advisory Panel reported that performance-based acquisition has not been fully implemented in the federal government, despite OMB encouragement, and recommended that OMB adjust the governmentwide target to reflect individual agency assessments and plans. In May 2007, OMB's OFPP issued a memo providing that agencies, at a minimum, were expected to meet the targets established and report on them in their management plans. In response, DHS's CPO established a performance-based target of 25 percent for fiscal year 2007, increasing to 40 percent by fiscal year 2010, which was included in DHS's Performance-Based Management Plan. The Acquisition Advisory Panel also recommended that OFPP issue more explicit implementation guidance and create an "Opportunity Assessment" tool to help agencies identify when they should consider using this acquisition method. 
Our work has found that performance-based acquisitions must be appropriately planned and structured to minimize the risk of the government receiving services that are over cost estimates, delivered late, and of unacceptable quality. Specifically, we have emphasized the importance of clearly defined requirements to achieving desired results and measurable performance standards to ensuring control and accountability. Prior GAO and DHS Inspector General reviews of complex DHS investments using a performance-based approach point to a number of shortcomings. For example, in June 2007, we reported that a performance-based contract for a DHS financial management system, eMerge2, lacked clear and complete requirements, which led to schedule delays and unacceptable contractor performance. Ultimately, the program was terminated after a $52 million investment. In March 2007, we similarly reported that the Coast Guard's performance-based contract for replacing or modernizing its fleet of vessels and aircraft, Deepwater, had requirements that were set at unrealistic levels and were frequently changed. This resulted in cost escalation, schedule delays, and reduced contractor accountability. The DHS Inspector General has also identified numerous opportunities for DHS to make better use of sound practices, such as well-defined requirements.

Consistent with our prior work, definition of requirements and performance standards influenced outcomes for the eight complex investments we reviewed. In using a performance-based approach, sound contracting practices dictate that required contract outcomes or requirements be well-defined, providing clear descriptions of results to be achieved. While all eight contracts for these investments had outcome-oriented requirements, the requirements were not always well-defined. Further, contracts for half of the investments did not have a complete set of measurable performance standards. Appendix I provides a summary of our analysis of the requirements, performance standards, and outcomes for the eight performance-based contracts for major investments we reviewed.

Complex investments with contracts that did not have well-defined requirements or complete measurable performance standards at the time of contract award or start of work experienced cost overruns or schedule delays or did not otherwise meet performance expectations. For example, contracts for systems development for two CBP major investments lacked both well-defined requirements and measurable performance standards prior to the start of work and both experienced poor outcomes. The first, for DHS's Automated Commercial Environment (ACE) Task Order 23 project—a trade software modernization effort—was originally estimated to cost $52.7 million over a period of approximately 17 months. However, the program lacked stable requirements at contract award and, therefore, could not establish measurable performance standards and valid cost or schedule baselines for assessing contractor performance. Software requirements were added after contract award, contributing to a project cost increase of approximately $21.1 million, or 40 percent, over the original estimate. Because some portions of the work were delayed to better define requirements, the project is not expected to be completed until June 2009—about 26 months later than planned. 
The second, Project 28 for systems development for CBP's Secure Border Initiative (SBInet)—a project to help secure a section of the United States-Mexico border using a surveillance system—did not meet expected outcomes due to a lack of both well-defined requirements and measurable performance standards. CBP awarded the Project 28 contract, planned as SBInet's proof of concept and the first increment of the fielded SBInet system, before the overall SBInet operational requirements and system specifications were finalized. More than 3 months after Project 28 was awarded, DHS's Inspector General reported that CBP had not properly defined SBInet's operational requirements and needed to do so quickly to avoid rework of the contractor's systems engineering. We found that several performance standards were not defined clearly enough to isolate the contractor's performance from that of CBP employees, making it difficult to determine whether any problems were due to the contractor's system design, CBP employees, or both. As a result, it was not clear how CBP intended to measure compliance with the Project 28 standard for probability of detecting persons attempting to illegally cross the border. Although it did not fully meet user needs and its design will not be used as a basis for future SBInet development, DHS fully accepted the project after an 8-month delay. In addition, DHS officials have stated that much of the Project 28 system will be replaced by new equipment and software.

Conversely, we found that contracts with well-defined requirements linked to measurable performance standards delivered results within budget and provided quality service. For example, contracted security services at the San Francisco International Airport for TSA's Screening Partnership Program had well-defined requirements, and all measurable performance standards corresponded to contract requirements—an improvement from our prior reviews of the program. The requirements for gate, checkpoint, and baggage screening services clearly stated that the contractor should use technology and staff to prevent prohibited items from entering sterile areas of the airport and should work to minimize customer complaints while addressing in a timely manner any complaints received. The performance standards assessed how often screeners could successfully detect test images of prohibited items in checked baggage; the percentage of audited records and inspected equipment, property, and materials that were well-kept, operational, and recorded on maintenance logs; and whether all new hires received the required training before assuming their screening responsibilities. In terms of expected outcomes, the contractor achieved a 2.2 percent cost underrun during the first 5 months of the contract and exceeded most requirements.

In managing its service acquisitions, including those that are performance-based, DHS has faced oversight challenges, including a lack of reliable data and systematic management reviews. DHS contracting and program representatives told us that they use a performance-based approach to the maximum extent practicable. However, DHS does not have reliable data—either from the Federal Procurement Data System-Next Generation (FPDS-NG), the governmentwide database for procurement spending, or at a departmentwide level—to systematically monitor, evaluate, or report on service acquisitions, including those that are performance-based. 
Reliable data are essential to overseeing and assessing the implementation of contracting approaches and acquisition outcomes and to making informed management decisions. Moreover, the Chief Procurement Officer (CPO), who has responsibility for departmentwide procurement oversight, has begun some initial review of performance-based service acquisitions, but has not conducted systematic management assessments of this acquisition method.

Our analysis of information provided by contracting representatives at the Coast Guard, CBP, Immigration and Customs Enforcement (ICE), and TSA showed that about 51 percent of the 138 contracts we identified in FPDS-NG as performance-based had none of the required performance-based elements: a performance work statement, measurable performance standards, and a method of assessing contractor performance against performance standards. Only 42 of the 138 contracts, or 30 percent, had all of the elements, and about 18 percent had some but not all of the required performance-based acquisition elements (see table 1). Because FPDS-NG data are unreliable, reports on the use of performance-based contracts for eligible service obligations are likely inaccurate. Data on the use of performance-based contracts by service type—ranging from basic services, such as janitorial and landscaping, to complex services, such as information technology or systems development—that OFPP requested in July 2006 are also likely misleading. The Acquisition Advisory Panel and DHS's CPO also have raised concerns regarding the accuracy of the performance-based designation in FPDS-NG. The Acquisition Advisory Panel's 2007 report noted, based on its review at 10 federal agencies, that 42 percent of the performance-based contracts the panel reviewed had been incorrectly coded. Inaccurate federal procurement data are a long-standing governmentwide concern. Our prior work and the work of the General Services Administration's Inspector General have noted issues with the accuracy and completeness of FPDS and FPDS-NG data. OMB has stressed the importance of submitting timely and accurate procurement data to FPDS-NG and issued memos on this topic in August 2004 and March 2007. Accurate FPDS-NG data could facilitate the CPO's departmentwide oversight of service acquisitions, including those that are performance-based.

At a departmentwide level, CPO representatives responsible for procurement oversight indicated that they have not conducted systematic assessments of the costs, benefits, and other outcomes of a performance-based approach. To improve the implementation of performance-based acquisitions, CPO representatives established a work group in May 2006 to leverage knowledge among DHS components. They also noted that they are working with OFPP to develop a best practices guide on measurable performance standards and to gather good examples of performance-based contracts. In addition, the CPO has implemented a departmentwide acquisition oversight program, which was designed with the flexibility to address specific procurement issues, such as performance-based service acquisitions, and is based on a series of component-level reviews. Some initial review of performance-based acquisitions has begun under this program, but management assessment or evaluation of the outcomes of this acquisition method has not been conducted. Consistent with federal procurement policy, DHS has emphasized a performance-based approach to improve service acquisition outcomes. 
However, in keeping with our prior findings, DHS's designation of a service acquisition as performance-based was not as relevant as the underlying contract conditions. Sound acquisition practices, such as clearly defining requirements and establishing complementary measurable performance standards, are hallmarks of successful service acquisitions. In the cases we reviewed as well as in prior findings where these key elements were lacking, DHS did not always achieve successful acquisition outcomes. The report we are releasing today recommends that the Secretary of Homeland Security take several actions to increase DHS's ability to achieve improved outcomes for its service acquisitions, including those that are performance-based. These actions include routinely assessing requirements for complex investments to ensure that they are well-defined and developing consistently measurable standards linked to those requirements; systematically evaluating outcomes of major investments and relevant contracting methods; and improving the quality of FPDS-NG data to facilitate identifying and assessing the use of various contracting methods. DHS generally concurred with our recommendations, noting some departmental initiatives under way to improve acquisition management. However, the department's response did not address how the CPO's process and organizational changes at the departmental level will impact component-level management and assessment of complex acquisitions to improve outcomes. Improving acquisition management has been an ongoing challenge since the department was established and requires sustained management attention.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the committee may have at this time. For further information about this statement, please contact John P. Hutton at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Staff making key contributions to this statement were Amelia Shachoy, Assistant Director; Jeffrey Hartnett; Sean Seales; Karen Sloan; and Don Springman.

The table in appendix I summarizes the requirements, performance standards, and outcomes for the eight contracts we reviewed; the reported outcomes included the following:
Contractor submitted all required documentation on time; met project management quality standards; and maintained electronic archiving and restoration standards.
Trade systems software development (task order 23): costs increased by 40 percent ($21.1 million); more than a year behind schedule; unplanned software redesign.
Costs increased by 53 percent ($24 million); maintenance wait times were longer than planned.
DHS rejected initial acceptance of Project 28. The project was delayed 8 months, with final acceptance in February 2008. DHS noted that the contractor met the requirements, but the project did not fully meet DHS's needs and the technology will not be replicated in future SBInet development.
Contractor exceeded the performance standard for machine downtime with a score 1 hour less than required and operated at cost through the second quarter of fiscal year 2007.
Contractor exceeded most performance standards; for example, threat detection performance and false alarm rates exceeded the quality standards.
Contractor had a cost underrun of 2.2 percent ($677,000).
Initial contractor planning reports were inadequate; the system experienced operational downtime; surveillance reports identified poor contractor performance.
Contractor generally met time frames and delivered within budget. 
Outcomes not available at the time of our review.
In the appendix table, each contract was rated as having met or mostly met, partially met, or not met the criteria.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) has relied on service acquisitions to meet its expansive mission. In fiscal year 2006, DHS spent $12.7 billion to procure services. To improve service acquisition outcomes, federal procurement policy establishes a preference for a performance-based approach, which focuses on developing measurable outcomes rather than prescribing how contractors should perform services. This testimony focuses on how contract outcomes are influenced by how well DHS components have defined and developed contract requirements and performance standards, as well as the need for improved assessment and oversight to ensure better acquisition outcomes. GAO's statement is based on its report being released today, which reviewed judgmentally selected contracts for eight major investments at three DHS components--the Coast Guard, Customs and Border Protection (CBP), and the Transportation Security Administration (TSA)--totaling $1.53 billion in fiscal years 2005 and 2006; prior GAO and DHS Inspector General reviews; management documents and plans; and related data, including 138 additional contracts, primarily for basic services from the Coast Guard, CBP, TSA, and Immigration and Customs Enforcement. Over the past several years, GAO has found that appropriate planning, structuring, and monitoring of agency service acquisitions, including those that are performance-based, can help minimize the risk of cost overruns, delayed delivery, and unacceptable quality. Several prior GAO and DHS Inspector General reviews of major DHS investments using a performance-based approach point to such shortcomings. While all of the contracts GAO reviewed at the Coast Guard, CBP, and TSA had outcome-oriented requirements, contracts for four of the eight investments did not have well-defined requirements, a complete set of measurable performance standards, or both at the time of contract award or start of work. These service contracts experienced cost overruns or schedule delays or did not otherwise meet performance expectations. In contrast, contracts for the other four investments had well-defined requirements linked to measurable performance standards, and those contracts that had begun work met the standards. In managing its service acquisitions, including those that are performance-based, DHS has faced oversight challenges that have limited its visibility over service acquisitions and its ability to make informed acquisition management decisions. Notably, the department lacks reliable data on performance-based service acquisitions. About half of the 138 contracts identified by DHS as performance-based had none of the elements DHS requires for such contracts: a performance work statement, measurable performance standards, or a quality assurance surveillance plan. Such inaccurate data limit DHS's ability to perform management assessments of these acquisitions. In addition, the Chief Procurement Officer, who is responsible for departmentwide procurement oversight, has not conducted management assessments of performance-based service acquisitions. To help DHS improve outcomes for its service acquisitions, including those that are performance-based, GAO recommended that DHS routinely assess requirements for complex investments to ensure that they are well-defined, and develop consistently measurable performance standards linked to those requirements. 
GAO also recommended that DHS systematically evaluate the outcomes of major investments and relevant contracting methods and improve the quality of data to facilitate identifying and assessing the use of various contracting methods. DHS generally concurred with GAO's recommendations, noting some departmental initiatives to improve acquisition management.
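The review of the 138 contracts described above is, at bottom, a tabulation of each contract by which of the three required elements it contains. The sketch below illustrates one way such a tabulation could be done; the contract records, identifiers, and field names are hypothetical and do not represent FPDS-NG fields or GAO's actual analysis (the element information in the review came from component contracting representatives rather than from FPDS-NG itself).

# Illustrative tabulation of contracts coded as performance-based by whether they
# contain all, some, or none of the three required elements. All records below are
# hypothetical placeholders.
ELEMENTS = (
    "performance_work_statement",
    "measurable_performance_standards",
    "quality_assurance_surveillance_plan",
)

contracts = [
    {"id": "C1", "performance_work_statement": True,
     "measurable_performance_standards": True,
     "quality_assurance_surveillance_plan": True},
    {"id": "C2", "performance_work_statement": True,
     "measurable_performance_standards": False,
     "quality_assurance_surveillance_plan": False},
    {"id": "C3", "performance_work_statement": False,
     "measurable_performance_standards": False,
     "quality_assurance_surveillance_plan": False},
]

def classify(contract):
    """Return 'all', 'some', or 'none' depending on how many elements are present."""
    present = sum(bool(contract[e]) for e in ELEMENTS)
    if present == len(ELEMENTS):
        return "all"
    return "none" if present == 0 else "some"

def tabulate(contracts):
    """Count contracts in each category and convert the counts to percentages."""
    counts = {"all": 0, "some": 0, "none": 0}
    for c in contracts:
        counts[classify(c)] += 1
    total = len(contracts)
    return {k: (v, round(100 * v / total, 1)) for k, v in counts.items()}

print(tabulate(contracts))  # e.g. {'all': (1, 33.3), 'some': (1, 33.3), 'none': (1, 33.3)}

As a rough check against the figures reported above, 42 of 138 contracts with all three elements works out to about 30 percent, and the roughly 51 percent reported as having none corresponds to about 70 contracts.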
The Department of Labor oversees a number of employment and training programs administered by state and local workforce boards and one-stop career centers established under the Workforce Investment Act of 1998 (WIA). The green jobs training programs Labor has overseen were created under the Green Jobs Act of 2007, which amended WIA. The Green Jobs Act of 2007 was passed as part of the Energy Independence and Security Act of 2007 (EISA), which was intended to move the United States toward greater energy independence and security and to increase the production of clean renewable fuels, among other objectives. This act directed the Secretary of Labor to work in consultation with the Secretary of Energy to create a new worker training program to prepare workers for careers in the energy efficiency and renewable energy industries. However, funds for these programs were not appropriated until the passage of the Recovery Act in 2009, according to Labor officials. The Recovery Act appropriated $500 million in funding for competitive green jobs grant programs at Labor. The current administration presented the green jobs training grant program as part of a broad national strategy both to create new jobs and to reform how Americans create and consume energy. Specifically, the administration articulated a vision for federal investments in renewable energy to involve coordination across a number of federal agencies to create new, well-paying jobs for Americans and to make such jobs available to all workers. The Employment and Training Administration (ETA) was responsible for overseeing the implementation of the green jobs training programs that were authorized in the Green Jobs Act of 2007 and funded through the Recovery Act. In June 2009, ETA announced a series of five Recovery Act grant competitions related to green jobs, three of which were primarily focused on training. All of these programs are scheduled to end before the end of July 2013. Table 1 describes these five programs and identifies the types of organizations eligible to receive each grant. Between September 2010 and October 2012, Labor's OIG issued a series of three reports related to the department's Recovery Act green jobs programs, including training programs. The most recent report raised questions about the low job placement and retention of trained program participants, the short amount of time for which many participants received training, and limitations of available employment and retention data, among other things.

Labor has used a broad framework to define green jobs, incorporating various elements that have emerged over time as the understanding of what constitutes a green job has evolved. As part of the Green Jobs Act of 2007, WIA was amended to identify seven energy efficiency and renewable energy industries targeted for green jobs training funds. In addition, beginning in 2009, Labor issued information on 12 emerging green sectors as part of a broader effort to describe how the green economy was redefining traditional jobs and the skills required to carry out those jobs. Most recently, in 2010, the Bureau of Labor Statistics (BLS) released a two-part definition of green jobs that was used to count the number of jobs that could be considered green either because what the work produced or how the work was performed benefitted the environment. 
According to funding information provided by Labor and our survey of Labor's directly-funded green jobs efforts, most funding for green jobs efforts at Labor has been directed toward programs designed to train individuals for green jobs, with less funding supporting efforts with other objectives, such as data collection or information materials. Indeed, approximately $501 million (84 percent) of the $595 million identified by offices at Labor as having been appropriated or allocated specifically for green jobs activities since 2009 went toward efforts with training and support services as their primary objective. In total, approximately $73 million, or 12 percent of the total amount of funding for green jobs activities, was reported appropriated or allocated for data collection and reporting efforts. Most of the funding for green jobs efforts was provided through the Recovery Act, which funded both training and non-training-focused projects at Labor in part to increase energy efficiency and the use of renewable energy sources nationwide. In addition to Recovery Act funding for green jobs efforts, funding information provided by Labor and through our survey of directly-funded green jobs efforts indicate that Labor has allocated at least an additional $89 million since 2009 to support seven other green jobs efforts that have been implemented by five of Labor's offices (see fig. 2). For a brief description of each of Labor's green jobs efforts for which funds were appropriated or allocated, see appendix II.

The Recovery Act directed federal agencies to spend the funds it made available quickly and prudently, and Labor implemented a number of relatively brief but high-investment green jobs efforts simultaneously. As a result, in some cases, Recovery Act training programs were initiated prior to a full assessment of the demand for green jobs. Specifically, Recovery Act-funded green jobs training grantees designed and began to implement their green jobs training programs at the same time states were developing green job definitions and beginning to collect workforce and labor market information on the prevalence and likely growth of green jobs through the State Labor Market Information Improvement grants, which were also funded with Recovery Act funds. Furthermore, BLS launched its Green Jobs initiative—which included various surveys designed to help define and measure the prevalence of green jobs—after many green jobs training programs had begun. ETA officials noted that BLS's development of the definition of green jobs was a deliberative and extensive process that required consulting stakeholders and the public. They also said that BLS's timeline for defining green jobs differed from ETA's timeline for awarding and executing grants, which was driven by Recovery Act mandates.

Labor has made subsequent investments that build upon lessons learned through the Recovery Act grant programs. For example, ETA initiated the $38 million Green Jobs Innovation Fund (GJIF) program in 2011 to support job training opportunities for workers in green industry sectors and occupations. In developing the GJIF grant program, ETA considered lessons learned through the Recovery Act grant programs. For example, various stakeholders including employers, the public workforce system, federal agencies, and foundations identified Registered Apprenticeship—training that combines job-related technical instruction with structured on-the-job learning experiences for skilled trades and allows participants to earn wages—as a valuable workforce strategy. 
ETA acknowledged that upgrading basic skills, including literacy and math, is critical to ensure job placement and suggested that training participants exclusively in green skills is not always sufficient. Consequently, ETA required GJIF grantees to implement green jobs training programs that would either forge linkages between Registered Apprenticeship and pre-apprenticeship programs or deliver integrated basic skills and occupational training through community-based organizations. Figure 3 shows a timeline illustrating the rollout of selected green jobs grants at Labor and the time periods during which these grants were active, as well as the timing of BLS's efforts to collect data on green jobs. With the exception of the GJIF grants, all of these efforts will have been completed by July 2013.

Grantees from all but six states received at least one of the 103 green jobs training grants that were awarded by ETA, but grantees were somewhat concentrated within certain regions of the country. Specifically, most states with three or more grantees were located in the Northeast, West, or Midwest regions of the country. Four states and the District of Columbia received five or more green jobs training grants: California, Michigan, New York, and Pennsylvania. Figure 4 shows the number of green jobs training grants awarded by state. In terms of organizational type, most green jobs training grants were awarded to nonprofit organizations and state workforce agencies (see fig. 5). Specifically, 44 percent of green jobs training grants were awarded to nonprofit organizations and 34 percent were awarded to state governmental agencies or departments. In addition, 10 percent of grantees were organized labor or labor management organizations. ETA officials from all six of its regional offices said that in terms of organizational type, green jobs training grantees did not differ substantially from the types of grantees ETA typically oversees. ETA officials said building partnerships had been an important focus of the green jobs grants, and indeed ETA's grant solicitations required, or in some cases encouraged, grant recipients, regardless of organizational type, to develop partnerships with various stakeholders, such as representatives of the workforce system, industry groups, employers, unions, the education and training community, nonprofits, or community-based organizations. Staff from ETA's regional offices said that some grantees developed new and successful partnerships as a result of the grants, including partnerships with labor unions. More than half of ETA's green jobs training grantees implemented their grants through sub-grantees, or a network of local affiliates, rather than providing training services directly to participants. Grantees that contract with sub-grantees or local affiliates to provide services are responsible for monitoring and overseeing how all grant funds are used, effectively delegating day-to-day oversight responsibility from Labor to the primary grantee.

In addition to Labor's direct investments in green jobs, several offices at Labor have infused green elements into their ongoing activities even though funds were not specifically appropriated or allocated for these green jobs efforts. In total, of the 14 Labor offices we surveyed, 6 identified and implemented 48 such efforts (for a list of the efforts, see appendix III). Some of these offices added a "layer of green" to existing training programs or other activities. 
For example, according to material provided by Labor, most YouthBuild programs have incorporated green building into their construction training. Other efforts focused on providing information materials, forming partnerships, or conducting publicity and outreach, among other things. For example, the Women's Bureau created a guide on sustainable careers for women and Labor's Occupational Safety and Health Administration contributed to an Environmental Protection Agency publication on best practices for improving indoor air quality during home energy upgrades. Further, in 2010 the Center for Faith-Based and Neighborhood Partnerships hosted a roundtable discussion about green jobs between the Secretary of Labor and leaders from national foundations and discussed how to create employment opportunities for low-income populations in the green jobs industry.

Although funding for green jobs efforts at Labor has shifted and green jobs efforts funded through the Recovery Act are winding down, a few of Labor's ongoing programs or efforts continue to emphasize green jobs or skills, and Labor continues to incorporate green elements into existing programs by coordinating internally on an as-needed basis. After the passage of the Recovery Act, a number of Labor's offices worked together to implement the requirements of the act, and Labor officials said that they collaborated on green jobs efforts on a fairly regular basis and that more formal green jobs meetings across the department were common. For those green jobs efforts where green elements have been infused into ongoing activities even though funds were not specifically appropriated or allocated for green jobs efforts, offices at Labor indicated through our survey that they continue to coordinate on such efforts within Labor and across other federal agencies, albeit in a less formal manner. For example, according to our survey of these indirectly-funded green jobs efforts, for 37 of 46 of the efforts listed in appendix III, offices said that they coordinated with others at Labor, and for 30 of 46 of the efforts, they reported coordinating with other federal agencies. In addition, it is likely that coordination on green jobs efforts will continue to occur on an ad hoc basis, especially as funding and priorities within the department shift. For example, Labor recently reported that due to federal budget cuts, BLS has discontinued its reporting on employment in green jobs.

According to a Labor official, after the Recovery Act was passed, Labor collaborated with other departments, such as the Department of Energy (Energy) and the Department of Housing and Urban Development (HUD), to foster job growth for a new green economy. For example, Labor's Occupational Safety and Health Administration worked with Energy on retrofitting and safety activities, and Labor also partnered with HUD to provide green jobs training and possible employment opportunities to public housing residents. In addition, Labor entered into various Memorandums of Understanding (MOU) after the Recovery Act was passed to collaborate on green jobs-related issues with other federal agencies. For example, the Secretaries of Energy, Labor, and the Department of Education announced a collaboration to connect jobs to training programs and career pathways and to make cross-agency communication a priority. While these examples highlight coordination on green jobs efforts after the passage of the Recovery Act, little is known about the effectiveness of these efforts. 
To identify the potential demand for green jobs in their communities, all (11 of 11) grantees we interviewed had broadly interpreted Labor’s green jobs definitional framework to include as green any job that could be linked, directly or indirectly, to a beneficial environmental outcome. While Labor created its framework to provide local flexibility, the wide variation in the types of green jobs obtained by program participants illustrates just how broadly Labor’s definition can be interpreted and raises questions about what constitutes a green job—especially in cases where the job essentially takes the form of a more traditional job (see table 2). In general, grantees we interviewed considered jobs green if they could link the job to (1) a green industry, (2) the production or installation of goods that benefit the environment, (3) the performance of services that potentially lead to environmental benefits, or (4) environmentally beneficial work processes. For example, in some cases, grantees we interviewed considered jobs green because they were linked to the renewable energy industry, such as solar panel installation or sales. Grantees considered other jobs green because the goods being produced benefited the environment, such as the pouring of concrete for a wind turbine or the installation of energy efficient appliances. In some cases the green job was service-based, such as an energy auditor or energy surveyor. Finally, other grantees considered jobs green because of the environmentally beneficial processes being used, such as applying paint in an efficient manner or using advanced manufacturing techniques that reduce waste. Even for jobs where parts of the work have a link to environmentally beneficial outcomes, workers may only use green skills or practices for a portion of the time they work. For instance, technicians trained to install and repair high-efficiency heating, ventilation, and air conditioning (HVAC) systems may in the course of their work also install less energy efficient equipment. All grantees we interviewed said they had worked closely with local employers to align their training program with the green skills needs of local employers. All agreed developing effective relationships with employers was crucial to aligning any training program with available jobs. Labor’s three Recovery Act green jobs training programs, as well as the GJIF program, all required applicants to demonstrate how they would partner with local employers to develop and implement their training programs. Most (9) grantees told us they had assembled advisory boards consisting of representatives from local businesses and industry associations to help inform them about available green jobs and the skills that would most likely be in demand by local employers. Further, all grantees said they engaged in ongoing communication with employers to stay abreast of changes in the local economy and employer needs, and most (10) made changes to their program curricula or tailored their training in response to employer input. Labor’s data show that green jobs training grantees primarily offered training in the construction and manufacturing industries. Specifically, nearly half of all participants of the Recovery Act-funded green jobs training programs received training focused on construction, and approximately 15 percent received training in manufacturing. Over 5 percent of participants in those programs received training in other industries that included utilities, transportation, and warehousing. 
Grantees in Labor's newer GJIF program focused even more heavily on construction—approximately 94 percent of participants were trained in construction and around 3 percent in manufacturing. Most grantees (9) we spoke to had infused green elements into existing training curricula for more traditional skills. However, the extent to which the training focused on green versus traditional skills varied across programs and often depended upon the skill level of targeted participants. Most (7) of the programs we visited generally targeted relatively low-skilled individuals with limited work experience and were designed to teach participants the foundational skills they would need to pursue a career in a skilled trade in which green skills and materials can be used. For example, those programs typically used their green job grant funds to incorporate green skills into existing construction, carpentry, heating/air-conditioning, plumbing, or electricity programs. The programs generally involved a mixture of classroom and hands-on training and taught traditional skills, such as how to read blueprints, use tools, install and service appliances, and frame buildings. In teaching these skills, however, instructors also showed students the way the processes or products used in performing these tasks could lead to environmentally beneficial outcomes. For example, participants were taught various ways to weatherize a building to conserve energy, to efficiently operate heavy machines to save fuel, or to install solar panels as part of a green construction project. In contrast, two programs we visited focused more exclusively on short-term green skills training to supplement the existing traditional skills of relatively higher-skilled unemployed workers. For example, one green awareness program taught participants to identify ways to perform their work, such as manufacturing, in a more environmentally beneficial manner, often by identifying and reducing waste. Another program added a component to its comprehensive electrical training program to train unemployed registered electricians how to install and maintain advanced energy efficient lighting systems. The grantees associated with both of these programs, as well as other grantees, noted that employer demand for workers with green skills may sometimes be most effectively met through short-term training of higher-skilled unemployed workers or incumbent workers.

The overall impact of Labor's green jobs training programs remains largely uncertain partly because some individuals are still participating in training and are not expected to have outcomes yet, and because final outcome data are submitted to Labor approximately 3 months after the grant period ends. The most recent performance outcome data for the three Recovery Act-funded and GJIF green jobs grants are as of December 31, 2012, at which time approximately 60 percent of the Recovery Act-funded programs had ended and grantees had submitted final performance outcome data. According to Labor officials, complete outcome data for the remaining Recovery Act-funded green jobs grantees will likely not be available until October 2013 because many grants were extended to June 2013. They also said that final performance outcome data for the GJIF grant—which is scheduled to end in June 2014—will likely not be available until October 2014. 
Our analysis of data reported by Recovery Act-funded green jobs grantees with final outcome data shows that these grantees collectively reported enrolling and training more participants than they had proposed when setting their outcome targets. However, their placement of program participants into employment lagged in comparison—these grantees reported placing 55 percent of the projected number of participants into jobs. When final data become available for the remaining 40 percent of grantees, the final figure comparing reported employment outcomes to proposed targets may change. Moreover, it remains to be seen how GJIF grantees' employment outcomes will compare to their projected targets, and whether the employment outcomes of this program will benefit either from economic changes or lessons learned since the Recovery Act programs began.

Developing a complete and accurate assessment of Labor's green jobs training programs is further challenged by the potential unreliability of certain outcome data—particularly for placement into training-related employment. In its October 2012 report, Labor's OIG questioned the reliability of the Recovery Act green jobs training programs' employment and retention outcome data because a significant proportion of sampled data for employment and retention outcomes were not adequately supported by grantee documentation. We reviewed the OIG's data review process and found it appropriate for assessing reliability and therefore also consider the data unreliable for evaluating program performance. While outcome data for the ongoing GJIF program are still being reported and the OIG did not assess the reliability of this program's data, Labor's method for collecting these data remains largely unchanged from that used for the Recovery Act-funded green jobs training programs. Consequently, these outcome data—particularly for placement into training-related employment—could also be questionable.

Labor officials noted that they have been collecting additional information on employment outcomes and wages using state unemployment insurance (UI) wage record data on program participants, and will continue to do so into early 2015 for the GJIF program. Results of their most recent analyses of UI data showed that, of the participants who had exited at least one of the three Recovery Act-funded green jobs training programs between April 1, 2011, and March 31, 2012, 52 percent had obtained employment. Similar analyses provided by Labor showed that, of participants who had exited between October 1, 2010, and September 30, 2011, 83 percent of those who had become employed had retained their employment for at least 6 months and had average earnings of around $25,000 for the 6-month period. Results of Labor's analysis of UI wage data for participants of the GJIF program show that 40 percent of participants who had exited between April 1, 2011, and March 31, 2012, had entered employment. However, the UI data do not capture whether jobs obtained were training-related for either the Recovery Act-funded or GJIF programs, so, absent additional relevant information, the extent to which grantees placed participants into training-related employment may never be reliably known. According to Labor officials, once complete, these additional UI wage data may provide more definitive information on the extent to which program participants entered employment and will be used by the department to develop a broader picture of the grant programs' level of success in achieving employment outcomes. 
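The UI-based figures above are, in essence, cohort rates: the share of exiters in a given window who show any post-exit employment and, among those who do, the share retained for 6 months. The sketch below illustrates that arithmetic on hypothetical exiter records; the field names, sample values, and cohort definitions are assumptions and do not reflect Labor's actual wage-record methodology.

from datetime import date

# Hypothetical exiter records; Labor's analysis uses state UI wage records, and its
# precise cohort and retention definitions may differ from this simplified sketch.
exiters = [
    {"id": 1, "exit": date(2011, 6, 15), "employed_after_exit": True,
     "retained_6_months": True, "earnings_6_months": 26000},
    {"id": 2, "exit": date(2011, 9, 1), "employed_after_exit": True,
     "retained_6_months": False, "earnings_6_months": 9000},
    {"id": 3, "exit": date(2012, 1, 20), "employed_after_exit": False,
     "retained_6_months": False, "earnings_6_months": 0},
]

def cohort(exiters, start, end):
    """Select exiters whose exit date falls within the cohort window."""
    return [e for e in exiters if start <= e["exit"] <= end]

def entered_employment_rate(cohort_records):
    """Share of exiters in the cohort with any post-exit employment."""
    if not cohort_records:
        return 0.0
    employed = sum(e["employed_after_exit"] for e in cohort_records)
    return employed / len(cohort_records)

def retention_rate(cohort_records):
    """Among exiters who became employed, the share retained for 6 months."""
    employed = [e for e in cohort_records if e["employed_after_exit"]]
    if not employed:
        return 0.0
    return sum(e["retained_6_months"] for e in employed) / len(employed)

window = cohort(exiters, date(2011, 4, 1), date(2012, 3, 31))
print(f"entered employment: {entered_employment_rate(window):.0%}")
print(f"6-month retention among the employed: {retention_rate(window):.0%}")

Note that the retention rate in this sketch is computed only over exiters who found employment, mirroring the way the 83 percent figure above is expressed, rather than being a share of all exiters.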
Labor officials said that while there is not a formal process to study the UI data, program staff routinely examine these data to identify lessons learned and best practices that could be applied to future grant programs. Labor officials said the data could be used to compare the green jobs training programs against other training programs across the agency, such as those under WIA, if resources permit. While Labor officials consider the UI data to be more definitive than the grantee-reported job placement data to measure overall program outcomes once the grant period ends, they stressed the importance of having real-time data to monitor grantee performance during implementation. While the UI wage record data provide an alternative source of information on job placement outcomes, due to a 9-month lag time, these data are of limited usefulness for program management. Specifically, because of the time lag, grantees could not use these data to monitor their progress toward meeting program goals in real time. Further, Labor could not use the data to hold grantees accountable for meeting grant goals, as all grant periods will have ended before the data are complete. Consequently, ensuring the reliability of grantee-reported outcome data remains vitally important, particularly for grant programs whose primary objective is to prepare workers for attaining employment in a targeted emerging industry.

The grantees we interviewed were generally positive about Labor's green jobs training programs, with most speaking optimistically about the potential value of the green skills obtained by the program participants. Most grantees we met with said that they believe there is a continued national movement towards lowering energy usage—whether due to economic, policy, or cultural changes—and all projected that the demand for workers with green skills credentials will continue to rise. All (11 of 11) were of the opinion that possessing green skills in addition to more traditional skills provides workers with an advantage as they seek a new job or move along a career pathway, and most (10) cited the need for training programs that provide nationally or industry-recognized green credentials. Two noted that having multiple credentials was particularly valuable. Lastly, some (5) grantees mentioned that the benefits of the green jobs training, like most job training, may not become apparent immediately, but may often be realized later during the worker's career, especially as demand for green skills grows. However, all grantees noted there have been challenges associated with developing and implementing Labor's green jobs training programs. For example, most (8) of the grantees we interviewed said that the lack of credible green jobs labor market information had limited their ability to identify or predict the level of available green jobs or the demand for green skills in their local area. Although state workforce agencies received funding to conduct green jobs labor market information studies under the State Labor Market Information Improvement grants, most resulting data were issued after many Recovery Act training programs had already begun. In addition, the BLS surveys were released from March 2012 through March 2013, after GJIF grantees had submitted their applications outlining their training programs to Labor. 
Having access to the final results of the state labor market information studies could have provided Recovery Act grantees with additional insights into their state's economic activity in the energy efficiency and renewable energy industries, as well as jobs within those industries when they were developing their training programs. Similarly, BLS survey results could have provided GJIF grantees with a national snapshot of establishments that produce green goods and services and the jobs of workers involved in green activities, among other information, and may have provided grantees with additional context for the development and implementation of their green jobs training programs. Labor officials said the rapidly evolving nature of the green industries has resulted in multiple changes to employer green job demand information over the course of the grant periods, further complicating their attempts to provide labor market information for this sector.

In addition, most (9) grantees we met with said Labor's green jobs training grants did not afford them enough time to both develop local partnerships and recruit, train, and place program participants. All grantees said developing partnerships can be especially time-consuming if such partnerships had not existed prior to the grant award. Most (9) noted that given how important local partnerships are to developing successful training programs, training programs that require such partnerships should have longer grant periods than those afforded by the Recovery Act and GJIF programs. Furthermore, most (8) grantees mentioned how developing and implementing a relatively new type of training, like green skills, can require additional time in order to fill knowledge gaps among employers. This may be especially true in light of changing state and local energy policies. For example, according to the Department of Energy, as of March 2013, 29 states have established standards aimed at generating a certain percentage of the state's energy using renewable sources by a specified year. Furthermore, many municipalities throughout the country are requiring that local construction projects adhere to environmentally friendly requirements. However, most (9) grantees we spoke with said some employers may not recognize how changing policies will affect their businesses. In fact, they believe this lack of understanding may be limiting demand for workers trained in green skills. To address this problem, one of the grantees we interviewed had developed a 1-day training program for local business managers to educate them about how they could benefit from the green skills that participants were obtaining through the organization's training program.

Most (6) grantees said at times during the implementation of their green jobs training program, they were, in effect, attempting to simultaneously drive both supply and demand for workers with green skills, which took considerable time and effort. In addition, although all grantees we interviewed had engaged with employers who had committed their support for the training curriculum, they also said this did not always translate into green jobs for program participants. Above all, most (9) pointed to the slow economic recovery as the reason their predictions—and those made by employers—regarding green job growth were not fully realized.
For example, one grantee explained how the local housing market had not recovered as quickly as anticipated, and as a result, demand for workers with green skills—such as green construction techniques, weatherization practices, and the installation of energy efficient appliances—has been sluggish. In addition, most (10) grantees explained that because green skills are often intertwined with traditional skills training and the skilled labor industries, their programs' participants were negatively affected by the overall poor economy. For example, most grantees (9) noted how their program participants, despite their additional layer of green skills training, found themselves competing with a high number of unemployed workers who were also seeking to regain employment in more traditional jobs such as carpentry or electrical work. Most (9) grantees also noted that renewable energy sectors, such as solar power, have not grown in their regions as was predicted several years ago. Lastly, most (7) grantees we interviewed said it is difficult to accurately measure the value of green skills training in terms of green job placement. In general, they said this is partly because, unlike jobs in other growing industries, like health care, there are few distinctly green jobs. One grantee we met with said she believes the term "green job" is misleading, and complicates program implementation. This grant official said that funding should be directed toward supplementing traditional skills training with green skills that can be used on any job rather than on preparing workers for specific jobs identified as green.

Based on our interviews with grantees of Labor's green jobs training programs, and the descriptions of their experiences implementing those programs, we identified several lessons learned that may warrant consideration when implementing similar targeted grant programs for other emerging industries (see table 3).

Labor has provided all green jobs grantees with technical assistance to help them implement their grant programs and comply with relevant federal laws and regulations. For example, Labor officials have hosted technical assistance webinars on topics such as financial management and how to engage employers. Labor also maintains a website for each green jobs training grant program and a green jobs community of practice on its online platform, Workforce3One. In addition, Labor has published bimonthly digests for Recovery Act grantees since January 2011 that highlight new technical assistance materials and other grant-related information. Finally, Labor has compiled and periodically updated a technical assistance guide that briefly describes and provides hyperlinks for its technical assistance resources, including webinar recordings and promising practices. Several grantees we interviewed (4 of 11) reported participating in webinars and referring to technical assistance materials posted to Workforce3One. In addition, ETA has funded three separate studies to assess the implementation of selected green jobs programs funded by the Recovery Act. Specifically, Labor funded a 2-year implementation evaluation that examined the implementation of the three Recovery Act-funded green jobs training programs and issued both interim and final reports. Labor also funded an evaluation of the State Labor Market Information Improvement grants and issued a final report and additional related products in 2013. Finally, Labor has funded an ongoing impact evaluation scheduled to be completed in 2016.
This study was designed to test the extent to which selected grantees of one of the four green-jobs training programs overseen by ETA—Pathways Out of Poverty—improved worker outcomes by imparting skills and training valued in the labor market.

To support its technical assistance efforts to grantees, Labor entered into a grant agreement with the National Governors Association, which together with two partner organizations formed a Technical Assistance Partnership (TA Partnership). In conjunction with Labor officials, the TA Partnership has facilitated monthly conference calls for each grant program so grantees can learn from their peers and receive program-specific technical assistance. The TA Partnership has also compiled and updated reports that highlight promising practices grantees have implemented. Finally, the TA Partnership and Labor officials have held annual grantee conferences, which have covered various topics including strategies to retain and place program participants and the importance of nationally recognized credentials. Several (4 of 11) grantees we interviewed mentioned participating in the monthly conference calls and annual conferences and said that generally they had been helpful.

While Labor provided guidance and technical assistance on how to document eligibility for the green jobs training programs, it provided little guidance on what documentation grantees were expected to maintain regarding program outcomes, particularly with respect to job placement. Specifically, while Labor provided guidance on how to report required performance data into its Recovery Act Database, this guidance does not specify what documentation, if any, grantees were to maintain for reported job placements, including those considered training-related. Our Standards for Internal Control in the Federal Government provides that internal control and all transactions and other significant events should be clearly documented and that the documentation should be readily available for examination. However, in its last green jobs report, the OIG found that nearly a quarter of reported outcomes were not supported by adequate documentation. One regional official noted that sub-grantees may not have known what documentation was required, and staff in another office said that in some cases primary grantees may not have done enough to ensure that the sub-grantees they were responsible for overseeing understood documentation requirements.

While Labor officials have not issued additional guidance to GJIF grantees regarding how to document job placement and retention outcomes, they said they have taken other steps that address the OIG's recommendation to improve the quality of grantee-reported performance data and utilize lessons learned from Recovery Act-funded green jobs training programs for other discretionary grant programs. First, ETA officials noted that they have formed an internal workgroup focused on improving the technical assistance provided to ETA's discretionary grantees about how to report program outcomes. This group hopes to issue recommendations in September 2013, and ETA officials believe these recommendations will help improve grant application instructions and help ETA refine its reporting systems, among other things. Second, ETA officials told us that they had initiated a grant re-engineering project in August 2012 to identify common grant management challenges and develop strategies for addressing such challenges.
For instance, the group has discussed ways to improve ETA’s grant solicitation process, such as by including clearer expectations and benchmarks for performance in its solicitations for grant applications and by taking steps to ensure greater comparability of goals across grantees. Labor hopes to begin implementing the group’s recommendations for new discretionary grant programs in August 2013. ETA monitors most grants, including its green jobs training grants, through a risk-based strategy that prioritizes monitoring activities based upon grantees’ assessed risk-levels and availability of resources, among other factors, and is described in its Core Monitoring Guide. Specifically, according to officials from all six of ETA’s regional offices, ETA’s federal project officers monitor grantees as part of their ongoing duties, which include calling grantees to offer technical assistance. In addition, ETA’s federal project officers perform quarterly desk reviews, during which they review financial reports and quarterly performance reports that grantees are required to submit. For the green jobs training grants, these reports include information such as the total amount of grant funds spent, the number of participants who began or completed training, a timeline for grant activities and deliverables, grantee accomplishments, and technical assistance needs. During these quarterly reviews, federal project officers compare grantees’ reported performance outcomes and spending rates to those goals set by grantees in their grant proposals. Based upon their review of each grantee’s reported information, federal project officers enter information about each grantee into Labor’s Grant Electronic Management System (GEMS), which assesses risk and generates a risk level for each grantee. The GEMS assessment of each grantee’s risk level is then used by Labor to develop its risk-based monitoring strategy, which involves prioritizing site visits based on grantees’ assessed risk-levels and availability of resources, among other factors. According to regional officials from all six offices, nearly all green jobs training grantees received at least one on-site monitoring visit, typically about halfway through the period of performance. During these site visits, federal project officers assessed grantees’ management and performance and documented any noncompliance findings and requirements for corrective action, as necessary. For example, Labor’s site visit guide includes questions for federal project officers to consider about financial and performance data reporting systems and performance outcomes. As a result of its on-site monitoring activities, Labor officials identified and required certain grantees to correct a variety of issues concerning the management of their grants. Many monitoring reports for the Recovery Act-funded green jobs training grants indicated that grantees were not on track to meet their performance outcomes. In such cases Labor required grantees to submit written corrective action plans that described what strategies they would undertake to increase project outcomes and how they would ensure that remaining funds would be used in a timely way to accomplish project objectives. Labor officials said that grantees have made significant progress toward attaining their goals for beginning and completing training as a result of both the grantees’ own efforts and ETA’s technical assistance and monitoring efforts. 
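The report does not describe GEMS's internal scoring rules, but the desk-review comparison of reported outcomes and spending against grant targets, and its translation into a risk level used to prioritize site visits, can be sketched under assumed weights and thresholds. All field names, weights, and risk tiers in the sketch below are hypothetical and are not Labor's actual criteria.

```python
# Minimal sketch of a risk-scoring approach for prioritizing monitoring visits.
# The weights, thresholds, and risk tiers are assumptions for illustration;
# they are not the rules used in Labor's GEMS system.

def risk_score(grantee):
    """Return a numeric risk score: higher means further behind plan."""
    placement_gap = 1.0 - min(
        grantee["placements_reported"] / grantee["placements_target"], 1.0)
    spending_gap = 1.0 - min(
        grantee["funds_spent"] / grantee["funds_planned_to_date"], 1.0)
    # Weight outcome shortfalls more heavily than slow spending (assumption).
    return 0.7 * placement_gap + 0.3 * spending_gap

def risk_level(score):
    if score >= 0.5:
        return "high"
    if score >= 0.25:
        return "medium"
    return "low"

grantees = [
    {"name": "Grantee A", "placements_reported": 40, "placements_target": 100,
     "funds_spent": 300_000, "funds_planned_to_date": 500_000},
    {"name": "Grantee B", "placements_reported": 95, "placements_target": 100,
     "funds_spent": 480_000, "funds_planned_to_date": 500_000},
]

# Prioritize site visits for the riskiest grantees first.
for g in sorted(grantees, key=risk_score, reverse=True):
    s = risk_score(g)
    print(f"{g['name']}: score={s:.2f}, risk={risk_level(s)}")
```

A scheme of this general shape depends entirely on the reliability of the grantee-reported numbers it consumes, which is why the documentation issues discussed earlier matter for monitoring as well as for final evaluation.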
Labor officials also stressed that while ETA holds grantees accountable for adhering to their grant statements of work, grantees are not contractually obligated to meet performance outcomes. Unlike contracts or WIA-funded programs, which can impose sanctions for failing to meet projected targets, the accountability mechanisms for these green jobs grant programs were more limited. For example, ETA officials said that if a grantee does not achieve its placement outcomes, this can affect whether the grantee receives a period of performance extension for the current grant or, potentially, a future grant from ETA. Officials said that they had not withdrawn funding from any grantees for failing to meet performance targets for any of the four green jobs training programs. However, in some cases ETA officials decided not to grant extension requests for grantees reporting poor performance. As a result, some grant funds remained unexpended and will be returned to the Treasury, as required.

In addition to insufficient progress toward targeted outcomes, the monitoring reports of the Recovery Act-funded green jobs training grantees identified other noncompliance findings, including insufficient monitoring of sub-grantees. For example, a number of monitoring reports indicated that primary grantees had not sufficiently monitored their sub-grantees. These findings are notable given that such a large percentage of grantees implemented their programs through a network of sub-grantees. Both GAO and the Department of Justice's OIG have stressed the importance of sufficient sub-recipient monitoring to the grant oversight process. Other noncompliance findings included grantees lacking adequate documentation to show program participants were eligible for services or grantees having failed to follow acceptable procurement processes. According to officials from all six regional offices, federal project officers did not identify any instances of fraud, waste, or abuse during their on-site monitoring visits.

The Recovery Act funded multiple, substantial investments in training programs targeted to a specific emerging industry—energy efficiency and renewable energy. Most of these programs have already ended or are currently winding down, although a few of Labor's continuing programs, such as YouthBuild, have incorporated many green elements since 2009, and the Green Jobs Innovation Fund program is scheduled to remain active through June of 2014. Despite the sizeable investment in green jobs, the green jobs training programs have faced a number of implementation challenges and final outcomes remain uncertain, particularly regarding placement into green jobs. A number of these challenges have stemmed from the need to implement the grants quickly and simultaneously before green jobs had been defined and more had been learned about the demand for green skills. Others, such as problems with the reliability of outcome data, can be traced to management issues that have compromised Labor's ability to measure the program's success, particularly regarding placing participants into training-related employment. Specifically, because Labor did not establish clear and timely guidelines for how to document green job placement outcomes, Labor is not able to assess the extent to which the targeted green jobs training programs placed participants in employment related to the training they received. The challenges for an emerging industry such as energy efficiency and renewable energy are substantial.
Uncertainty and debate still surround the question of what constitutes a green job. Under Labor's current framework, almost any job can be considered green if a link between the employee's tasks and environmental benefits can be made. Indeed, most grantee officials we interviewed said that most green jobs they have trained participants for are primarily traditional skilled-trades jobs, such as carpentry or electrical work. Many have been termed "green" because the worker has been trained to be mindful of energy use and reduce waste, or has been placed where the worker's tasks resulted in a product or service that benefited the environment, such as a light-rail construction site. Such an approach provides certain benefits within the context of an emerging industry, in that many of the skills workers obtain can be transferred to traditional jobs in cases where local demand for green jobs falls below expectations. It also may serve to raise general worker awareness about energy efficiency and waste reduction, to the benefit of the employer or nation. Nonetheless, this emphasis on training that often takes the form of traditional skills training with an added layer of green may not fully align with the intent of the targeted training funds.

By funding several evaluations of green jobs training and labor market information programs, Labor has positioned itself to build upon lessons learned through implementing these individual programs. A fundamental consideration is whether it is prudent to implement job training programs for an emerging industry before more is known about the demand for skills and workers. Another consideration is whether it would be more or less effective for federally funded training programs to focus on providing valuable green skills and credentials applicable on a wide variety of jobs, rather than to devote considerable attention to what is defined as a green job. Even though Labor is scaling back its own green jobs efforts, energy efficiency and renewable energy will likely remain a national priority. Labor has established a green jobs community of practice on its online platform, Workforce3One, which, if maintained and used, can continue to facilitate information-sharing among grantees and workforce professionals regarding what green skills and credentials employers in their communities value most. In addition, the substantial investment in energy efficiency and renewable energy made through these grant programs also provides Labor an opportunity to identify broader lessons learned about the challenges and benefits associated with offering targeted training in an emerging industry, which could help inform the development of training for other emerging industries in the future. Without the benefit of such lessons learned and a continued focus on what is needed to address emerging industries, state and local workforce entities may grapple with similar challenges in the future.

To enhance Labor's ability to implement training programs in emerging industries, GAO recommends that the Secretary of Labor identify lessons learned from implementing the green jobs training programs. This could include:

- Identifying challenges and promising strategies associated with training workers for emerging industries—through both targeted grant programs and existing programs—and considering ways to improve such efforts in the future. For example, taking a more measured or multi-phased approach could allow the time necessary to better determine demand for an emerging industry and establish the partnerships needed to properly align training with available jobs.
- Taking steps to ensure training programs adequately document outcome variables, particularly for targeted programs where tracking training relatedness is of particular interest.

We provided a draft of this report to the Department of Labor. Labor provided a written response (see app. IV). Labor agreed with our recommendation. Specifically, Labor's response noted that the department has already begun assessing lessons learned from the implementation of its green jobs grants. Labor also cited efforts to compile lessons learned to inform the design and implementation of future grant initiatives, including new approaches to capture program outcomes. Labor agreed that documenting outcomes is important and said it will work to provide technical assistance to ensure grantees adequately document outcomes. Finally, Labor noted the department will continue to collect information on employment outcomes and wages and will analyze these data once they are complete to provide a more definitive and final picture of the extent to which former green jobs training participants entered and retained employment.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, the Committee on Homeland Security and Governmental Affairs, the Committee on Oversight and Government Reform, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or sherrilla@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Our objectives were to determine: (1) what is known about the objectives and coordination of the Department of Labor's (Labor) green jobs efforts, (2) what type of green jobs training grantees provided and how selected grantees aligned their training to meet employers' green jobs needs, (3) what is known about program outcomes and what challenges, if any, grantees faced in implementing their programs, and (4) what Labor has done to assist and monitor its green jobs grantees. To address these objectives, we reviewed relevant federal laws, regulations, and departmental guidance and procedures. We also created a data collection instrument and two questionnaires to obtain information from Labor officials. In addition, we analyzed data from Labor and interviewed selected grantees by phone or in person in five states—California, Illinois, Louisiana, Minnesota, and Pennsylvania—as well as Labor officials. We conducted this performance audit from May 2012 through June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Our data collection strategy for obtaining information on green jobs efforts across Labor consisted of two phases. First, we created a data collection instrument to obtain information on green jobs efforts across Labor. In the data collection instrument, we asked offices at Labor to list two separate sets of efforts: (1) efforts where federal funds were appropriated or allocated specifically for green jobs activities and (2) efforts where federal funds were not specifically appropriated or allocated for green jobs activities, but where the office sought to incorporate green elements into either an existing program or ongoing activity. We distributed the data collection instrument to 14 of Labor's 28 offices: Occupational Safety and Health Administration (OSHA), Mine Safety and Health Administration (MSHA), Women's Bureau (WB), Employment and Training Administration (ETA), Veterans' Employment and Training Services (VETS), Office of the Assistant Secretary for Policy (OASP), Bureau of International Labor Affairs (ILAB), Bureau of Labor Statistics (BLS), Center for Faith-Based and Neighborhood Partnerships (CFBNP), Office of Federal Contract Compliance Programs (OFCCP), Wage and Hour Division (WHD), Office of Workers' Compensation Programs (OWCP), Office of Disability Employment Policy (ODEP), and the Office of Public Affairs (OPA). These 14 offices were selected based on the likelihood of their administering a green jobs effort or program. For example, we did not distribute the data collection instrument to Labor's Office of Inspector General, Office of the Solicitor, or Office of the Chief Financial Officer.

Second, we used the information we collected on the two separate sets of green jobs efforts in the data collection instruments to inform two follow-up questionnaires. For the first set of green jobs efforts, offices at Labor initially identified 16 efforts where funds were specifically appropriated or allocated for green job-related activities. For each of the 16 efforts, we sent a questionnaire by e-mail. The questionnaire focused on (1) the goals and objectives of the green jobs efforts, (2) how green jobs were defined for each of the efforts, (3) whether offices coordinated with others on these efforts, and (4) funding levels for each of the efforts. We pre-tested the questionnaire with two respondents from OSHA in December and made revisions. We then sent the questionnaires out on a rolling basis between January 16 and February 22, 2013. We determined 2 of the 16 efforts to be out of scope. Of the remaining 14 directly-funded green jobs efforts across five offices (OSHA, ETA, VETS, ILAB, and BLS), we received completed questionnaires for 13 and one partially completed questionnaire by April 3, 2013. We also identified 3 additional directly-funded efforts, for a total of 17 efforts.

For the second set of green jobs efforts, offices at Labor initially identified 54 efforts where funds were not specifically appropriated or allocated for green jobs efforts, but green elements were incorporated into existing programs or ongoing activities. We identified two additional green efforts that fall under this category. We sent a brief questionnaire consisting of two questions by e-mail in an attached Microsoft Word form. The two questions included in the questionnaire were pre-tested as part of the more detailed survey mentioned above. All questionnaires were sent on January 29, 2013, or on February 22, 2013. We determined 10 of the 56 efforts to be out of scope.
Of the remaining 46 efforts across six offices (OSHA, WB, ETA, VETS, ILAB, and CFBNP), we received completed questionnaires for all 46 efforts by March 21, 2013. Labor later identified 2 additional efforts, for a total of 48 efforts. Because the majority of Recovery Act funding for green jobs efforts was directed toward training programs, we focused much of our review on four grant programs—the three training-focused green jobs training programs funded by the Recovery Act (Energy Training Partnership grants, Pathways Out of Poverty grants, and State Energy Sector Partnership and Training grants) as well as the newer Green Jobs Innovation Fund.

To report on the characteristics of Labor's 103 green jobs training grantees, we obtained data from Labor on each training-focused green jobs grant administered by ETA. Specifically, we obtained information on the grantee's location, organizational type, and whether or not the grantee had sub-grantees. To better understand the type of green jobs training grantees provided, how grantees aligned their training to meet green jobs needs, and what challenges, if any, they faced in implementing their programs, we analyzed data from Labor and interviewed 11 out of the 103 green jobs training grantees between August 2012 and April 2013. We conducted site visits in four states and interviewed grantees in two additional states by phone. We visited grantees in California, Illinois, Minnesota, and Pennsylvania, and interviewed grantees in Connecticut and Louisiana by phone. We selected grantees in these states because these states had a relatively high number of Labor green jobs grant recipients, grantees in these states received GJIF grants, and the states varied in their geographic locations. We selected both Recovery Act- and GJIF-funded green jobs training grantees, but emphasized GJIF-funded grantees since, unlike many of the Recovery Act programs, the GJIF program is still active. During each site visit we interviewed Labor's green jobs training grant officials, training providers, local employers, and, to the extent possible, program participants. Similarly, during our phone calls we interviewed grant officials and, in one case, employers. During the interviews, we collected information about the types of green jobs training that were funded by Labor's green jobs training grants and the outcomes of grantees' programs, including the impact of the training with respect to green job placement or otherwise. We specifically asked grantees about any challenges they may have encountered as they developed and implemented their program, including whether they experienced challenges with respect to placing participants into green jobs. In addition, we collected information on how local employers were involved in the development of the training programs and the green job opportunities they were able to offer program participants. We cannot generalize our findings beyond the interviews we conducted.

To assess the reliability of Labor's training type and outcome data, we (1) reviewed existing documentation related to the data sources, including Labor's Office of Inspector General (OIG) reports, (2) electronically tested the data to identify obvious problems with completeness or accuracy, and (3) interviewed knowledgeable agency officials about the data. We determined that the data were sufficiently reliable for limited purposes.
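The electronic testing step can be illustrated with a minimal sketch of basic completeness and logical-consistency checks on grantee-reported outcome records. The record layout and the specific rules shown are illustrative assumptions; they are not the actual tests we performed on Labor's data.

```python
# Minimal sketch of automated completeness/consistency checks on grantee-
# reported outcome records. Field names and rules are illustrative assumptions.

records = [
    {"grantee": "A", "enrolled": 500, "completed_training": 420, "placed": 260},
    {"grantee": "B", "enrolled": 300, "completed_training": None, "placed": 150},
    {"grantee": "C", "enrolled": 200, "completed_training": 250, "placed": 90},
]

def check_record(r):
    problems = []
    # Completeness: every outcome field should be reported.
    for field in ("enrolled", "completed_training", "placed"):
        if r.get(field) is None:
            problems.append(f"missing value for '{field}'")
    # Logical consistency: completers and placements cannot exceed enrollment.
    if r.get("completed_training") is not None and r.get("enrolled") is not None:
        if r["completed_training"] > r["enrolled"]:
            problems.append("completers exceed enrollment")
    if r.get("placed") is not None and r.get("enrolled") is not None:
        if r["placed"] > r["enrolled"]:
            problems.append("placements exceed enrollment")
    return problems

for r in records:
    issues = check_record(r)
    status = "; ".join(issues) if issues else "no obvious problems"
    print(f"Grantee {r['grantee']}: {status}")
```

Checks of this kind can flag obvious reporting problems but cannot confirm that a reported placement is supported by documentation, which is why the OIG's file reviews remained central to our reliability determination.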
For example, we determined that training type data were sufficiently reliable for purposes of reporting out on the industries for which grantees most frequently trained participants. We included information about the extent to which Recovery Act-funded green jobs training grantees collectively reported meeting their enrollment, training completion, and entered employment targets for those grantees for which final data were available as of December 31, 2012. However, based upon the OIG's findings, we determined that the outcome data were not sufficiently reliable to determine the success of the programs. Finally, based upon the OIG's findings, we determined that the data on the extent to which grantees entered training-related employment were not reliable enough to report, even compared to targeted levels. To describe Labor's technical assistance efforts, we reviewed technical assistance guides and material posted to Workforce3One, interviewed Labor officials, and discussed Labor's technical assistance with selected grantees. To describe and assess Labor's monitoring efforts, we reviewed its Core Monitoring Guide, interviewed Labor officials in Washington, D.C., and in each of ETA's six regional offices—Atlanta, Boston, Chicago, Dallas, Philadelphia, and San Francisco—and obtained and reviewed copies of Labor's monitoring reports for green jobs training grantees, including recipients of Energy Training Partnership, Pathways Out of Poverty, and State Energy Sector Partnership and Training grants.

Energy Training Partnership (ETP) grants: Through the Energy Training Partnership grants, ETA awarded nearly $100 million to 25 projects. Grantees were to provide training and placement services in the energy efficiency and renewable energy industries to workers impacted by national energy and environmental policy, individuals in need of updated training related to the energy efficiency and renewable energy industries, and unemployed workers. Grantees were required to partner with labor organizations, employers, and workforce investment boards. Grant awards ranged from approximately $1.4 to $5 million.

Pathways Out of Poverty grants: In total, Pathways Out of Poverty grantees received approximately $150 million in Recovery Act funds. The grants aimed to help targeted populations find pathways out of poverty through employment in energy efficiency and renewable energy industries. Grants ranged from approximately $2 million to $8 million and were awarded to eight national nonprofit organizations with local affiliates and to 30 local public organizations or private nonprofit organizations.

State Energy Sector Partnership and Training (SESP) grants: Through SESP, ETA awarded nearly $190 million to state workforce investment boards in partnership with state workforce agencies. The grants were designed to provide training, job placement, and related activities that reflect a comprehensive statewide energy sector strategy including the governor's overall workforce vision, state energy policies, and training activities that lead to employment in targeted industry sectors. ETA made 34 awards that ranged from approximately $2 to $6 million each.

Green Jobs Innovation Fund (GJIF): The Green Jobs Innovation Fund was authorized under the Workforce Investment Act to help workers receive job training in green industry sectors and occupations and access green career pathways. In total, $38 million in grant funds were awarded to six organizations with networks of local affiliates to develop green jobs training programs.
These programs were required to incorporate green career pathways either by forging linkages between Registered Apprenticeship and pre-apprenticeship programs or by integrating the delivery of technical and basic skills training through community-based partnerships.

Job Corps is a residential job training program for at-risk youth. The Job Corps program aims to teach participants the skills they need to secure a meaningful job, continue their education, and be independent. Job Corps has instituted a number of measures in recent years to "green" its job training programs and facilities. Recovery Act funding was used to incorporate "green" training elements into the automotive, advanced manufacturing, and construction trades at Job Corps centers nationwide and to pilot three new "green" training programs at selected Job Corps centers: Solar Panel Installation, Weatherization, and SmartGrid technology.

Targeted topic training grant in which applicants propose training based on the occupational safety and health topics chosen by OSHA. Alternative Energy Industry Hazards and Green Jobs Industry Hazards were included as topics in FY 2009 and FY 2010, respectively.

Veterans' Workforce Investment Program (VWIP): VWIP supports veterans' employment and training services to help eligible veterans reintegrate into meaningful employment and to stimulate the development of effective and targeted service delivery systems. In FYs 2009 and 2010, project proposals received priority consideration if they supported "Green Energy Jobs" and proposed clear strategies for training and employment in the renewable energy economy.

ETA awarded approximately $48.8 million in State Labor Market Information Improvement Grants to support the research and analysis of labor market data to assess economic activity in energy efficiency and renewable energy industries and identify occupations within those industries. Grant activities included collecting and disseminating labor market information, enhancing strategies to connect job seekers to green job banks, and helping ensure that workers find employment after completing training. ETA awarded 30 grants of between $763,000 and $4 million.

This is a survey-based program, covering 120,000 business establishments, which provides a measure of national and state employment in industries that produce goods or provide services that benefit the environment.

This program provides occupational employment and wage information for businesses that produce green goods and services.

This is a special survey of business establishments designed to collect data on establishments' use of green technologies and practices and the occupations of workers who spend more than half of their time involved in green technologies and practices.

Green Career Information staff within the Employment Projections program produces career information on green jobs including wages, expected job prospects, what workers do on the job, working conditions, and necessary education, training, and credentials.

Recovery Act green jobs grantees who were doing green jobs data collection and training in the states. These Recovery Act funds to O*NET were for the specific purpose of focusing occupational research and data collection on green jobs at an accelerated pace.

The Technical Assistance Partnership led by the National Governors Association supported Recovery Act-funded green jobs grantees.
Green Capacity Building Grants (GCBG): In total, ETA awarded $5 million in Recovery Act funds to training programs already funded by the Department of Labor to build their capacity to provide training in the energy efficiency and renewable energy industries. ETA awarded 62 of these grants, with awards ranging from $50,000 to $100,000.

ETA used $5 million of the $500 million authorized for the Recovery Act green jobs grants for administrative expenses (salaries and expenses). This does not include any funds that were retained for technical assistance for these grants. Administrative expenses were in part used to fund three separate evaluations of Recovery Act green jobs programs: (1) a Labor Market Information evaluation, (2) a green jobs and health care implementation report, and (3) a 5-year impact evaluation.

This guidance was funded by the Recovery Act and is a guidance document for R&D workers and employers in the nanotechnology field.

Trilateral Roundtable: The Employment Dimension of the Transition to a Green Economy (February 3-4, 2011): The U.S. Department of Labor, Human Resources and Skills Development Canada, and the European Commission brought together U.S., Canadian, and European experts representing governments, trade unions, industry, and nongovernmental organizations to discuss the transition to the green economy. Discussions focused on defining and measuring green jobs, establishing effective green jobs partnerships, designing green skills development and training, ensuring green jobs serve as a pathway out of poverty, and examining the quality of green jobs, as well as the sustainability of green jobs investments by governments.

In June 2009, Labor/ETA/OA published a report entitled "The Greening of Registered Apprenticeship: An Environmental Scan of the Impact of Green Jobs on Registered Apprenticeship and Implications for Workforce Development." More recently, as part of the 75th Anniversary of the National Apprenticeship Act in 2012, OA put out a call to sponsors across the country to collect Registered Apprenticeship Innovators or Trailblazers. This process identified a number of innovative programs across the country, including several specific examples of apprenticeship programs with a focus on green efforts.

Labor officials participated in a technical review of economic research presented in "What Green Growth Means for Workers and Labour Market Policies: An Initial Assessment." Subsequently the paper appeared as Chapter 4 in the 2012 OECD Employment Outlook.

An OSHA website providing green job safety information on specific green jobs, such as green roofing, waste management, wind energy, recycling, weatherization, and geothermal industries.

Provided safety information for weatherization jobs in collaboration with the Department of Energy, Environmental Protection Agency, and National Institute for Occupational Safety and Health (NIOSH).

OSHA worked with EPA in the publication of this guidance, which identifies critical indoor environmental quality risks and worker assessment protocols, and provides guidance to address these issues.

Through OSHA and The Joint Commission and Joint Commission Resources (JCR) Alliance, JCR developed an article that discusses the importance of adopting sustainable products and practices for cleaning, sanitizing, and disinfecting healthcare facilities. The article also provides requirements for selecting green cleaning products (January 2013).
Publishing of a manual: Why Green Is Your Color: A Woman's Guide to a Sustainable Career. Designed to assist women with job training and career development.

A series of teleconferences for workforce practitioners about how to connect women with green jobs. A fact sheet accompanied each teleconference.

In May 2010, the Deputy Director of CFBNP facilitated a partnership between OSHA's Cincinnati office and East End Community Services in Dayton, OH – a Pathways Out of Poverty sub-grantee seeking a training module on safe handling of asbestos and lead removal as part of a green jobs training program.

Interagency working groups (with Energy, Education, and HUD).

Labor officials participated in the October 13-14, 2011 ELSAC meeting in Paris, France. One topic discussed at the meeting was the OECD's green jobs project.

Labor staff articulated labor and employment priorities to the U.S. interagency for inclusion in U.S. government positions for Rio+20, including for the U.S. position paper and during negotiations of the Rio outcome document.

The two-day Symposium convened experts from 16 Asia-Pacific Economic Cooperation member economies and international organizations to discuss sustainable economic development policies. The event was hosted by the Department of Education, in partnership with Labor.

US-Brazil Memorandum of Understanding on Labor Cooperation (March 20-21, 2012): The U.S. Secretary of Labor and her counterpart from Brazil signed a Memorandum of Understanding on Labor Cooperation in May 2012. The memorandum highlights cooperation in the area of green jobs.

The Women's Bureau Director led a Labor delegation meeting with officials from Brazil's Ministry of Environmental Affairs at U.S. EPA about the definition of green jobs, and initiatives in both countries.

The October 2012 conference working group meetings considered green jobs in follow-up to the XVII IACML Declaration and Plan of Action adopted by the ministers of labor of the Americas in November 2011. The Plan of Action called for specific follow-up actions related to green jobs including, inter alia, in-depth exchange of best practices in the region.

DOL officials met with 47 women leaders from Sub-Saharan Africa under the African Growth and Opportunity Act's African Women Entrepreneur Program, sharing best practices, perspectives and strategies to train and employ women in green jobs.

Presentations by the Deputy Assistant Secretary of Labor for Occupational Safety and Health to the general session, sharing of knowledge, development of informational products, and participation in quarterly meetings.

OSHA and Labor participated in an interagency Recovery through Retrofit Working Group comprised of over 80 technical staff members from the Departments of Energy, Housing and Urban Development, and Labor; the Environmental Protection Agency; and USD that drafted standards for workers who will be involved in retrofitting homes to make them more energy efficient. The group met in Denver for 3 days, and a follow-up meeting was held in Washington, D.C. This is the Vice President's initiative. As a part of the working group, OSHA provided technical advice and input in the worker protection aspects of the standards that were drafted.
On December 1, 2010, the Secretary of Labor and the Assistant Secretary for Employment and Training met with leaders from several national foundations to discuss significant investments in green jobs programs, as well as effective strategies that create employment and advancement opportunities for low-income populations in the green job industry.

In 2012, the Director of CFBNP wrote a blog for Fatherhood.gov about an Employment and Training Administration grantee, RecycleForce, that provides green jobs to ex-offenders.

Job Train's "Earth Day Every Day Campaign": Held the week of April 19th-23rd, 2010, the campaign was designed "to raise environmental awareness among students and staff and serve as friendly reminders to be more energy efficient."

Labor staff briefed an official from the Chinese Embassy on Labor green jobs initiatives.

Labor staff briefed the liaison on Labor green jobs efforts.

Small Business Forum: "Green Jobs: Safety & Health Outlook for Workers and Small Employers": A forum on OSHA's green jobs efforts and workplace hazards associated with green jobs.

Presentation – "What You Need to Know About the Safe Use of Spray Polyurethane Foam (SPF)."

Briefing on Spray Polyurethane Foam: OSHA Team attended as participating partner and Assistant Secretary spoke.

OSHA co-chaired the topic, "OSH in Green Economy," for the conference on behalf of the United States. OSHA led the discussions and wrote the accompanying white paper.

OSHA senior staff made presentations at the conference on hazards of green jobs.

OSHA personnel made presentations in Atlanta, GA; Los Angeles, CA; Philadelphia, PA; and Detroit, MI.

OSHA participated in "The Employment Dimension of the Transition to a Green Economy." The event brought together experts from government, trade unions, industry, and other stakeholders to exchange information, best practices, and ideas on preparing workers and employers to meet the increasingly complex skill demands of this transition. OSHA made a presentation on Green Jobs hazards.

Roundtable has received presentations from CPWR, NIOSH, and the Department of Commerce on green jobs within the construction industry.

First Annual Research Exchange on Advancing Patient, Worker and Environmental Safety and Sustainability in the Health Care Sector: OSHA presentation on green jobs in relation to the healthcare industry. The audience was mainly healthcare workers, employers, and researchers.

Provides information to employers on practices to help keep workers safe when working with cleaning chemicals, including green cleaning products. The posters are available in English, Chinese, Tagalog, and Spanish. The poster includes a section devoted to Green Cleaners.

Topic: Making Green Jobs Good Jobs – We All Want To, So What is OSHA Doing to Make it Happen?

Discussions at over 30 U.S. locations involving business and community leaders regarding emerging employment opportunities in green job fields.

Posters, mobile marketing displays, postcards, flash drives.

ETA designed the Green Jobs CoP to serve as a platform for workforce professionals and green job thought leaders to discuss and share promising practices, to create partnerships for green job workforce solutions, and to leverage Recovery Act investments.
Specifically, the Green Jobs CoP was designed to provide an interactive platform for providing technical assistance through webinars, discussion boards, blogs and other online resources to workforce professionals, particularly those at the state and workforce investment board levels as well as green jobs grantees (including recipients of upcoming Solicitation for Grant Applications).

The YouthBuild program targets out-of-school youth ages 16 to 24 and provides them with an alternative education pathway to a high school diploma or GED. Most YouthBuild programs have incorporated green building into their construction training. As part of this training, participants learn about environmental issues that affect their communities and how they can provide leadership in this area.

Homeless Veterans' Reintegration Program: The purpose of this program is to expedite the reintegration of homeless veterans into the labor force. These grants are intended to address two objectives: to provide services to assist in reintegrating homeless veterans into meaningful employment within the labor force, and to stimulate the development of effective service delivery systems that will address the complex problems facing homeless veterans. The programs' technical assistance guide refers to collecting data on green jobs participants.

Web-based training to help women find and succeed in green jobs.

Pilot training projects designed to prepare women to enter high-growth, high-demand green jobs.

In addition to the contact named above, Laura Heald, Assistant Director; Amy Buck, Meredith Moore, and David Perkins made significant contributions to all phases of the work. Also contributing to this report were James Bennett, David Chrisinger, Stanley Czerwinski, Beryl Davis, Andrea Dawson, Peter Del Toro, Alexander Galuten, Kathy Leslie, Sheila McCoy, Kim McGatlin, Jean McSween, Rhiannon Patterson, Karla Springer, Vanessa Taylor, and Mark Ward.

Grants to State and Local Governments: An Overview of Funding Levels and Selected Challenges. GAO-12-1016. Washington, D.C.: September 25, 2012.
Renewable Energy: Federal Agencies Implement Hundreds of Initiatives. GAO-12-260. Washington, D.C.: February 27, 2012.
Workforce Investment Act: Innovative Collaborations between Workforce Boards and Employers Helped Meet Local Needs. GAO-12-97. Washington, D.C.: January 19, 2012.
Climate Change: Improvements Needed to Clarify National Priorities and Better Align Them with Federal Funding Decisions. GAO-11-317. Washington, D.C.: May 20, 2011.
Recovery Act: Energy Efficiency and Conservation Block Grant Recipients Face Challenges Meeting Legislative and Program Goals and Requirements. GAO-11-379. Washington, D.C.: April 7, 2011.
Multiple Employment and Training Programs: Providing Information on Colocating Services and Consolidating Administrative Structures Could Promote Efficiencies. GAO-11-92. Washington, D.C.: January 13, 2011.
Recovery Act: States' and Localities' Uses of Funds and Actions Needed to Address Implementation Challenges and Bolster Accountability. GAO-10-604. Washington, D.C.: May 26, 2010.
Recovery Act: Funds Continue to Provide Fiscal Relief to States and Localities, While Accountability and Reporting Challenges Need to Be Fully Addressed. GAO-09-1016. Washington, D.C.: September 23, 2009.
Employment and Training Program Grants: Evaluating Impact and Enhancing Monitoring Would Improve Accountability. GAO-08-486. Washington, D.C.: May 7, 2008.
Workforce Investment Act: Additional Actions Would Improve the Workforce System. GAO-07-1061T. Washington, D.C.: June 28, 2007.
Workforce Investment Act: Employers Found One-Stop Centers Useful in Hiring Low-Skilled Workers; Performance Information Could Help Gauge Employer Involvement. GAO-07-167. Washington, D.C.: December 22, 2006.
Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005.
Workforce Investment Act: Employers Are Aware of, Using, and Satisfied with One-Stop Services, but More Data Could Help Labor Better Address Employers' Needs. GAO-05-259. Washington, D.C.: February 18, 2005.
Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004.
Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing Is Needed. GAO-03-725. Washington, D.C.: June 18, 2003.
Internal Control: Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.
Labor received $500 million from the Recovery Act to help create, better understand, and provide training for jobs within the energy efficiency and renewable energy industries, commonly referred to as "green jobs." Since 2009, Labor has also "greened" existing programs and funded additional green jobs training grants and other efforts. In light of the amount of funding targeted to green programs within Labor, GAO examined: (1) what is known about the objectives and coordination of Labor's green jobs efforts, (2) what type of green jobs training grantees provided and how selected grantees aligned their training to meet employers' green jobs needs, (3) what is known about program outcomes and what challenges, if any, grantees faced in implementing their programs, and (4) what Labor has done to assist and monitor its green jobs grantees. To conduct this work, GAO reviewed relevant federal laws and regulations; surveyed selected offices within Labor using two questionnaires—one for directly-funded green jobs efforts and one for other efforts; interviewed Labor officials and 11 out of 103 green jobs training grantees; and analyzed relevant Labor documents and data.

Of the $595 million identified by Labor as having been appropriated or allocated specifically for green jobs activities since 2009, approximately $501 million went toward efforts with training and support services as their primary objective, with much of that funding provided by the American Recovery and Reinvestment Act of 2009 (Recovery Act). Because the Recovery Act directed federal agencies to spend funds quickly and prudently, Labor implemented a number of high-investment green jobs efforts simultaneously. As a result, in some cases, Recovery Act training programs were initiated prior to a full assessment of the demand for green jobs, which presented challenges for grantees. While Labor's internal agencies initially communicated with each other and with other federal agencies after the Recovery Act was passed, most Recovery Act grants have ended or are winding down.

Labor created its green jobs definitional framework to provide local flexibility, and grantees we interviewed broadly interpreted Labor's framework to include any job that could be linked, directly or indirectly, to a beneficial environmental outcome. Labor's training data show most participants were trained in construction or manufacturing. While the findings of our site visits are not generalizable, all grantees we interviewed said they had worked closely with local employers to align their training program with the green skills needs of local employers. Most grantees we interviewed also told us they had incorporated green elements into existing training programs aimed at traditional skills, such as teaching weatherization as part of a carpentry training program.

The outcomes of Labor's green jobs training programs remain uncertain, in part because data on final outcomes were not yet available for about 40 percent of grantees, as of the end of 2012. Analysis of grantees with final outcome data shows they collectively reported training slightly more individuals than they had projected, but job placements were at 55 percent of the target. Training-related job placement rates remain unknown because Labor's Office of Inspector General (OIG) found these data unreliable.
Grantees we interviewed were generally positive about Labor's green jobs training programs, but most said they had faced challenges during implementation, including: (1) a lack of reliable green jobs labor market information, (2) insufficient time to meet grant requirements, (3) knowledge gaps surrounding green skills and changing energy policies, and (4) difficulty placing participants into green jobs, primarily due to the overall poor economy. Labor has provided technical assistance and taken steps to monitor green jobs training grantees through on-site monitoring visits and quarterly reviews. During these visits and reviews, Labor officials assessed grantee performance, such as by comparing reported program outcomes, including job placements, to targeted performance levels. However, Labor provided only limited guidance on how to document reported job placements. Labor officials required grantees with lower than projected performance levels to implement corrective action plans. In addition, Labor officials told us they have taken steps to improve the quality of grantee-reported data, such as by forming an internal workgroup to identify ways to improve the technical assistance they provide to grantees on reporting performance outcomes. GAO recommends that Labor identify lessons learned from the green jobs training programs to enhance its ability to implement such programs in emerging industries. Labor agreed with our recommendation.
As an integral part of an effective budget execution system, an agency is responsible for determining and maintaining its available fund balance. Treasury also has information about activity in the agency’s accounts, and Treasury’s and the agency’s records must be periodically reconciled to determine the actual amount of funds available. This is analogous to reconciling one’s personal checking account with the monthly bank statement. DOD weaknesses in accounting for its funds include (1) the inability to reconcile its balances to Treasury’s, (2) frequent adjustments of recorded payments from one appropriation to another appropriation account, including to canceled appropriations, (3) problem disbursements—disbursements that are not properly matched to specific obligations recorded in the department’s records, and (4) obligated balances that are incorrect or unsupported. As a result of these weaknesses, auditors have been unable to verify DOD’s Fund Balance With Treasury and its major components—obligated and unobligated balances. This means that DOD does not know with certainty the amount of funding that is available. This information is essential for DOD and the Congress to be able to determine the status of funds and if unobligated balances are available that could be used to reduce current funding requirements or that could be reprogrammed to meet other critical program needs. Although DOD has made some improvements in its accountability over its fund balance with Treasury, the amount of funds available at DOD remains questionable because (1) significant differences between DOD’s and Treasury’s records remain, (2) the reduction in differences between Treasury’s and DOD’s recorded fund balances may be, in part, a result of a change in policy rather than an actual reduction, and (3) items in suspense accounts, which cannot be identified with a specific appropriation account, may not be DOD transactions. DOD made the reduction of differences a high priority in its short-term improvement plans last year. There was a drop in the amount of the unresolved differences from $9.6 billion at September 30, 1998, to $7.3 billion at September 30, 1999. Although some of the differences may be due to the timing of transaction processing at Treasury versus DOD, an aging of the difference suggests that significant reconciliation issues remain. For example, of the $7.3 billion difference, $2.5 billion is 60 days or older. Differences over 60 days old are generally not attributable to timing. At least some of the decrease in the total differences as of September 30, 1999, can be attributed to the practice of some Defense Finance and Accounting Service (DFAS) center staff to routinely adjust their records each month to match those at Treasury without first identifying whether the adjustment is proper. This practice results in fewer differences on the reports but does not necessarily mean that the reconciliation process has actually improved or that the causes of the differences, such as Treasury or DOD errors in recording transactions, have been addressed and resolved. For example, one Army disbursing station recorded $608 million in differences to a suspense account. At year-end, DOD charged the differences to Army’s Operations and Maintenance appropriation, without documentation to support that these transactions should be recorded to this account. This resulted in financial reports to the Congress and OMB that show a reduction in the obligated balance in that account available for disbursement.
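To illustrate the kind of reconciliation and aging analysis described above, the following sketch (with hypothetical account names, balances, and dates) flags agency-versus-Treasury differences that have remained unresolved for more than 60 days, the point beyond which differences are generally not attributable to timing. It is a minimal illustration of the concept, not DOD's or Treasury's actual process.

```python
from datetime import date

# Hypothetical unresolved differences between an agency's records and
# Treasury's records for the same appropriation accounts, in dollars.
# Each entry: (account, agency_balance, treasury_balance, first_reported)
differences = [
    ("Appropriation A", 4_100_000_000, 3_850_000_000, date(1999, 8, 15)),
    ("Appropriation B", 2_200_000_000, 2_140_000_000, date(1999, 5, 1)),
]

AS_OF = date(1999, 9, 30)

def age_differences(diffs, as_of, threshold_days=60):
    """Flag differences unlikely to be timing-related.

    Differences older than the threshold (60 days here, matching the
    aging point discussed above) generally cannot be explained by normal
    processing lags and need to be researched and resolved.
    """
    flagged = []
    for account, agency_bal, treasury_bal, first_reported in diffs:
        amount = agency_bal - treasury_bal
        age = (as_of - first_reported).days
        if amount != 0 and age > threshold_days:
            flagged.append((account, amount, age))
    return flagged

for account, amount, age in age_differences(differences, AS_OF):
    print(f"{account}: ${amount:,.0f} unresolved for {age} days")
```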
However, DOD has little assurance that the charge should not have been properly assessed against, for example, some other Army appropriation or even another entity’s appropriation. Further, at the beginning of the next fiscal year, DOD reversed the Operations and Maintenance charges and returned the amounts to suspense accounts. Finally, DOD records show that an estimated $1.6 billion of transactions held in suspense accounts at the end of fiscal year 1999 have not been properly reported to Treasury and may also affect the fund balance with Treasury amount. Until suspense account transactions are posted to the proper appropriation account, the department will have little assurance that appropriation balances are accurate, that it has a right to any collections, that adjustments are valid, and that the disbursements do not exceed appropriated amounts. Moreover, the reported amounts in suspense accounts represent the offsetting (netting) of collections and adjustments against disbursements, thus understating the magnitude of the unrecorded amounts in suspense accounts. To illustrate the magnitude of this issue, we previously testified that audit work for fiscal year 1997 found that while the Navy had a net balance of $464 million in suspense accounts recorded in its records, the individual transactions—collections as well as disbursements—totaled about $5.9 billion. DOD frequently adjusts recorded payments to transfer the payment to another appropriation account, including to canceled appropriations. These adjustments raise questions about the reliability of amounts reported as obligated and available for disbursement in specific appropriations. In March 2000, we reported that about one of every two dollars in fiscal year 1997 contract payment transactions processed was for adjustments to previously recorded disbursement transactions. Although DOD reported that the number of adjustments has declined, it remains significant. During fiscal year 1999, DFAS data showed that almost one of every three dollars in contract payment transactions was for adjustments to previously recorded payments—$51 billion in adjustments out of $157 billion in transactions. Adjustments were often made to original entries that were recorded years earlier. A number of the adjustments selected during our review were made to canceled accounts. In the National Defense Authorization Act for Fiscal Year 1991, the Congress changed the government’s account closing procedures. The intent of the changes was to apply the discipline of the Antideficiency Act and the bona fide needs rule to expired appropriations and to ensure that expired appropriations do not remain open on the government’s books indefinitely. Subsequent to the amendment of the account closing law, DOD requested that Treasury reopen hundreds of closed accounts to permit the posting of adjustments. Treasury asked us whether it had authority to correct reporting or accounting errors in closed accounts. In 1993, we determined that Treasury had authority to correct these errors. The decision concluded that Treasury may adjust the records of canceled appropriations to record disbursements that were in fact made before the cancellation. However, Treasury can make these adjustments only if DOD can establish that a disbursement was a liquidation of a valid obligation, recorded or unrecorded, that was properly chargeable against the account before it closed.
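The netting issue above is simple arithmetic. The following sketch uses illustrative figures chosen to match the Navy example cited above; it shows how offsetting collections against disbursements produces a small net suspense balance even when the gross amount of unresolved transactions is an order of magnitude larger.

```python
# Hypothetical suspense-account transactions, in millions of dollars
# (positive = disbursement, negative = collection or adjustment).
suspense_transactions = [2_000, 1_182, -1_500, -1_218]

net = sum(suspense_transactions)                     # what the reported balance shows
gross = sum(abs(t) for t in suspense_transactions)   # activity actually awaiting research

print(f"Reported (net) suspense balance: ${net:,} million")    # $464 million
print(f"Gross unresolved transactions:   ${gross:,} million")  # $5,900 million
```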
Adjusting disbursements previously recorded to current accounts by moving those transactions to canceled accounts can increase balances available for obligation in the current accounts. Since the 1991 account closing law was enacted, DOD has requested that Treasury reopen 333 closed accounts, totaling $26 billion. These accounts remained open as of September 30, 1999. By comparison, all other federal agencies combined have requested that Treasury reopen 21 closed accounts, totaling $5 million. According to Treasury’s records, DOD made $576 million in net adjustments to canceled accounts in fiscal year 1999. DOD has indicated that it has controls in place to ensure that adjustments to canceled accounts are proper. Chairman Kasich and Chairman Horn recently asked us to review DOD’s practice of making adjustments to canceled accounts, and our work has just begun. Problem disbursements—disbursements that are not properly matched to specific obligations recorded in the department’s records—continue to impede the department’s efforts to improve its budgetary data. This situation can misstate DOD’s reported obligated balances, undermining this important budgetary control. For example, when disbursements are not matched to specific obligations, an understatement of obligations and an overdisbursement of an account can occur. This situation occurs if the disbursement is for an item for which an obligation has not been recorded or if the amount of the recorded obligation is less than the recorded disbursement. Obligations are also understated in the case of in-transits, in which a disbursement has been made but documentation is insufficient to determine how the transaction should be recorded in the accounting records. The elimination of problem disbursements is one of the department’s highest financial management priorities. DOD has reported progress in resolving problem disbursements. As of September 30, 1999, DOD reported $10.5 billion in problem disbursements, including in-transits, as compared with about $17.3 billion in problem disbursements reported at the end of fiscal year 1998. Of the $10.5 billion, DOD reported that about $1.5 billion were problem unmatched disbursements and negative unliquidated obligations (NULOs) over 180 days old. DOD’s problem disbursement policy requires that obligations be recorded for amounts paid that are unmatched to a recorded obligation or exceed recorded obligated balances after 180 days. However, the policy makes an exception if sufficient funds are not available for obligation. In that case, DOD’s policy permits the department to delay recording an obligation or adjustment until the funds cancel—up to 5 years after expiration of the account. DOD believes that by delaying the recording of the obligation, funds will become available—for example, through de-obligation—thus permitting the obligation to be recorded without raising an Antideficiency Act concern and ensuing investigation. If DOD had recorded this $1.5 billion after the transactions remained unmatched for 180 days, the related account balances would have reflected potential Antideficiency Act violations and required an investigation and report to the Congress if the appropriation is ultimately determined to be overobligated or overspent. An agency may not avoid the requirements of the Antideficiency Act, including its reporting requirements, by failing to record obligations or to investigate potential violations.
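A minimal sketch of the matching logic at issue follows, using hypothetical document numbers and amounts. It flags disbursements that either have no recorded obligation or exceed the recorded obligation, and it notes when such items have aged past the 180-day point discussed above. It illustrates the control concept only; it is not DOD's accounting systems or policy.

```python
from datetime import date

# Hypothetical recorded obligations and disbursements for one appropriation.
obligations = {            # document number -> obligated amount ($)
    "DOC-99-0001": 1_000_000,
    "DOC-99-0002":   250_000,
}
disbursements = [          # (document number, amount, date disbursed)
    ("DOC-99-0001", 1_200_000, date(1999, 1, 15)),  # exceeds the recorded obligation (NULO)
    ("DOC-99-0003",    75_000, date(1999, 2, 1)),   # no matching obligation (unmatched)
    ("DOC-99-0002",   250_000, date(1999, 6, 30)),  # properly matched
]

AS_OF = date(1999, 9, 30)
AGING_THRESHOLD = 180  # the policy point discussed above

for doc, amount, paid_on in disbursements:
    age = (AS_OF - paid_on).days
    obligated = obligations.get(doc)
    if obligated is None:
        status = "unmatched disbursement"
    elif amount > obligated:
        status = f"negative unliquidated obligation (${amount - obligated:,} over)"
    else:
        continue  # matched within the recorded obligation
    overdue = " -- over 180 days old; an obligation should be recorded" if age > AGING_THRESHOLD else ""
    print(f"{doc}: {status}, {age} days old{overdue}")
```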
To ensure sound funds control and compliance with the Antideficiency Act, an agency’s fund control system must record transactions as they occur. We and the DOD IG have previously reported on this issue and recommended that DOD revise its problem disbursement policies and procedures to ensure that accurate and reliable balances are maintained. Finally, the process and control problems that result in the problem disbursement issues previously discussed also contribute to improper payments by the department. For example, our work continues to identify problems with overpayments and erroneous payments to contractors. For fiscal years 1994 through 1999, according to DFAS records, defense contractors returned over $5.3 billion to the DFAS Columbus Center, including about $670 million during fiscal year 1999, due to contract administration actions and payment processing errors. However, these amounts do not reflect the true magnitude of this problem because many overpayments are returned through billing offsets. We are currently working to estimate the scope of the overpayment problem, including these offsets. In their testing of obligated balances, DOD auditors found evidence of unsupported obligations and poor internal controls over obligations, as illustrated by the following examples. The Army Audit Agency found that internal controls over the recording of obligations were not adequate to ensure that reported obligated balances were accurate. In a sample of 60 fiscal year 1999 transactions, the auditors found that 21 could not be supported. For fiscal year 1999, audit results show that, of approximately $1 billion in Air Force Working Capital Fund obligations tested, $211 million (700 of 2,526 transactions) were incorrect, inadequately supported, or not supported. In addition, Air Force’s general fund audit continued to identify inaccurate or unsupported obligated balances as of September 30, 1999. Specifically, Air Force auditors identified an estimated $1.3 billion in inaccurate or unsupported obligated balances, a significant improvement over the prior year when an estimated $4 billion in obligated balances were inaccurate or unsupported. In addition to auditors’ reports, the Department of the Navy identified its unliquidated and invalid obligations as a material management control weakness in its fiscal year 1999 annual assurance statement issued pursuant to the Federal Managers’ Financial Integrity Act. For example, the Navy reported that within the Operation and Maintenance-Navy appropriation, some activities were not verifying that only valid obligations were entered into the accounting system. As a result, funding may have been available but not used. In addition, the Navy had more than $1 billion in expired budget authority that was allowed to cancel at the end of fiscal year 1999, including more than $750 million that had been obligated but not disbursed. According to Treasury data, at the end of fiscal year 1999, the department had $3.8 billion in expired budget authority that canceled. Accurate and reliable information would permit the Congress to review DOD year-end unobligated and unexpended balances and identify opportunities for possible funding reductions. For example, as a result of our analysis of unobligated balances in the military personnel appropriation, the House Appropriations Committee recommended a reduction of $96 million in the fiscal year 2001 request for this account.
Since the military services’ account data have shown a pattern of not spending all of their appropriated funds, the Committee concluded that the fiscal year 2001 military personnel budget request is overstated and can be reduced. Under federal, state, and international law, DOD faces a major funding requirement associated with environmental cleanup and disposal. These environmental costs result from the production of weapons systems and prior and current operations. Even when current operations are carried out in full compliance with existing environmental regulations, future cleanup costs for certain operations will still result due to the nature of these DOD activities. DOD has taken important steps to implement the federal accounting standards requiring recognition and reporting of these liabilities and has made noteworthy progress. For example, DOD’s reported estimated liabilities increased from $34 billion in its fiscal year 1998 financial statements to $80 billion in fiscal year 1999. However, the full magnitude and timing of these costs are not yet known because (1) all potential liabilities were not considered in the reported estimates, (2) estimates were not based on the consistent application of assumptions and methodologies across the services, and (3) support for the basis of reported estimates continues to be inadequate. A reliable estimate of DOD’s environmental liability would be an important factor in determining the cost of its operations and specific programs and for resource planning. To effectively, efficiently, and economically manage DOD’s programs, its managers and oversight officials need reliable cost information for the following key decision points. Evaluating programs—Long-term liabilities that affect program costs must be accurately measured and considered in evaluating the status of programs. For example, the liability for disposal activity is part of the overall life-cycle cost of weapon systems and can contribute to the ongoing dialogue on funding comparable weapons. The National Defense Authorization Act for Fiscal Year 1995 required that the Secretary of Defense analyze the environmental costs of major defense acquisitions as part of the life-cycle costs of the programs. However, recent IG audits of several major weapons systems programs, including the Black Hawk helicopter and F-15 aircraft, have found that life-cycle cost estimates did not include costs for demilitarization, disposal, and associated cleanup. In addition, the Senate Committee on Appropriations has required that DOD develop disposal cost estimates for munitions. Making current economic choices—DOD’s decisions on whether to outsource specific functions require accurate and complete supporting cost data. Yet DOD, as well as other government agencies, has historically been unable to provide actual data on the costs associated with functions to be considered for outsourcing. For example, environmental and disposal costs must be considered in the department’s plans to analyze its more than 2,000 utility systems for privatization. If these costs prove significant to DOD, they should be considered in any cost-benefit analyses developed by the department in deciding to retain or privatize these functions. Resource planning—Reliable information on the full extent of the environmental liability that DOD faces under current law and the likely timing of funding requests would enable DOD and the Congress to make informed judgments about DOD’s ability to carry out those requirements.
As the Comptroller General recently testified before the Senate Budget Committee, although we are currently enjoying a period of budget surplus, it does not signal the end of fiscal challenges. Long-term cost pressures from programs such as Social Security and Medicare will consume an ever-larger share of the economy and squeeze the resources available for other commitments and contingencies, such as federal insurance programs and cleanup costs from federal operations known to result in hazardous waste, including defense facilities and weapons systems. Accurate and complete information on the magnitude and timing of DOD’s environmental liability would permit DOD and the Congress to strategically plan for this long-term liability and set realistic priorities among the competing challenges that we will face in the future. Further, quantifying this enormous liability and providing a breakdown of the costs by the approximate time periods the disposal costs are expected to be incurred would add an important context for congressional and other decisionmakers on the timing of resource needs, including those that are more near-term. For example, we estimated that approximately $1.6 billion of the $5.6 billion estimate for the disposal of nuclear powered submarines was for submarines that are already decommissioned and awaiting disposal. In summary, the most significant issues faced by the department in determining and verifying its environmental/disposal liability include incomplete estimates, inconsistent methodologies, and inadequate documentation. Incomplete estimates—To date, DOD has focused on what it expects will be its most significant liabilities, those associated with nuclear weapons and training ranges. It has not yet considered the magnitude of costs associated with other weapon systems, conventional munitions, or its ongoing operations, although these costs may also be billions of dollars. For example, the department’s costs to dispose of conventionally powered ships would be at least $2.4 billion, based on applying the Navy’s estimated average cost of $500 per ton of displacement used to estimate disposal costs for its inactive fleet. In addition, we previously estimated that the conventional munitions disposal liability for Army alone could exceed $1 billion. Also, the costs of cleaning up and disposing of assets used in ongoing operations may be significant. Significant environmental and disposal costs are required to be recognized over the life of the related assets to capture the full cost of operations. We are working with DOD to assess whether operations, such as landfills and utilities (including wastewater treatment and power generation facilities), will ultimately have significant environmental costs associated with closure. For example, Edwards Air Force Base officials provided us with a landfill closure cost estimate of approximately $8 million. This estimate excluded post-closure maintenance costs (such as monitoring) which are estimated to exceed $200,000 annually over 30 years. To provide some perspective on the potential scope of these operations, the Army alone reported 65 landfills that, based on the Air Force estimated cost data, could cost nearly $1 billion to close and monitor. Cost estimates should also be refined for changes in cleanup/disposal schedules. For example, DOD reported a liability of approximately $8.9 billion in its fiscal year 1999 financial statements for chemical weapons disposal.
Initial estimates to comply with the United Nations-sponsored Chemical Weapons Convention were based on a 2007 completion date. However, we recently reported that while 90 percent of the stockpile could be destroyed by the 2007 deadline, schedule slippages associated with the remaining 10 percent are likely to occur because of additional time required to validate, certify, and obtain approval of technologies to dispose of the remaining stockpile of chemical weapons. These schedule slippages will likely result in additional program costs. Historically, schedule delays have been found to increase costs such as labor, emergency preparedness, and program management. Inconsistent methodologies and inadequate documentation—Each military service independently estimated its liabilities with, in some cases, significantly different results, and the lack of documentation hampered auditors’ ability to verify the estimates. For example, although the Air Force reported twice as many aircraft as the Navy, it has not yet reported environmental and disposal liabilities for its aircraft. The Navy’s financial statements included an initial estimate of $331 million in fiscal year 1999 for its disposal of fixed- and rotary-wing aircraft. In addition, our limited analysis of DOD’s first-time effort to develop complete cleanup cost estimates for training ranges, which we view as an important step forward, showed that the reported amount of $34 billion consisted primarily of cost estimates for active, inactive, and closed Navy/Marine Corps ranges of approximately $31 billion. The Navy reported this to be a minimum estimate based on assumptions of “low” contamination and cleanup/remediation to “limited public access” levels, for uses such as livestock grazing or wildlife preservation but not for human habitation. Based on these assumptions, the Navy used a cost factor of $10,000 per acre. Although the Army also has significant exposure for training range cleanup liabilities, it reported only $2.4 billion for ranges on formerly used defense sites and closed ranges on active installations. The Army assumed one closed training range per base for the active installations. However, because the Army has not developed a complete range inventory nor recorded any liability for active or inactive ranges, this approach may have significantly understated its liability. To illustrate the potential magnitude of Army training range cleanup, applying the cost factor used by the Navy to estimated range acreage of the Army’s National Training Center at Ft. Irwin, California, would result in a cleanup cost estimate of approximately $4 billion for that installation alone. Further, DOD has had ongoing problems in adequately documenting its reported liability—an important control in ensuring its reliability. Last year, the DOD IG reported that the basis of estimates for significant recorded liabilities—primarily those related to restoration (cleanup) of sites contaminated from prior operations—was not adequately supported, and those problems persist. Military service auditors continue to find that significant portions of the reported restoration liabilities lack adequate support for the basis of cost estimates. For example, the Army Audit Agency found that the Army lacked support for its estimates and attributed it to the fact that recent guidance on documentation requirements was not properly disseminated to project managers and others preparing project cost estimates.
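The Ft. Irwin figure above is straightforward cost-factor arithmetic. The sketch below restates it; the acreage shown is an assumption back-calculated from the approximately $4 billion estimate cited above, not an official range inventory number.

```python
# Illustrative arithmetic only: the Navy's reported cost factor applied to an
# assumed range acreage. The acreage is a placeholder consistent with the
# roughly $4 billion figure cited in the testimony.
NAVY_COST_PER_ACRE = 10_000          # dollars per acre, "low" contamination assumption
assumed_range_acres = 400_000        # hypothetical estimated range acreage

estimate = NAVY_COST_PER_ACRE * assumed_range_acres
print(f"Estimated cleanup cost: ${estimate / 1e9:.1f} billion")  # ~$4.0 billion
```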
DOD and the Congress are looking at numerous options to provide more—and more cost-effective—health care to military personnel upon their retirement. Currently, there are several pilot programs underway to test the feasibility of providing additional health care benefits to retirees over 65 years, including the Medicare Subvention demonstration and the TRICARE Senior Supplement project. Congress is now considering expanding these pilot programs to cover greater numbers of retirees or extending the length of the trial periods. The Congress is also considering expanding coverage of certain benefits, such as for pharmaceuticals, to Medicare eligible retirees. Reliable financial and patient care data would enhance the ability of DOD and the Congress to consider medical care options. DOD estimates that, based on its current benefit programs, the cost of providing future health care benefits for military retirees and their dependents will be $196 billion; however, we have previously testified that this estimate is unreliable because DOD does not have accurate or complete cost and patient care information. DOD developed its estimate using an actuarial model that relies on historical information about the retiree population and the numbers, types, and costs of medical services provided to them. The model also uses economic, actuarial, and other assumptions, such as future interest rates and projected rate increases for medical costs. Improvements to the underlying data or assumptions can significantly change the liability estimate. DOD has made meaningful progress in improving the processes and underlying data on which its liability is based. For example, when better and more complete data about DOD’s population, medical care costs, and outpatient clinic usage were used in the model in fiscal year 1999, the revised estimate was lower by $37.5 billion, or nearly 17 percent, than the fiscal year 1998 estimate. DOD has used its health care model to determine the long-term impacts of some benefit changes; for example, DOD recently calculated the long-term change in the liability of a proposal to provide eligibility for purchased care to retirees over 65. With better underlying data and some refinements to its methodology, DOD’s model could be a valuable tool to both the department and the Congress for estimating the short-term, as well as long-term, budgetary impacts of complex changes to the retiree health benefits program. DOD has been using a similar model to calculate its long-term liability for military retiree pensions for many years, and both DOD and the Congressional Budget Office rely on the model to analyze the impact of changes to the retirement program. As we testified in May 2000, DOD needs to improve the underlying data used by the model. First, DOD needs actual cost data for its military treatment facilities. DOD has been using budget obligation information as a surrogate; however, obligations do not reflect the full cost of providing health care because they do not, for example, include civilian employee retirement benefits that are paid directly out of the Civil Service Retirement and Disability Fund rather than by DOD. Nor do obligations include depreciation costs for medical facilities and equipment. In addition, DOD needs to improve the accessibility and reliability of its patient workload information. The DOD IG has reported that medical services could not be validated either because the medical records were not available or outpatient visits were not adequately documented.
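To show why the underlying data and assumptions matter so much, the following sketch implements a deliberately simplified present-value calculation of the general type an actuarial model performs. All inputs (population, per-person cost, medical trend rate, discount rate) are hypothetical and are not DOD's figures or its model; the point is only that modest changes in the trend or discount assumptions move the estimate by tens of billions of dollars, which is why improvements to the underlying data can produce swings like the $37.5 billion change noted above.

```python
def retiree_health_liability(population_by_year, cost_per_person_today,
                             medical_trend, discount_rate):
    """Very simplified present-value estimate of future retiree health costs.

    population_by_year: projected covered retirees/dependents for each future
        year (index 0 = next year).
    cost_per_person_today: current average annual cost per covered person.
    medical_trend: assumed annual growth rate of medical costs.
    discount_rate: assumed interest rate used to discount future costs.
    """
    liability = 0.0
    for t, population in enumerate(population_by_year, start=1):
        projected_cost = cost_per_person_today * (1 + medical_trend) ** t
        liability += population * projected_cost / (1 + discount_rate) ** t
    return liability

# Hypothetical inputs chosen only to show sensitivity to assumptions.
population = [1_500_000] * 40  # flat covered population over 40 years
base = retiree_health_liability(population, 3_500, medical_trend=0.05, discount_rate=0.06)
higher_trend = retiree_health_liability(population, 3_500, medical_trend=0.07, discount_rate=0.06)
print(f"Base estimate:      ${base / 1e9:.0f} billion")
print(f"With 7% cost trend: ${higher_trend / 1e9:.0f} billion")
```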
The DOD IG also reported that outpatient visits are often double counted and that many telephone consultations have been incorrectly counted as visits. An accurate count of patient visits by clinic and type is necessary for DOD to make the proper allocations of medical personnel, supplies, and funding. DOD has been working with the audit community on health care cost and workload data deficiencies and currently has several improvement efforts underway. DOD has been using examples of blatant data errors, such as negative costs for some surgery clinics and obstetric services provided to male patients, to stress to its own staff and to health care contractors the importance of its improvement efforts. We are currently working with a contractor to assess DOD’s retiree health benefits estimation methodology, and preliminary results indicate several areas where the model could be refined. DOD is currently assessing the feasibility and impact of making the following types of refinements. Pharmacy costs for retirees are currently not segregated from those of non-retirees, even though preliminary evidence suggests that retirees use more outpatient pharmacy resources. Also, the future trend rate used by DOD for pharmacy costs is the same as that for general medical costs, even though we previously estimated that DOD pharmacy costs increased 13 percent from 1995 through 1997 while its overall health care costs increased only 2 percent for the same period. In the past, DOD has assumed that numbers and types of clinic visits are adequate measures of outpatient health care usage for purposes of allocating health care costs to retiree and active duty populations; however, additional work may show that diagnosis related information is a better indicator of health resources usage because retirees may have more complicated diseases and therefore require longer and more resource intensive procedures. DOD’s model currently does not calculate separate liabilities for retirees under and over 65 years old. DOD applies the same cost and economic assumptions to the two groups even though Medicare eligible retirees are offered different benefits than retirees under age 65 and therefore, their behavior, needs, and costs could be quite different. DOD relies on various information systems to carry out its important stewardship responsibility over an estimated $1 trillion in physical assets, ranging from multimillion dollar weapon systems to enormous inventories of ammunition, stockpile materials, and other military items. These systems are the primary source of information for (1) maintaining visibility over assets to meet military objectives and readiness goals and (2) financial reporting. However, these systems have material weaknesses that, in addition to hampering financial reporting, impair DOD’s ability to maintain central visibility over its assets and prevent the purchase of assets already on hand. Overall, these weaknesses can seriously diminish the efficiency and economy of the military services’ support operations. In addition, DOD’s systems are not designed to capture the full cost of its assets, a major component in determining the total costs of its programs and activities. If reliable, such costs could be important tools for oversight and performance measurement. Significant weaknesses in accountability and cost information for DOD’s three major categories of assets include the following. 
Weapons systems—The reported cost of this equipment in fiscal year 1997—the last year for which such information was reported on DOD’s balance sheet—was more than $600 billion. We have previously testified that many of the military services’ logistics information systems used to track and support weapon systems and support equipment could not be relied on. DOD continues to experience problems in accumulating and reporting accurate information on its national defense equipment. For example, because the military services cannot identify all of their assets through a centralized system, each service had to supplement its automated data with manual procedures to collect the information. Items identified as a result of the fiscal year 1999 data call that were not included in the Army’s centralized systems included 56 airplanes, 32 tanks, and 36 Javelin command-launch units. In addition, the military services have historically been unable to maintain information on additions and deletions for most of their national defense assets. While some progress has been made toward improving this data, auditors found that much of it was still unreliable for fiscal year 1999. Reliable information on additions and deletions is an important internal control to ensure accountability over assets. Without integrated accounting, acquisition, and logistics systems to provide accounting controls over asset balances, this control is even more important. For example, property managers should be able to review information on additions to ensure that all assets acquired are reported in logistics systems. If such a control is not in place, DOD cannot have assurance that all items purchased are received and properly recorded. Because of the recognized problems with national defense asset information, the audit community in the past year focused on supporting and reviewing improvement efforts, rather than conducting any significant tests of data and systems. Under the National Defense Authorization Act for Fiscal Year 2000, the DOD Inspector General is required to review national defense asset data submitted to the Congress for fiscal year 1999. Such a review should help determine the success of DOD’s improvement efforts so far, as well as identify those areas requiring further improvement. In addition, DOD has acknowledged that the lack of a cost accounting system is the single largest impediment to controlling and managing weapon systems costs, including costs of acquiring, managing, and disposing of weapons systems. Accurate information on the life-cycle costs of weapon systems would allow DOD officials and the Congress to make more fully informed decisions about which weapons, or how many, to buy. Properly accounting for the revenue associated with the sale of these assets has also been a significant financial management challenge. Since October 1998, we have issued four reports identifying internal control weaknesses in DOD’s foreign military sales program that includes sales of national defense assets and services to eligible foreign countries. Most recently, on May 3, 2000, we reported that the Air Force did not have adequate controls over its foreign military sales to ensure that foreign customers were properly charged. Specifically, our analysis of data contained in the Defense Finance and Accounting Service’s Defense Integrated Financial System as of July 1999, indicated that the Air Force might not have charged FMS customer trust fund accounts for $540 million of delivered goods and services.
In performing a detailed review of $96.5 million of these transactions, we found that the Air Force was able to reconcile about $20.9 million. However, for the remaining $75.6 million, the Air Force had either failed to charge customer accounts ($5.1 million, 22 transactions); made errors, such as incorrectly estimating delivery prices ($44 million, 11 transactions); or could not explain differences between the recorded value of delivered goods and services and the corresponding value of charges to customer accounts ($26.5 million, 19 transactions). Inventory—DOD’s inability to account for and control its huge investment in inventories effectively has been an area of major concern for many years. In its fiscal year 1999 financial statements, DOD reported $128 billion in inventory and related property. The sheer volume of DOD’s on-hand inventories impedes the department’s efforts to accumulate and report accurate inventory data. We reported in our January 1999 high-risk report on defense inventory management that the department needs to avoid burdening its supply system with large inventories not needed to support current operations or war reserves. For example, our analysis of approximately $63 billion of DOD’s reported secondary inventory at September 30, 1999, showed that 58 percent of the reviewed items, or an estimated $36.9 billion, exceeded these requirements. Further, during the fourth quarter of fiscal year 1999, only 2 of the Defense Logistics Agency’s (DLA) 20 distribution depots reported accuracy rates above 90 percent, and overall accuracy was reported at 83 percent, with error rates ranging from 6 percent to 28 percent. DLA’s goal is 95 percent accuracy. The lack of complete visibility over inventories increases the risk that responsible inventory item managers may request funds to obtain additional, unnecessary items that may be on-hand but not reported. Control weaknesses over inventory can lead to inaccurate reported balances, which could affect supply responsiveness and purchase decisions, and result in a loss of accountability. For example, during a December 1999 visit to one Army ammunition depot, we found weak internal controls over self-contained, ready-to-fire, handheld rockets, a sensitive item requiring strict controls and serial number accountability. As detailed in our recently issued report, we and depot personnel identified 835 quantity and location discrepancies associated with 3,272 rocket and launcher units contained in two storage igloos. The depot had more items on hand than shown in its records because of control weaknesses over receipt of items, and, in some cases, the records had location errors. Depot management responded immediately to our findings, and the depot subsequently accounted for and corrected the inventory records of all the rocket and launcher units. Regarding this problem, we identified potentially systemic weaknesses in controls and lack of compliance with federal accounting standards and inventory system requirements and made recommendations to the Army to establish and verify operating procedures to help ensure that systemic weaknesses are corrected. DOD has long-standing problems accumulating and reporting the full costs associated with working capital fund operations that provide goods and services in support of the military services, its primary customers.
The foundation for achieving the goals of these business-type funds is accurate cost data, which are critical for management to operate efficiently, measure performance, and maintain national defense readiness. With regard to inventory cost information, federal accounting standards require inventories to be valued based on historical costs or a method that approximates historical costs. However, DOD systems do not capture the information needed to report historical cost. Instead, inventory records and accounting transactions are maintained at a latest acquisition cost or a standard selling price. Inventory levels are also reported to the Congress at latest acquisition cost. Although latest acquisition cost data may be important for budget projection and purchase decisions, this information may not be appropriate for performance measurement. Latest acquisition cost can substantially differ from the cost paid for the item. To illustrate how this occurs, assume a military service had 10 items that cost $10 each, so each item would be valued at $10, or at $100 in total. However, if the service then purchased 1 new item at $25, all 11 items would be valued based upon the latest purchase price of $25, or $275 in total. The former Commander of Air Force Materiel Command testified in October 1999 that such valuation practices distort DOD’s progress toward reducing inventory levels and impact Congressional funding decisions. The Commander stated the following. “Part of the problem was accounting policy. …Each year, inventories of old spare parts were increased in value to reflect their latest acquisition price (the normal commercial practice is to deflate, not inflate, the value of long term assets). Many supply managers who faithfully disposed of unneeded inventory were surprised at the end of the year to see their total inventory value increase. As a result, they were subject to great pressure to further reduce inventory levels. . . . The new spares were needed but funding restrictions prevented purchase of these parts for several years.” Overall, the effect of increasing prices can be demonstrated by noting that the Air Force’s $32.6 billion of inventory at latest acquisition cost is revalued to $18.3 billion to reflect estimated historical costs. Real and personal property—Audit tests of real property transactions, additions, deletions, and modifications that occurred during fiscal year 1999 indicated that DOD continues to lack the necessary systems and processes to ensure that its real property assets are promptly and properly recorded in its accountability databases. For example, Army auditors reviewed about $408 million in real property transactions recorded during fiscal year 1999 and determined that $113 million of those transactions should have been posted in prior fiscal years. Army auditors also identified $43 million in unrecorded real property transactions. In addition, recent audits by the military service auditors have continued to find that while DOD regulations require periodic physical inventories and inspections—a critical control in safeguarding assets—they are not always performed as required. Air Force auditors reported that real property personnel did not perform required inventories at 34 of 99 installations audited in fiscal year 1999.
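The valuation difference in the Commander's example can be restated directly. The short sketch below uses the same figures cited above (10 items at $10, then 1 more at $25) to contrast a latest acquisition cost valuation with a historical cost valuation.

```python
# The example cited above: 10 items bought at $10 each, then 1 more at $25.
purchases = [(10, 10.00), (1, 25.00)]   # (quantity, unit price), oldest first

quantity_on_hand = sum(q for q, _ in purchases)
latest_price = purchases[-1][1]

# Latest acquisition cost: every unit revalued at the most recent price paid.
latest_acquisition_value = quantity_on_hand * latest_price      # 11 * $25 = $275

# Historical cost: each unit carried at what was actually paid for it.
historical_value = sum(q * price for q, price in purchases)     # $100 + $25 = $125

print(f"Latest acquisition cost valuation: ${latest_acquisition_value:.2f}")
print(f"Historical cost valuation:         ${historical_value:.2f}")
```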
To illustrate the benefit of physical inventories, while implementing the Navy’s new accountability system, the number of assets recorded in the accountability database at one Marine Corps location alone increased by over 35 percent as a result of wall-to-wall inventories. In addition, because DOD does not have the systems and processes in place to reliably accumulate costs, it is unable to account for several significant costs of its operations, including its facilities and equipment. Comprehensive and reliable asset financial information is necessary for determining the full cost of operations and can be useful for anticipating the need for additional budgetary resources. An analysis of reported asset balances and related depreciation can provide additional information to review specific budget requests. For example, the Navy reported that 85 percent, or approximately $1.2 billion, of its $1.4 billion of depreciated equipment reported on its fiscal year 1998 financial statements was fully depreciated. If Navy’s financial information accurately reflected asset accountability and utilization periods, this information could be used as a factor in analyzing Navy’s funding requests. Specifically, if the Navy’s fiscal year 1998 information were accurate, it would indicate that most of the Navy’s equipment is at or beyond its anticipated utilization period. This type of information could help support a funding request or, absent such a request, could be used to question whether operations would be impaired by the lack of needed capital equipment. Our audit of the U.S. government’s consolidated financial statements for fiscal year 1999 found that the government was unable to support significant portions of the $1.8 trillion reported as the total net cost of government operations. Federal accounting standards require federal agencies to accumulate and report on the full costs of their activities. DOD, which represents $378 billion of the $1.8 trillion, was not able to support its reported net costs. Although we have seen some improvements in DOD’s ability to produce reliable financial information, as noted throughout this testimony and discussed in greater detail in my May 9, 2000, testimony, capturing and accurately reporting the full cost of its programs remains one of the most significant challenges DOD faces. DOD needs reliable systems and processes to appropriately capture the required cost information from the hundreds of millions of transactions it processes each year. To do so, DOD must perform the basic accounting activities of entering these transactions into systems that conform to established systems requirements, properly classifying transactions, analyzing data processed in its systems, and reporting in accordance with requirements. As I will discuss later, this will require properly trained personnel, simplified processes, modern integrated systems supporting operational and accounting needs, and a disciplined approach for accomplishing these steps. Because it does not have the systems and processes in place to reliably accumulate costs, DOD is unable to account for several significant costs of its operations, as discussed in this testimony.
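If asset records were reliable, the kind of depreciation analysis described above (the Navy equipment example) would be simple to perform. The sketch below, using hypothetical equipment records, computes the share of equipment, by cost, that is fully depreciated, which is the figure the Navy reported as 85 percent.

```python
# Hypothetical equipment records: (asset, acquisition cost, accumulated depreciation).
equipment = [
    ("Asset A", 2_000_000, 2_000_000),   # fully depreciated
    ("Asset B",   800_000,   800_000),   # fully depreciated
    ("Asset C", 1_200_000,   400_000),   # still within its utilization period
]

total_cost = sum(cost for _, cost, _ in equipment)
fully_depreciated_cost = sum(cost for _, cost, depr in equipment if depr >= cost)
share = fully_depreciated_cost / total_cost

print(f"Share of equipment (by cost) at or beyond its depreciable life: {share:.0%}")
# A high share, like the 85 percent the Navy reported, could either support a
# recapitalization request or prompt questions about whether the underlying
# records reflect actual accountability and utilization.
```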
As I have highlighted today, the accuracy of the department’s reported operating costs was affected by DOD’s inability to complete the reconciliation of its records with those of the Department of the Treasury, identify the full extent of its environmental and disposal liability, determine its liability associated with post-retirement health care for military retirees and their dependents, properly value and capitalize its facilities and equipment, and properly account for and value its inventory. In addition, DOD did not have adequate managerial cost accounting systems in place to collect, process, and report its $378 billion in total reported fiscal year 1999 net operating costs by program area consistent with federal accounting standards. Instead, it used budget classifications, such as military construction, procurement, and research and development, to present its cost data. In general, the data DOD reported in its financial statements represented disbursement data for those budgetary accounts, adjusted for estimated asset purchases and accruals. For financial reports other than the financial statements, DOD typically uses obligation data as a substitute for cost. As I stated earlier, DOD budget data are also unreliable. To manage DOD’s programs effectively and efficiently, its managers need reliable cost information. This information is necessary to (1) evaluate programs, such as by measuring actual results of management’s actions against expected savings or determining the effect of long-term liabilities created by current programs, (2) make economic choices, such as whether to outsource specific activities and how to improve efficiency through technology choices, (3) control costs for its weapons systems and business activities funded through the working capital funds, and (4) measure performance. The lack of reliable, cost-based information hampers DOD in each of these areas as illustrated by the following examples. DOD is unable to provide actual data to fully account for the costs associated with functions studied for potential outsourcing under OMB Circular A-76. We reported last year on a long-standing concern over how accurately DOD’s in-house cost estimates used in A-76 competitions reflect actual costs. DOD has acknowledged that its Defense Reform Initiative efforts have been hampered by limited visibility into true ownership costs of its weapons systems. Specifically, the department cited inconsistent methods used by the military services to capture support cost data and failure to include certain costs as limiting the utility of existing weapons system cost data. As noted previously, DOD has also acknowledged that the lack of a cost accounting system is the single largest impediment to controlling and managing weapon systems costs, including costs of acquiring, managing, and disposing of weapon systems. DOD has long-standing problems accumulating and reporting the full costs associated with its working capital fund operations, which provide goods and services in support of the military services. Cost is a key performance indicator to assess the efficiency of working capital fund operations. For example, we recently reported that the Air Force’s Air Mobility Command—which operated using a working capital fund—lacked accurate cost information needed to set rates to charge its customers and assess the economy and efficiency of its operations.
We separately reported that Air Force depot maintenance officials acknowledged that they lack all the data needed to effectively manage their material costs. As a result, DOD is unable to reliably assess the economy and efficiency of its business-like activities financed with working capital funds. Establishing an integrated financial management system—including both automated and manual processes—will be key to reforming DOD’s financial management operations. DOD has acknowledged that its present system has long-standing inadequacies and does not, for the most part, comply with federal system standards. DOD has set out an integrated financial management system goal. Further, the department is now well-positioned to adapt the lessons learned from addressing the Year 2000 issue and our recently issued survey of the best practices of world-class financial management organizations and to use the information technology investment criteria included in the Clinger-Cohen Act of 1996. Establishing an integrated system is central to the framework for financial reforms set out by the Congress in the Chief Financial Officers (CFO) Act of 1990 and the Federal Financial Management Improvement Act (FFMIA) of 1996. Specifically, among the requirements of the CFO Act is that each agency CFO develop an integrated agency accounting and financial management system. Further, FFMIA provided a legislative mandate to implement and maintain financial management systems that substantially comply with federal financial management systems requirements, including the requirement that federal agencies establish and maintain a single, integrated financial management system. The department faces a significant challenge in integrating its financial management systems because of its size and complexity and the condition of its current financial management operations. DOD is not only responsible for an estimated $1 trillion in assets and liabilities, but also for providing financial management support to personnel on an estimated 500 bases in 137 countries and territories throughout the world. DOD has also estimated that it makes $24 billion in monthly disbursements, and that in any given fiscal year, the department may have as many as 500 or more active appropriations. Each service operates unique, nonstandard financial processes and systems. In describing the scope of its challenge in this area, DOD recognized that it will not be possible to reverse decades-old problems overnight. DOD submitted its first Financial Management Improvement Plan to the Congress on October 26, 1998. We reported that DOD’s plan represented a great deal of effort and provided a first-ever vision of the department’s future financial management environment. In developing this overall concept of its envisioned financial management environment, DOD took an important first step in improving its financial management operations. DOD’s 1999 update to its Financial Management Improvement Plan set out an integrated financial management system as the long-term solution for establishing effective financial management. As part of its 1999 plan, DOD reported that it relies on an inventory of 168 systems to carry out its financial management responsibilities. This financial management systems inventory includes 98 finance and accounting systems and 70 critical feeder systems—systems owned and operated by functional communities throughout DOD, such as personnel, acquisition, property management, and inventory management.
The inclusion of feeder systems in the department’s inventory of financial management systems is a significant landmark because of the importance of the programmatic functions to the department’s ability to carry out not only its financial reporting but also its asset accountability responsibilities. The department has reported that an estimated 80 percent of the data needed for sound financial management comes from these feeder systems. However, DOD has also acknowledged that, overall, its financial management systems do not comply with the FFMIA federal financial management systems requirements. DOD presently lacks the integrated, transaction-driven, double-entry accounting systems that are necessary to properly control assets and accumulate costs. As a result, millions of transactions must be keyed and rekeyed into the vast number of systems involved in a given business process. To illustrate the degree of difficulty that DOD faces in managing these complex systems, the following figure shows for one business area—contract and vendor payments—the number of systems involved and their relationship to one another. In addition to the 22 financial systems involved in the contract payment process that are shown in figure 1, DFAS has identified many other critical acquisition systems used in the contract payment process that are not shown on this diagram. To further complicate the processing of these transactions, each transaction must be recorded using a nonstandard, complex line of accounting that accumulates appropriation, budget, and management information for contract payments. Moreover, the line of accounting code structure differs by service and fund type. For example, the following line of accounting is used for the Army’s Operations and Maintenance appropriation. Because DOD’s payment and accounting processes are complex, and generally involve separate functions carried out by separate offices using different systems, the line of accounting must be manually entered multiple times, which compounds the likelihood of errors. An error in any one character in such a line of code can delay payment processing or affect the reliability of data used to support management and budget decisions. In either case, time-consuming research must then be conducted by DOD staff or by contractor personnel to identify and correct the error. Over a period of 3 years, one DOD payment center spent $28.6 million for a contractor to research such errors. The combination of nonintegrated systems, extremely complex coding of transactions, and poor business processes has resulted in billions of dollars of adjustments to correct transactions processed for functions such as inventory and contract payments. As stated previously, during fiscal year 1999, almost one of every three dollars in contract payment transactions was made to adjust a previously recorded transaction. In addition, the DOD IG found that $7.6 trillion of adjustments to DOD’s accounting transactions were required last year to prepare DOD’s financial statements. As we testified last year, DOD has a unique opportunity to capitalize on the valuable lessons it has learned in addressing the Year 2000 issue and apply them to its efforts to reform financial management. The Year 2000 approach is based on managing projects as critical investments and uses a structured five-phase process, including awareness, assessment, renovation, validation, and implementation.
Each phase represents a major program activity or segment that includes (1) specific milestones, (2) independent validation and verification of system compliance, and (3) periodic reporting on the status of technology projects. During the department’s Year 2000 effort, DOD followed this structured approach and (1) established interim dates or milestones for each significant aspect of the project, (2) used auditors to provide independent verification and validation of systems compliance, and (3) periodically reported the status of its efforts to OMB, the Congress, and the audit community. To successfully adapt this structured, disciplined process to DOD’s current financial management improvement initiatives, DOD must ensure that the lessons learned in addressing the Year 2000 effort and from our financial management best practices survey are effectively applied. In this regard, two important lessons should be drawn from the Year 2000 experience—the importance of (1) focusing on process improvement instead of systems compliance and (2) strong leadership at the highest levels of the department to ensure the reform effort becomes an entitywide priority. Establishing the right goal is essential for success. Initially, DOD’s Year 2000 focus was on information technology and systems compliance. This process was geared toward ensuring compliance system by system and did not appropriately consider the interrelationship of all systems within a given business process. However, DOD eventually shifted to a core mission and function approach and greatly reduced its Year 2000 risk through a series of risk mitigation measures including 123 major process end-to-end evaluations. Through the Year 2000 experience, DOD has learned that the goal of systems improvement initiatives should be improving end-to-end business processes, not systems compliance. This concept is also consistent with provisions of the Clinger-Cohen Act of 1996 and related system and software engineering best practices, which provide federal agencies with a framework for effectively managing large, complex system modernization efforts. This framework is designed to help agencies establish the information technology management capability and controls necessary to effectively build modernized systems. For example, the act requires agency chief information officers to develop and maintain an integrated system architecture. Such an architecture can guide and constrain information system investments, providing a systematic means to preclude inconsistent system design and development decisions and the resulting suboptimal performance and added cost associated with incompatible systems. The act also requires agencies to establish effective information technology investment management processes whereby (1) alternative solutions are identified, (2) reliable estimates of project costs and benefits are developed, and (3) major projects are structured into a series of smaller increments to ensure that each constitutes a wise investment. The financial management concept of operations included in DOD’s Financial Management Improvement Plan should fit into the overall system architecture for the department developed under the provisions of the Clinger-Cohen Act. In addition, the goal of DOD’s Financial Management Improvement Plan should be to improve DOD’s business processes in order to provide better information to decisionmakers and ensure greater control and accountability over the department’s assets.
However, as we reported last year, the vision and goals the department established in its Financial Management Improvement Plan fell short of achieving basic financial management accountability and control and did not position DOD to adopt financial management best practices in the future. Although the 1999 improvement plan includes more detailed information on the department's hundreds of improvement initiatives, the fundamental challenges we highlighted last year remain. Specifically, a significant effort will be needed to ensure that future plans address (1) how financial management operations will effectively support not only financial reporting but also asset accountability and control, (2) how financial management ties to budget formulation, (3) how the planned and ongoing improvement initiatives will result in the target financial management environment, and (4) how feeder systems' data integrity will be improved—an acknowledged major deficiency in the current environment. For example, to effectively support accountability and control, DOD's plan needs to define each of its business processes and discuss the interrelationships among the functional areas and related systems. To illustrate, the plan should address the entire business process for property from acquisition to disposal and the interrelationships among the functional areas of acquisition, property management, and property accounting. In its 1999 Financial Management Improvement Plan, dated September 1999, the department announced its intention to develop a "Y2K-like" approach for tracking and reporting the CFO compliance of its financial management systems, including critical feeder systems. However, the department currently has hundreds of individual initiatives aimed at improving financial management, many of which were begun prior to the decision that a Year 2000 approach would be used for financial management reform. These decentralized, individual efforts must now be brought under the disciplined structure envisioned by the Clinger-Cohen Act and used previously during the department's Year 2000 effort. Doing so will ensure that further investments in these initiatives will be consistent with Clinger-Cohen Act investment criteria and that the department's financial management reform efforts focus on entire business processes and needed process improvements. Because of the extraordinarily short time frames involved for the Year 2000 effort, the department rarely had the opportunity to evaluate alternatives such as eliminating systems and reengineering related processes. DOD has established a goal of September 30, 2003, for completing its financial management systems improvement effort. This time frame provides a greater opportunity to consider all available alternatives, including reengineering business processes in conjunction with the implementation of new technology, which was envisioned by the Clinger-Cohen Act. Lessons learned from the Year 2000 effort and from our survey of leading financial management organizations also stressed the importance of strong leadership from top leaders. Both these efforts pointed to the critical role of strong leadership in making any goal—such as financial management and systems improvements—an entitywide priority. As we have testified many times before, strong, sustained executive leadership is critical to changing the culture and successfully reforming financial management at DOD. 
Although it is the responsibility of the DOD Comptroller, under the CFO Act, to establish the mission and vision for the future of DOD financial management, the department has learned through its Year 2000 effort that major initiatives that cut across DOD components must have the leadership of the Secretary and Deputy Secretary of Defense to succeed. In addition, our best practices work has shown that chief executives similarly need to periodically assess investments in major projects in order to prioritize projects and make sound funding decisions. Improving DOD financial management is a managerial, as well as technical, challenge. The personal involvement of the Deputy Secretary played an important role in building entitywide support for Year 2000 initiatives by linking these improvements to the warfighting mission. To energize DOD, the Secretary of Defense directed the DOD leadership to treat Year 2000 as a readiness issue. This turning point ensured that all DOD components understood the need for cooperation to achieve success in preparing for Year 2000, and it galvanized preparedness efforts. Similarly, to gain DOD-wide support for financial management systems initiatives, DOD's top leadership must link the improvement of financial management to DOD's mission. For example, DOD stated in its Defense Reform Initiative that improved business practices will eventually provide a major source of funding for weapon system modernization. This can occur through reductions in the cost of performing these activities as well as through efficiencies gained through better information. Ensuring that this mission objective is realized will require top leadership involvement to reinforce the relationship between good financial management and improved mission performance. To build this support across the organization, many leading organizations have developed education programs that provide financial managers a better understanding of the business problems and nonfinancial managers an appreciation of the value of financial information to improved decision-making. As discussed below, DOD is taking these first steps in providing training to its financial personnel, and DOD officials have recently stated that their next annual financial management improvement plan will begin to address the need for financial management training for nonfinancial managers. An integral part of financial and information management is building, maintaining, and marshaling the human capital needed to achieve results. While DOD has several initiatives underway directed at improving the competencies and professionalism of its financial management workforce, it has not yet embraced a strategic approach to improving its financial management human capital. Our recently issued guide on the results of our survey of the best practices of recognized world-class financial management organizations shows that a strategic approach to human capital is essential to reaching and maintaining maximum performance. DOD's 1999 Financial Management Improvement Plan recognized the key role of financial management training in ensuring that the department has a qualified and competent workforce. The DOD Comptroller recently issued a memorandum to the department's financial management community emphasizing the importance of professional training and certification in helping to ensure that its financial managers are well-qualified professionals. 
Consistent with this recent emphasis, the department has begun several initiatives aimed at improving the professionalism of its financial management workforce. For example, DFAS contracted to have government financial manager training, developed by the Association of Government Accountants, provided to several thousand of its employees over the next 5 years. This training is aimed at enhancing participants' knowledge of financial management and can then be used to prepare for a standardized exam to obtain a professional certification, such as the Certified Government Financial Manager (CGFM)—a designation being encouraged by DOD management. In another initiative, undertaken in conjunction with the American Society of Military Comptrollers, the department reports that it expects to have its own examination-based certification program for a defense financial manager in place in the near future. The department has contracted with the USDA Graduate School—a continuing education institution—to provide financial management training to an estimated 2,000 DOD financial personnel in fiscal year 2000 and thousands more over the next 5 years. The department reports that this training will be directed at helping participants to develop sufficient knowledge so that they can demonstrate competencies in governmentwide accounting and financial management systems requirements as they are applied in the DOD financial management environment. The department is faced with a considerable challenge if it is to improve its financial management human capital to the performance-based level of financial management personnel operating as partners in the management of world-class organizations. While DOD's financial personnel are now struggling to effectively carry out day-to-day transaction processing, personnel in world-class financial management organizations are providing analysis and insight about the financial implications of program decisions and the impact of those decisions on agency performance goals and objectives. To help agencies better implement performance-based management, we have identified common principles that underlie the human capital strategies and practices of leading private sector organizations. Further, we have issued a human capital self-assessment checklist for agency leaders to use in taking practical steps to improve their human capital practices. In closing, as we have noted throughout this testimony, DOD continues to make incremental improvements to its financial management systems and operations. At the same time, the department has a long way to go to address the remaining problems. Overhauling DOD's financial systems, processes, and controls and ensuring that personnel throughout the department share the common goal of improving DOD financial management will require sustained commitment from the highest levels of DOD leadership—a commitment that must extend to the next administration. Mr. Chairman, this concludes my statement. We will be glad to answer any questions you or the other Members of the Task Force may have at this time.
Pursuant to a congressional request, GAO discussed financial management issues at the Department of Defense (DOD). GAO noted that: (1) to date no major part of DOD has yet been able to pass the test of an independent audit--auditors consistently have issued disclaimers of opinion because of pervasive weaknesses in DOD's financial management systems, operations, and controls; (2) such problems led GAO in 1995 to put DOD financial management on a list of high-risk areas vulnerable to waste, fraud, abuse, and mismanagement, a designation that continued in last year's update; (3) lacking such key controls and information not only hampers the department's ability to produce timely and accurate financial information, but also significantly impairs efforts to improve the economy and efficiency of its operations; (4) unreliable cost and budget information affects DOD's ability to effectively measure performance, reduce costs, and maintain adequate funds control, while ineffective asset accountability and control adversely affect DOD's visibility over weapons systems and inventory; (5) establishing an integrated financial management system--including both automated and manual processes--will be key to reforming DOD's financial management operations; (6) DOD has acknowledged that its present system has long-standing inadequacies and does not, for the most part, comply with the federal system standards; (7) DOD has set out an integrated financial management system goal; and (8) further, the department is now well-positioned to adapt the lessons learned from addressing the year 2000 issue and GAO's recently issued survey of the best practices of world-class financial management organizations and to use the information technology investment criteria included in the Clinger-Cohen Act of 1996.
Afghanistan is a very poor and underdeveloped country that has suffered from instability and war for three decades. The United States and its allies removed the ruling Taliban regime following the September 11, 2001, terrorist attacks on the United States. The new Afghan government inherited a country with limited capacity to govern and a poorly developed infrastructure. About 70 percent of the population is illiterate. According to Transparency International, Afghanistan is the world's fifth most corrupt country. Its police do not respect human rights, according to the Fund for Peace. The Ministry of Interior (MOI) and Afghan National Police (ANP) have a history of corruption, and much of Afghanistan lacks a functioning judicial sector. The United States and other international partners agreed in 2006 to establish a professional Afghan police service committed to the rule of law, shortly after the United States assumed the lead in reforming the MOI and ANP. U.S. goals for the MOI include ensuring that it is competent and efficient, provides strong and effective leadership, and has the organizational structures needed to reform, manage, and sustain the police. U.S. goals for the ANP include ensuring that it is fully constituted, professional, and functional; trained and equipped to uphold the rule of law; and able to effectively meet Afghan security needs, including controlling movement across Afghanistan's borders. In 2006, the United States, Afghanistan, and other international partners outlined goals for the ANP in the Afghanistan Compact. The stated goals of the compact include the establishment of, by the end of 2010, a professional and functional ANP that can meet the security needs of the country effectively and be increasingly fiscally sustainable. The United States views an effective Afghan police force as critical to extending rule of law in Afghanistan and improving Afghan security. U.S. efforts to help Afghanistan reform the MOI and ANP are directed by the Department of Defense (Defense) through the Combined Security Transition Command-Afghanistan (CSTC-A), which is also charged with training the Afghan National Army. The Department of State (State) provides policy guidance for CSTC-A's police program and oversees the civilian contractors that implement police training. To date, the United States has provided about $6.2 billion to train and equip the ANP. To achieve U.S. goals, CSTC-A has set objectives for institutional, organizational, and individual reform: Institutional reform is intended to ensure that MOI is run by a professional and adequately trained staff that can manage and sustain a national police force. Organizational reform is aimed at ensuring ANP units have sufficient capacity to maintain domestic order and are responsive to the local population's needs. Individual reform seeks to ensure that the MOI and ANP consist of trained, competent, and credible individuals dedicated to public service who are accountable and transparent in their actions. The United States works with several international partners in supporting reform, including the following organizations: The European Union Police Mission in Afghanistan (EUPOL) is intended to bring together European national efforts to support police reform in Afghanistan. The United Nations Law and Order Trust Fund for Afghanistan (LOTFA) was established by the United Nations Development Program in May 2002 and provides funds for ANP salaries. As of November 2008, LOTFA had received about $653.4 million from 20 international donors, including the United States. The United Nations Assistance Mission to Afghanistan (UNAMA) assists in leading international efforts to rebuild the country. The MOI and ANP have a total authorized force level of about 82,000. The ANP consists of six components. 
As shown in figure 2, the largest of these is the Afghan Uniformed Police, which serve as local police and perform routine policing duties in Afghanistan's 365 police districts. These districts are organized into five regional zones (North, East, West, South, and Central) and a sixth zone for the capital city of Kabul. According to State and Defense, the zone commanders report to the Chief of the Afghan Uniformed Police, who reports to the Deputy Minister of Interior for Security. (See apps. II and III for further information on the structure of the MOI and ANP.) Among the authorized force levels shown in figure 2 are the Afghan Uniformed Police (44,801), the Afghan Border Police (17,676), and Ministry of Interior Headquarters (5,943). Afghan police and other security forces are facing increasing attacks by insurgent forces. As shown in figure 3, attacks on Afghan security forces (including the ANP and the Afghan National Army) increased sixfold from October 2003 to October 2008, according to DOD. The number of attacks rose nearly threefold in 1 year, from 97 attacks in October 2007 to 289 in October 2008. The ANP has suffered significant casualties in recent years. According to Defense, at least 3,400 police have been wounded or killed in action since January 2007. In June 2008, a Defense official testified that ANP combat losses during 2007 were roughly three times more than those of the Afghan National Army. Defense data indicate that the ANP suffered between 19 and 101 fatalities per month over a recent 23-month period (see fig. 4)—an average of 56 police killed in action per month. U.S. agencies have helped Afghanistan restructure the MOI and ANP officer corps, modify ANP pay rates, and plan a reorganization of MOI headquarters. CSTC-A has also acted to better coordinate international mentoring of MOI officials. These efforts were intended to help ensure that the MOI and ANP are directed by professional staff that can successfully manage and sustain a national police force in Afghanistan. The officer corps reform program reduced the oversized MOI-ANP officer corps from about 17,800 to about 9,000 personnel, reformed the ANP's top-heavy rank structure, and increased police pay. In a separate effort, CSTC-A and MOI worked together to develop a plan for increasing MOI's efficiency by restructuring the ministry and reducing its staff. In addition, CSTC-A and other international partners have adopted a plan to address problems affecting their efforts to build MOI staff capacity through mentoring. U.S.-supported efforts to restructure the MOI and ANP are intended to promote institutional and organizational reform and to help ensure that the MOI and ANP are directed by professional staff that can successfully manage and sustain a national police force in Afghanistan. The programs have been aimed at addressing problems concerning the size and pay structure of the MOI and ANP officer corps, MOI's organization and capacity, and mentoring of MOI officials. According to U.S. officials, the MOI-ANP officer corps was top heavy. It consisted of nearly 18,000 individuals, including more than 3,000 generals and colonels. ANP personnel were also paid less than Afghan National Army personnel, creating recruitment and retention challenges for the ANP. MOI headquarters suffers from numerous organizational deficiencies, according to U.S. officials. The U.S. Embassy concluded in 2007 that MOI suffered from corruption, limited control over provincial police structures, and low institutional capacity at all levels. 
CSTC-A reported in 2008 that MOI lacked a clear organizational structure, basic management functions, and an overall strategy for policing. CSTC-A also reported that MOI's departments did not have clearly defined missions and did not communicate and coordinate with one another. State has reported that MOI lacks a culture of accountability and transparency. According to State police contractors, MOI's organization has contributed to pervasive violations of its chain of command and to a lack of accountability in ANP districts and provincial commands. MOI's lack of clearly defined lines of authority and areas of responsibility weakens its ability to combat fraud through effective internal controls. To help address MOI's weak institutional capacity, the United States and other international partners initiated efforts to mentor individual MOI officials but did not coordinate these efforts. CSTC-A reported in 2008 that international partners provided more than one mentor to some officials—despite a limited number of available mentors—while providing none to others. For example, one MOI commander had four mentors from two different countries at a time when four senior-level MOI officials had none. CSTC-A also found that donors had not always aligned mentor skills with the needs of MOI officials and had not established a single communication chain to share information and coordinate mentor activities. The United States and MOI have restructured and reduced the rank structure of the MOI and ANP officer corps while increasing police pay scales. The rank reform program cut the total number of officer positions from about 17,800 to about 9,000 and reduced the number of the highest-ranking officers (generals and colonels) by nearly 85 percent. A board of MOI officials selected officers for retention with help from CSTC-A and U.S. Embassy officials. The rank reform program significantly altered the structure of the officer corps, as shown in figure 5. The reduction in the MOI and ANP officer corps was accompanied by substantial increases in ANP pay, as shown in table 2. The new pay rates are on a par with those of the Afghan National Army. In 2008, CSTC-A, MOI, and international partner officials developed a plan for restructuring MOI headquarters. Their goals in developing the plan included increasing efficiency, streamlining organization, improving coordination, creating conditions to mitigate corruption, and reducing headquarters staff by 25 percent. The plan's implementation was delayed by political resistance within MOI, according to CSTC-A. MOI was originally to have begun implementing the restructuring plan in September 2008. However, CSTC-A informed us that some MOI departments were concerned that they would lose power and personnel as a result of restructuring. The plan's implementation was further delayed by the removal of the former Minister of Interior, according to CSTC-A. The plan was approved in late December 2008 by the new Minister of Interior, and implementation is scheduled to begin in March 2009. As approved, the restructuring plan provides for a 7 percent reduction in staff, rather than the 25 percent reduction goal originally set by CSTC-A, MOI, and the international partners. CSTC-A and other international partners have agreed on a plan to better coordinate U.S. and international efforts to mentor MOI officials. 
CSTC-A and other international partners sought to define mentor roles and required skill sets, outline the international partners best suited to support mentoring requirements, establish a personnel management process to facilitate mentor assignments, and identify information and reporting requirements for mentors. The goal of their effort was to reach an agreement to support an integrated mentor program within MOI’s headquarters. In the final plan, which was approved in January 2009, CSTC-A and other international partners agreed to provide an organizational framework to manage the mentoring program, agree on the allocation of mentors according to rationally derived priorities, and optimize the match between mentors’ skill sets and position requirements. CSTC-A has begun retraining ANP through its Focused District Development (FDD) program, which is intended to build professional and fully capable police units. FDD is achieving promising results in most participating districts, according to Defense status reports. In February 2009, Defense assessed 19 percent of units retrained through the FDD program as capable of conducting primary operational missions, 25 percent as capable of conducting primary operational missions with international support, 31 percent as capable of partially conducting primary operational missions with international support, and 25 percent as not yet capable of conducting primary operational missions. However, a shortage of military personnel is constraining CSTC-A’s plans to expand FDD and similar programs into the rest of Afghanistan by the end of 2010. Defense has identified a shortage of about 1,500 military personnel to expand FDD and similar police development programs. CSTC-A has previously obtained military personnel for the FDD program and ANP training by redirecting such personnel from resources intended for its Afghan National Army training program. However, the Afghan army program’s demand for military personnel is likely to grow due to the recent decision by the United States, Afghanistan, and international partners to increase the Afghan army from 80,000 to 134,000 individuals. The goal of the FDD program is to enhance ANP organizational and individual capability by training all uniformed police in a district as a unit. According to State and Defense officials, corruption and local loyalties hampered past efforts to train individuals. Under the previous approach, the effects of individual training were diluted when trainees returned to corrupt police stations staffed by poorly trained personnel with little loyalty to the central government. We reported in 2005 that some returning trainees had been forced by their commanders to give their new equipment to more senior police and to help extort money from truck drivers and travelers. In 2008, State reported that the effects of previous police training had been diluted when newly trained police were reinserted to an unreformed environment. The FDD program differs from previous efforts to train ANP because it focuses on retraining entire districts and not individuals. In implementing the FDD program in a district, CSTC-A assesses the district’s organization, training, facilities, and judicial infrastructure before removing the police unit from the district for 8 weeks of full-time training. During the training program, the unit receives basic training for all untrained recruits, advanced training for recruits with previous training, and management and leadership training for officers. 
An embedded police mentor team accompanies the unit when it returns to its home district. According to CSTC-A officials, a standard police mentor team includes two civilian police mentors, four military support personnel, and six military security personnel (see fig. 7). While State provides the civilian police mentors, CSTC-A is responsible for providing the 10 military support and security personnel. According to CSTC-A, the police mentor team provides the unit with continued on-the-job training following its return to its home district and assesses the unit's progress toward becoming capable of independently performing basic law and order operations. The FDD program has shown positive initial results, according to Defense. In February 2009, Defense assessed 19 percent of the units retrained through the FDD program as capable of conducting primary operational missions, 25 percent as capable of conducting primary operational missions with international support, 31 percent as capable of partially conducting primary operational missions with international support, and 25 percent as not yet capable of conducting primary operational missions. In contrast, in April 2008 all of the districts enrolled in FDD were only partially capable of independent action. Police mentor teams are required to send monthly capability assessment forms to CSTC-A as part of CSTC-A's effort to monitor and assess the FDD program. The assessments rank the units on a variety of competencies, including personnel actions and pay reform, equipment accountability, maintenance, formal training, crime-handling procedures, and use of force. Mentor teams also address disciplinary issues and observe units for signs of drug use. According to Defense, in 2007, 29 FDD participants were identified as drug users, removed from the program, and released from the police force. CSTC-A currently lacks the military support and security personnel resources to expand FDD into the rest of Afghanistan. Senior CSTC-A personnel informed us that Defense has not provided CSTC-A with dedicated personnel designated to serve as police mentors. As a result, CSTC-A redirected to the police program personnel that would have been used to mentor Afghan National Army units. CSTC-A staff informed us that they redirected the personnel because the police training program used prior to FDD was not succeeding at a time when the Afghan army training program was making progress. CSTC-A intends to retrain the uniformed police in all districts in Afghanistan using FDD and other similar district-level reform programs. To do so, CSTC-A estimates it would need a total of 399 police mentor teams—365 district teams and 34 provincial-level teams. CSTC-A informed us that its preference is to complete FDD using a 3-year planning model that would have 250 police mentor teams fielded by the end of December 2009 and the remaining 149 teams fielded in districts by October 2010. This schedule, however, would not allow Defense to complete FDD training and mentoring in time to meet the Afghanistan Compact's goal of achieving a fully functional and professional Afghan National Police by the end of 2010. Defense has reported that it would need about 1,500 additional military personnel to expand FDD and similar police development programs. The FDD program is likely to face increasing competition for these personnel from CSTC-A's program to fully train the Afghan National Army. In the past, FDD and other ANP training programs have relied on U.S. 
military personnel that had been intended for use for Afghan army training programs. However, the demand for personnel for use in Afghan army training programs is likely to increase because Afghanistan, the United States, and other international partners have agreed to increase the Afghan army from 80,000 to 134,000 personnel. In November 2008, CSTC-A officials stated they may propose that Defense use U.S. combat units, provincial reconstruction teams, and international forces to help address the shortage of personnel. The officials later informed us that six FDD police mentor teams had been staffed using personnel provided by international forces. However, according to Defense officials in headquarters, Defense has not altered its guidance to CSTC-A for staffing the FDD program. MOI and ANP officers were screened by Defense and State as part of a rank reform program intended to promote institutional and organizational reform, but State did not systematically compile records of background checks conducted as part of the screening effort. The screening effort included testing by CSTC-A of MOI and ANP personnel on police practices. At least 55 percent of the almost 17,800 officers tested passed, according to data provided by CSTC-A. The screening effort also included background checks based on information from State and UNAMA. However, U.S. officials were unable to provide us with detailed information concerning the number of individuals whose backgrounds had been checked and the results of those checks. ANP recruits are endorsed by local elders and officials and, according to CSTC-A, screened by MOI. Members of certain small elite units receive additional screening by U.S. agencies or high-ranking MOI officials. Efforts to screen MOI and ANP personnel are intended to promote institutional and organizational reform. The goals of U.S.-supported screening efforts are to help ensure that (1) MOI is run by a professional and adequately trained staff that can manage and sustain a national police force and (2) ANP units, under MOI control, have the capacity to maintain domestic order while remaining responsive to the needs of the local population. The U.S. Embassy in Kabul reported in 2007 that the effectiveness of the police had been seriously impeded by “corrupt and/or incompetent” MOI and ANP leadership. The State and Defense inspectors general reported in 2006 that then-current ANP screening efforts were ineffective and that verifying the suitability of police candidates in Afghanistan is difficult because of (1) the strength of Afghan ethnic and tribal ties and (2) the lack of reliable personnel and criminal records in Afghanistan. According to CSTC-A, nearly 17,800 MOI and ANP officers took tests on human rights and policing values that were required for consideration in the reformed MOI and ANP officer corps. At least 9,797 (55 percent) of these officers passed. Higher-ranking officers below the rank of general passed the tests at higher rates than lower-ranking officers (see table 3). MOI and ANP officers were also subject to background checks as part of the rank reform process. The background checks were based on information from State and UNAMA. State officials informed us that the Department of State screened officers for rank reform by using its procedures for vetting foreign security personnel in connection with U.S. law. In doing so, it made use of background checks conducted at the State Department in Washington, D.C. State officials in Washington said that the U.S. 
Embassy in Afghanistan provided them with lists of names and associated biographical information. The officials then used the information to search both a governmentwide database containing sensitive information and various nongovernment databases. UNAMA officials informed us that background checks concerning more than 18,000 names were based in part on information collected locally by UNAMA. According to a UNAMA official, UNAMA found no detailed information for "more than 10,000" names and varying degrees of information about the remaining names. State has not systematically compiled records of the background checks. A U.S. Embassy official informed us that the embassy did not maintain a database of the officers that had been checked. State officials in Washington, D.C., informed us that they had retained copies of the embassy's requests and their responses but had not systematically compiled the information contained in them. Because they had not systematically compiled their records of the background checks, State officials could not provide us with the number of officers whose backgrounds they had checked or with detailed information concerning the results of the background checks. A U.S. Embassy official provided us with a partial list of embassy screening requests. The list indicates that the embassy had asked State to check the backgrounds of 2,514 unidentified individuals in late 2007. State officials in Washington, D.C., told us they may have screened as many as 4,000 names during the rank reform program. One State official in Washington, D.C., estimated he found derogatory information about fewer than two dozen individuals. The officials in Washington, D.C., said their screening efforts were hampered by the frequent lack of adequate data about an individual's identity and date of birth. (Many Afghans use a single name, according to U.S. officials, and birth records are often lacking.) The U.S. Embassy provided us with documents indicating that UNAMA found negative information—including assertions of involvement in drug trafficking, corruption, and assaults—on 939 (38 percent) of 2,464 officers it reviewed during late 2007. A UNAMA official informed us that UNAMA had raised concerns about human rights abuses, ties to insurgent groups, corruption, and involvement in drug trafficking in "several hundred" cases. He stated MOI may have selected some officers despite negative UNAMA information because of factional influence, patronage, or possible corruption. ANP enlisted recruits are endorsed in groups by village elders or local government officials and vetted by local police chiefs. According to CSTC-A, the recruits are also screened by MOI's medical, intelligence, and criminal investigative departments, under MOI procedures established in 2004 and in "full implementation" as of December 2008. Recruits in certain elite units receive additional screening, according to U.S. officials. These units' authorized personnel levels constitute about 7 percent of all authorized MOI and ANP personnel. The 56 members of the Afghan Counter Narcotics Police's Special Investigative Unit (SIU) are given periodic polygraph exams, tested for drugs, and screened for human rights violations and drug-related offenses, according to U.S. Drug Enforcement Administration (DEA) officials. DEA officials stated that in 2008 DEA repolygraphed 21 SIU members and eliminated 7 based on the results. DEA noted this one-third failure rate is greater than that of SIUs in other countries. 
The 185 members of the Afghan Counter Narcotics Police’s National Interdiction Unit are initially tested for drug use and screened for human rights violations and drug-related offenses, according to DEA officials. The commanding general of the Afghan National Civil Order Police informed us that he personally interviews all applicants for his force. In our meeting with him in Kabul, the general stated he had dismissed 120 recruits MOI had sent to him due to allegations of drug use and other abuses. U.S.-supported pay system efforts are intended to (1) validate the status of reported MOI and ANP personnel rosters and (2) help ensure that MOI and ANP wages are distributed reliably and fairly. Despite some progress, these efforts face challenges that include limited ANP cooperation and a shortage of commercial banks. Although U.S. contractor personnel have validated the status of almost 47,400 current MOI and ANP personnel, they have been unable to validate the status of almost 29,400 additional personnel—paid in part by U.S. contributions to LOTFA—because of a lack of cooperation from certain ANP commanders. As of January 2009, about 97 percent of reported MOI and ANP personnel had enrolled in a new U.S.-supported electronic payroll system, and 58 percent had enrolled in a new electronic funds transfer system to have salaries deposited directly into their bank accounts. However, nearly 40 percent of personnel may have difficulties using this system because they are not located within 15 miles of a commercial bank. U.S.-supported pay system reform efforts are intended to promote individual reform. Unverified personnel lists and weak pay distribution systems are closely linked to corruption in the ANP, according to U.S. agencies. Corrupt pay practices jeopardize U.S. funds provided by State and Defense to LOTFA in support of MOI and ANP wages. The United States has contributed $230 million to LOTFA as of November 2008, which constitutes more than one-third of the $653 million received by LOTFA. The number of actual MOI and ANP personnel is unclear. While LOTFA data indicate that 78,541 personnel were on MOI and ANP payrolls as of January 12, 2009, CSTC-A informed us that MOI does not have an accurate personnel manning roster or tracking system. The inspectors general of State and Defense stated in 2006 that reports of the number of police were inflated and that ANP salaries were being delivered to police stations based on the number of police listed on the rolls. Further, the U.S. Embassy in Kabul reported in 2007 that police chiefs had inflated personnel rosters by creating “ghost policemen”—allowing chiefs to obtain illegal payments. In 2008, we reported that a 2007 Defense census of ANP in several provinces could not confirm the existence of about 20 percent of uniformed police and more than 10 percent of border police listed on MOI’s payroll records. Weak pay distribution systems have also fostered corruption. The U.S. Embassy reported in 2007 that MOI’s use of “trusted agents” to deliver payrolls allowed district chiefs and other officials to take cuts from patrolmen’s pay. The embassy also noted that problems remain in regularly and routinely providing pay to outlying districts and closing off opportunities for corruption. In 2006, the State and Defense inspectors general concluded that MOI’s “completely broken” pay disbursement system was one cause of the systematic corruption associated with the police. 
They also found that senior police officials routinely skimmed the salaries of junior police. More examples of problems with ANP pay distribution processes can be found in the weekly reports of U.S. civilian police mentors. During a 2-month period in 2008, the mentors reported a variety of financial irregularities and fraud, including the following: Police in several districts reported that they had not been paid. Some individuals continued to receive officers' wages after having been demoted to noncommissioned officers. A district commander had lied about the number of ANP personnel in his district to obtain additional funds. He then used some of these funds to hire civilian friends to "help out" at the station. A finance officer reported concerns that district chiefs were forcing their men to pay the chiefs part of their wages. An ANP acting provincial financial chief reported that several district police chiefs had threatened to kill him if he continued to work with the international community on pay matters. Another ANP provincial financial chief was removed for allegedly conspiring to embezzle funds intended for the families of ANP personnel who had been killed. State and MOI have attempted to validate the status of more than 103,000 applicants for police identification cards by positively identifying all police, building a computerized police database, and issuing identification cards for use in paying police salaries. The identification card program began in 2003. State contractor personnel informed us that the validation process is being executed by joint contractor-MOI validation teams that were created because ANP regional zone commanders did not respond to requests to validate the status of applicants in their zones. State informed us in November 2008 that nearly 47,400 MOI and ANP personnel had received identification cards after the validation teams confirmed these applicants had not retired, been killed, or otherwise left the MOI or ANP (see figure 9). MOI and State contractor validation teams also determined that another 26,700 applicants had retired, been killed, or had otherwise left the MOI or ANP, including about 14,200 who had received identification cards before they retired, were killed, or otherwise left the ANP. State informed us that the validation process had been completed in two regional zones in early October 2008. However, according to State, these teams have been unable to validate the extent to which another 29,372 applicants—about 37 percent of the total reported MOI and ANP workforce of 78,541—are active and eligible to receive identification cards. State informed us that three ANP zone commanders are not cooperating with efforts to validate the status of these applicants and that plans to complete the validation process have been put on hold until MOI persuades the commanders to cooperate. According to CSTC-A and State contractor personnel, the identification cards will eventually be used to identify MOI and ANP personnel for pay purposes. We were informed by contractor personnel that each card has a bar code with specific information concerning each individual's salary group, name, and service number. The card also contains a fingerprint and a digital photograph that can be scanned into a facial recognition program (see fig. 10). Data collected from individuals are processed by MOI personnel and stored in servers located at MOI headquarters (see figs. 11 and 12). 
According to State contractor personnel, the cards use a variety of optical features to discourage counterfeiters. According to Defense and State, the goal of the new electronic payroll and funds transfer systems is to reduce corruption in pay distribution by establishing fair and reliable pay processes. LOTFA and CSTC-A officials stated the electronic payroll system is intended to replace slow, paper-based processes with an automated system that creates a monthly payroll for police and allows MOI to track individual payments. LOTFA has sponsored training programs to familiarize MOI personnel with the new payroll system (see fig. 13). As shown in figure 14, LOTFA data indicate that 97 percent (76,343) of 78,541 reported MOI and ANP personnel were enrolled in the electronic payroll system as of January 2009. The electronic funds transfer system is intended to help reduce MOI's use of corruption-prone salary distribution methods by depositing wages directly into the bank accounts of individual MOI and ANP personnel. As of May 2008, LOTFA's stated goal was to enroll 80 percent of MOI and ANP personnel by September 2008. However, as of January 2009, only 58 percent (about 45,200) of 78,541 reported MOI and ANP personnel were enrolled in the system, according to LOTFA (see fig. 15). CSTC-A and LOTFA attributed the lack of greater enrollment in the electronic funds transfer system to the absence of a nationwide Afghan banking system. According to CSTC-A and LOTFA data, only about 61 percent (47,900) of reported MOI and ANP personnel live and work within 25 kilometers (about 15 miles) of a commercial bank. In November 2008, CSTC-A informed us that the expansion of the electronic funds transfer program was being limited primarily by the impact of security concerns on efforts to open new banks, as well as by the slow installation of automated teller machines, a lack of reliable power at remote locations, and ANP officials who have not yet embraced the program. CSTC-A officials are exploring the possibility of using cell-phone companies in lieu of commercial banks to provide direct access to wages. While Defense and State have worked with Afghanistan and other international partners to initiate and support reform programs that have the potential to help resolve some of the most significant challenges facing the development of a fully professional MOI and ANP, the agencies have not overcome persistent obstacles that will affect the success of the programs. These obstacles include a lack of dedicated personnel for use in creating new mentor teams to support focused development of police districts. Without dedicated personnel resources, the FDD program's ability to achieve its goals is in jeopardy because it must compete with an expanding Afghan National Army training program. In addition, the Departments of Defense and State have not overcome the resistance of ANP regional commanders who are not cooperating with efforts to validate almost 29,400 names registered to receive ANP identification cards. The United States, Afghanistan, and the international community need a validated database of ANP personnel to help ensure that contributions to LOTFA to pay the wages of Afghan police are not being used to pay nonexistent or inactive personnel. 
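The validation and enrollment figures cited above come from several sources (State contractor personnel, LOTFA, and CSTC-A) and are reported as a mix of counts and percentages. The short calculation below simply recomputes them in one place; it is an illustrative sketch in Python, the input values are the rounded figures reported in this section, and any small differences from the published percentages reflect rounding in the source data.

```python
# Recomputing the MOI/ANP identification card and pay-system figures cited above.
# All inputs are the rounded values reported by State, CSTC-A, and LOTFA in this
# section; outputs may differ from the published percentages only by rounding.

reported_workforce = 78_541      # personnel on MOI and ANP payrolls, January 2009 (LOTFA)

# Identification card validation (joint State contractor-MOI teams)
validated_active   = 47_400      # confirmed active and issued cards
validated_departed = 26_700      # found to have retired, been killed, or otherwise left
not_yet_validated  = 29_372      # applicants in zones where commanders have not cooperated

total_applicants = validated_active + validated_departed + not_yet_validated
print(f"Card applicants accounted for: {total_applicants:,}")                              # ~103,500
print(f"Workforce share not yet validated: {not_yet_validated / reported_workforce:.0%}")  # ~37%

# Electronic payroll and electronic funds transfer enrollment (LOTFA, January 2009)
payroll_enrolled = 76_343
eft_enrolled     = 45_200
near_a_bank      = 47_900        # within about 25 kilometers (15 miles) of a commercial bank

print(f"Electronic payroll enrollment: {payroll_enrolled / reported_workforce:.0%}")       # ~97%
print(f"Direct deposit enrollment: {eft_enrolled / reported_workforce:.0%}")               # ~58%
print(f"Personnel with nearby bank access: {near_a_bank / reported_workforce:.0%}")        # ~61%
print(f"Personnel without nearby bank access: {1 - near_a_bank / reported_workforce:.0%}") # ~39%
```

The last two lines restate the constraint noted above: direct deposit enrollment (about 58 percent) is already close to the share of personnel with ready access to a commercial bank (about 61 percent).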
To help ensure that the FDD program can achieve its goals, we recommend that the Secretaries of Defense and State undertake a coordinated effort to provide dedicated personnel to support the creation of additional police mentor teams needed to expand and complete the FDD program. To help ensure that the United States does not fund the salaries of unverified ANP personnel, we recommend that the Secretaries of Defense and State consider conditioning future U.S. contributions to LOTFA to reflect the extent to which U.S. agencies have validated the status of MOI and ANP personnel. The Departments of State, Defense, and Justice provided written comments on a draft of this report (see apps. V, VI, and VII). In addition, Defense provided technical suggestions, which we have incorporated as appropriate. Defense concurred with our recommendation that Defense and State identify and provide dedicated personnel to support the creation of additional police mentor teams needed to expand and complete the Focused District Development program. Defense stated it is considering possible solutions to the shortfall of police mentor teams. The agency added that it plans to deploy about 17,000 additional forces to Afghanistan and to use some of these forces on police mentoring missions. State noted our recommendation and informed us that it is prepared to recruit additional civilian police mentors for new police mentor teams. State concurred with our recommendation that State and Defense consider conditioning future U.S. contributions to LOTFA to reflect the extent to which U.S. agencies have validated the status of MOI and ANP personnel. State added that U.S. contributions to LOTFA should reflect the extent to which MOI and ANP personnel have been validated. Defense did not concur with this recommendation. It asserted that the recommendation would unduly penalize MOI by significantly reducing police pay and that CSTC-A is working with MOI to identify and validate all personnel on the payroll. We disagree with Defense's comment on our recommendation. Given that the ANP identification card program has been under way for more than 5 years, we believe it is not too soon for Defense to work with State to consider whether to link future U.S. contributions to LOTFA to the number of verified ANP personnel. Our recommendation, if implemented, could help encourage uncooperative ANP commanders to cooperate with U.S.-backed verification efforts and help ensure that only legitimate ANP personnel receive wages subsidized by the United States. We are sending copies of this report to interested congressional committees and to the Departments of Defense, State, and Justice. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix VIII. This report assesses U.S. government efforts to help the government of Afghanistan (1) restructure the Ministry of Interior (MOI) and the Afghan National Police (ANP), (2) retrain selected ANP units, (3) screen MOI and ANP personnel, and (4) enhance MOI and ANP identification and pay systems. To assess the status of U.S. 
efforts to restructure the MOI and ANP, we reviewed the Department of Defense's (Defense) Afghan National Campaign Plan, a draft joint mentor coordination plan prepared by Defense's Combined Security Transition Command-Afghanistan (CSTC-A) and the European Union Police Mission to Afghanistan, and CSTC-A's MOI Development Plan. We also reviewed briefings from CSTC-A on the MOI restructuring program and the mentoring program. In addition, we reviewed Department of State (State) documents, including situation reports from State contractors in Afghanistan. We supplemented this information by meeting with officials from the Joint Chiefs of Staff (JCS), the Office of the Secretary of Defense (OSD), and State's bureaus of International Narcotics and Law Enforcement Affairs and South Central Asian Affairs. In Kabul, we met with CSTC-A officials tasked with MOI reform, including officials who were mentoring MOI personnel, as well as MOI officials. We also observed a weekly MOI restructuring task team group that was attended by representatives of CSTC-A and the international community. To assess the status of U.S. efforts to retrain selected ANP units, we reviewed numerous monthly capability assessments for the district police units included in the Focused District Development (FDD) program's first round. We also reviewed weekly situation reports submitted over several months by State-contracted civilian police mentors in Afghanistan. In addition, we reviewed numerous CSTC-A, OSD, and State briefings that outlined the program's goals, objectives, implementation plans, and overall status. In addition, we met with agency officials to discuss the progress made and the challenges faced by the FDD program. In Washington, D.C., we met with JCS, OSD, and State officials. We also spoke with officials at the United States Central Command (CENTCOM) in Tampa, Florida. In Kabul, we met with officials from CSTC-A tasked with implementing the FDD program, and visited the CSTC-A Central Training Facility near Kabul and the Jalalabad FDD regional training center. In the Chapahar district, we visited an ANP operating base to see a police unit that had been reinserted into its district after FDD training. To assess the status of U.S. efforts to screen MOI and ANP personnel, we reviewed documents and briefings obtained from State, the U.S. Embassy in Afghanistan, OSD, CSTC-A, and the Drug Enforcement Administration (DEA). In addition, we met with U.S. and other officials to discuss the screening processes for MOI and ANP personnel. In Washington, D.C., we met with officials from Defense, State, and DEA. We also spoke with CENTCOM officials located in Tampa, Florida. In Kabul, we met with State officials at the U.S. Embassy. We also met with DEA officials to discuss screening issues pertaining to the Counter Narcotics Police of Afghanistan. In addition, we spoke with officials at the United Nations Assistance Mission to Afghanistan. To assess the status of U.S. efforts to enhance MOI and ANP identification and pay systems, we reviewed data and documents from the United Nations Law and Order Trust Fund for Afghanistan (LOTFA), CSTC-A, and State-contracted mentors in Afghanistan. We also met with State contractor, CSTC-A, LOTFA, and U.S. Embassy personnel in Kabul. 
To determine the reliability of the data we collected concerning the identification card and electronic pay systems programs, we compared data collected from multiple sources to assess their consistency and obtained written descriptions from LOTFA and State contractor personnel concerning the processes they used to compile and check the data. We concluded that the data were sufficiently reliable for the purposes of our review. Any information on foreign law in this report is not a product of original analysis but was instead derived from interviews and secondary sources. An appendix table summarizes the responsibilities of the ANP's components: enforcing the rule of law; serving in police districts and provincial and regional commands, with duties that include patrols, crime prevention, traffic duties, and general policing; providing broad law enforcement capability at international borders and entry points; countering civil unrest and lawlessness as a specialized police force; leading investigations of national interest, those with international links, and those concerned with organized and white-collar crime; reducing narcotics production and distribution in Afghanistan; and leading police and law enforcement efforts to defeat terrorism and insurgency. The following is GAO's comment on the Department of State's letter dated March 3, 2009. 1. As noted in our report, we use the term "screening" to include both the testing and background checks that were undertaken to accomplish the goals of the rank reform effort. In addition to the contact named above, Hynek Kalkus (Assistant Director), Pierre Toureille, Christopher Banks, Lucia DeMaio, Mattias Fenton, Cindy Gilbert, Mark Dowling, Lynn Cothern, and Jena Sinkfield made key contributions to this report.
The United States has invested more than $6.2 billion in the Afghan Ministry of Interior (MOI) and Afghan National Police (ANP). The Department of Defense's (Defense) Combined Security Transition Command-Afghanistan (CSTC-A), with the Department of State (State), leads U.S. efforts to enhance MOI and ANP organizational structures, leadership abilities, and pay systems. This report assesses the status of U.S. efforts to help Afghanistan (1) restructure MOI and ANP, (2) retrain ANP units, (3) screen MOI and ANP personnel, and (4) enhance MOI and ANP pay systems. GAO reviewed Defense, State, and United Nations (UN) data and met with officials in the United States and Afghanistan. U.S. agencies and Afghanistan have achieved their goals of restructuring and reducing a top-heavy and oversized MOI and ANP officer corps, modifying police wages, and planning a reorganization of MOI headquarters. These efforts are intended to help ensure that the MOI and ANP are directed by professional staff that can manage a national police force. U.S. agencies and MOI cut the officer corps from about 17,800 to about 9,000, reduced the percentage of high-ranking officers, and increased pay for all ranks. MOI is scheduled to implement a U.S.-supported headquarters reorganization. CSTC-A has begun retraining ANP units through its Focused District Development (FDD) program, which is intended to address district-level corruption that impeded previous efforts to retrain individual police. FDD is achieving promising results, according to Defense status reports. In February 2009, Defense assessed 19 percent of FDD-retrained units as capable of conducting missions, 25 percent as capable of doing so with outside support, 31 percent as capable of partially doing so with outside support, and 25 percent as not capable. However, a lack of military personnel is constraining CSTC-A's plans to expand FDD and similar programs into the rest of Afghanistan by the end of 2010. Defense has identified a shortage of about 1,500 military personnel needed to expand FDD and similar police development programs. CSTC-A has previously obtained military personnel for ANP training by redirecting personnel from its Afghan army training program. However, the army program's demand for personnel is likely to increase as the Afghan army grows from 80,000 to 134,000 personnel. MOI and ANP officers were screened by Defense and State, but the full extent of the screening is unclear because State did not systematically compile records of its efforts. The screening effort was intended to improve the professionalism and integrity of the officer corps through testing by CSTC-A and background checks by State. At least 9,797 (55 percent) of the nearly 17,800 officers who took the tests passed, according to CSTC-A. State was unable to provide us with statistics concerning the results of background checks because it did not systematically compile its records. U.S.-supported pay system efforts are intended to validate MOI and ANP personnel rosters and ensure that wages are distributed reliably. Despite progress, these efforts face challenges that include limited ANP cooperation and a shortage of banks. U.S. contractors have validated almost 47,400 MOI and ANP personnel but have been unable to validate almost 29,400 personnel--who were paid in part by $230 million in U.S. contributions to a UN trust fund--because of a lack of cooperation from some ANP commanders. 
As of January 2009, 97 percent of all reported MOI and ANP personnel had enrolled in an electronic payroll system and 58 percent had enrolled to have their salaries deposited directly into their bank accounts. However, growth of the direct deposit system may be constrained because almost 40 percent of ANP personnel lack ready access to banks.
To date, over 800,000 units in approximately 8,500 multifamily housing projects have been financed with mortgages insured by FHA and supported by project-based Section 8 housing assistance payments contracts. Many of these contracts set rents at amounts higher than those of the local market. As these housing subsidy contracts expire, Congress has mandated that the rents on these privately owned multifamily properties be lowered to market levels. For those properties identified by HUD as having above-market rents, Congress created the mark-to-market program in 1997 to reduce rents to market levels and restructure existing mortgage debt to levels supportable by these rents. The goals of the mark-to-market program include preserving the affordability and the availability of low-income rental housing, while reducing the long-term costs of Section 8 project-based assistance. The restructuring generally involves resetting rents to market levels and reducing mortgage debt, if necessary, to permit a positive cash flow. To facilitate the restructurings, Congress provided OMHAR with certain tools, such as the ability to reduce an owner’s mortgage payments by creating a new first mortgage and, where necessary, deferring some of the debt to a second mortgage, which is only required to be repaid if sufficient cash flow is available. The mark-to-market process begins when a property’s existing Section 8 project contract is nearing expiration and the owner decides to remain in the program. Before a new Section 8 contract is awarded, these property owners are required to submit to HUD a market study that contains information on market rents for comparable properties located within the subject property’s geographic area. Local HUD field offices review these market studies and, where studies show a property owner’s rents are not above market, have the option and authority to award the owner with a new Section 8 contract. HUD field offices forward cases to OMHAR when a market study submitted by an owner shows their rents are above market. OMHAR, in turn, provides these cases to contractors, known as participating administrative entities (PAE), who also conduct market studies, carry out the analysis necessary for restructurings, and prepare restructuring plans and documentation. Under the mark-to-market program, properties whose rents are above market levels undergo one of two types of restructuring. Mortgage restructurings generally involve resetting rents to market levels and reducing mortgage debt to permit an acceptable, positive cash flow. For this type of restructuring, the PAE develops restructuring plans based on a reduction in rents and mortgage debt and submits the plans to OMHAR for review and approval. Before the restructuring plans can be implemented, owners are required to enter into a new 20-year Section 8 contract and to sign an affordability and use agreement promising to maintain restrictions aimed at preserving the designated units as affordable housing for at least 30 years—10 years beyond the Section 8 contract period. Property owners must agree to contribute 20 percent of the total cost of rehabilitation needs of the property. The remaining rehabilitation costs are included in the second mortgage that is created during the restructuring process. Rent restructuring also involves the PAEs developing restructuring plans that must be approved by OMHAR. However, these plans are based only on a reduction in the rents, not the mortgage debt. 
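Since the owner-contribution requirement described above for mortgage restructurings is a fixed 20 percent split of rehabilitation costs, a short worked example may help before turning to rent restructurings. The following minimal Python sketch applies that split to a hypothetical rehabilitation cost; the function name, constant, and dollar figure are illustrative assumptions, not part of HUD's or OMHAR's actual systems.

```python
# Illustrative sketch (not HUD or OMHAR software) of the mark-to-market
# owner-contribution rule described above: in a mortgage restructuring, the
# owner contributes 20 percent of the property's total rehabilitation cost,
# and the remainder is rolled into the second mortgage created during
# restructuring. The dollar figure below is hypothetical.

OWNER_SHARE = 0.20  # owner must contribute 20 percent of rehabilitation costs


def split_rehab_cost(total_rehab_cost: float) -> dict:
    """Split a rehabilitation cost into the owner's contribution and the
    portion added to the restructured second mortgage."""
    owner_contribution = OWNER_SHARE * total_rehab_cost
    second_mortgage_addition = total_rehab_cost - owner_contribution
    return {
        "owner_contribution": round(owner_contribution, 2),
        "second_mortgage_addition": round(second_mortgage_addition, 2),
    }


if __name__ == "__main__":
    # A hypothetical property needing $100,000 of rehabilitation.
    print(split_rehab_cost(100_000))
    # {'owner_contribution': 20000.0, 'second_mortgage_addition': 80000.0}
```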
Rent restructurings are only permitted for properties that can demonstrate the ability to have acceptable, positive cash flow with a rent reduction but without a mortgage restructuring. There are no affordability and use restrictions on properties that receive rent restructuring, and the Section 8 contracts are usually renewed for 5 years. Approximately 211 properties have not completed the mortgage restructuring process, even though OMHAR determined that, with their rents reduced to market level, such a restructuring is necessary for the properties to have acceptable, positive cash flows. OMHAR places properties that it believes should have had a mortgage restructuring on the watch list because it believes such properties are at risk of developing physical and financial problems stemming from insufficient cash flow. These property owners receive a 1-year renewal watch-list contract. After OMHAR places these properties on its watch list, it becomes the responsibility of HUD field offices to monitor them as part of their asset management duties. Guidance issued by HUD in September 2001 requires HUD field offices to monitor watch-list properties for signs of physical, financial, and management deterioration. Based on the guidance, field office staff should review available data on the properties' physical and financial condition and conduct periodic management reviews and site visits for properties showing signs of impending default. If a field office observes a decline in a property's physical or financial condition, it can refer the property to HUD's Departmental Enforcement Center (DEC) for analysis and a potential corrective action plan. In cases where owners fail to comply, DEC can resort to enforcement actions, such as issuing civil money penalties, taking debarment and suspension actions, and recommending foreclosure. HUD's Real Estate Assessment Center (REAC) conducts physical inspections of all HUD multifamily properties, including watch-list properties. One of the key monitoring responsibilities of HUD project managers is to monitor the results of these physical inspections, as described in HUD's Guidance for Oversight of Multifamily Physical Inspections. HUD's monitoring guidelines direct project managers to pay special attention to properties receiving a substandard or severe physical inspection score of 59 or below, including following up with the property owners to ensure that all exigent deficiencies (health and safety issues) are corrected within 3 business days. Each year HUD requires property owners to submit audited financial statements for all multifamily housing properties it insures and/or subsidizes. Using its Financial Assessment Subsystem (FASS), HUD develops a score that indicates the level of financial health associated with such properties. This financial score represents a single aggregate financial measure that synthesizes data from five different financial ratios. For example, the debt service coverage ratio compares a property's net operating income to its annual debt service (mortgage payments) and demonstrates whether the property has sufficient cash flow to meet its debt service obligations. If a property's income is equal to its debt service, the debt service coverage ratio is 1.0. Generally, HUD expects a property's income to be at least 120 percent of its debt service, or a debt service coverage ratio of 1.2 or higher. OMHAR places properties on the watch list when a property's rents are reduced to market level but its mortgage is not restructured.
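Because the debt service coverage ratio discussed above is central to both the FASS score and the watch-list analysis later in this report, a small worked example may be useful. The Python sketch below simply divides net operating income by annual debt service and compares the result with the thresholds the report cites (1.2 generally expected, 1.0 break-even); the property figures are hypothetical, and the code is an illustration, not a representation of FASS itself.

```python
# Minimal sketch of the debt service coverage ratio (DSCR) described above:
# net operating income divided by annual debt service (mortgage payments).
# A ratio below 1.0 means income does not cover debt service; HUD generally
# expects 1.2 or higher. The figures used here are hypothetical.


def debt_service_coverage_ratio(net_operating_income: float,
                                annual_debt_service: float) -> float:
    """Return net operating income divided by annual debt service."""
    return net_operating_income / annual_debt_service


def assess_dscr(ratio: float) -> str:
    """Describe a DSCR using the thresholds cited in the report."""
    if ratio >= 1.2:
        return "meets HUD's general expectation (1.2 or higher)"
    if ratio >= 1.0:
        return "covers debt service but falls short of 1.2"
    return "insufficient cash flow to meet debt service (below 1.0)"


if __name__ == "__main__":
    ratio = debt_service_coverage_ratio(net_operating_income=120_000,
                                        annual_debt_service=100_000)
    print(round(ratio, 2), "-", assess_dscr(ratio))
    # 1.2 - meets HUD's general expectation (1.2 or higher)
```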
As of April 15, 2002, OMHAR had placed 211 properties on the watch list for one of three reasons. OMHAR assigned the majority of these properties to the watch list because the property owners elected not to enter into or complete the mortgage restructuring process, even though OMHAR had determined that the mortgage needed to be restructured. In addition, OMHAR placed some properties on the watch list because it decided that restructuring the mortgage was not financially feasible under OMHAR’s guidelines. Finally, OMHAR placed a few properties on the watch list because the owner’s actions, such as financial or managerial improprieties, resulted in the owner’s ineligibility for a mortgage restructuring. Figure 1 below shows the percentage and number of properties placed on the watch list by reason. According to HUD data, most owners on the watch list refused to enter into or complete the restructuring process. OMHAR considers these owners “uncooperative.” According to OMHAR, uncooperative owners include those who (1) fail at any point during the process to supply information needed to complete the restructuring process, (2) fail to respond in a timely manner to the PAE’s proposed restructuring plan, (3) fail to address critical repair needs in a timely manner, and (4) fail to close on a viable transaction. OMHAR officials told us that property owners did not restructure their mortgages for several different reasons. OMHAR and PAEs, who work closely with owners while developing restructuring plans, agreed that the most common reasons owners refuse to enter into or complete the restructuring process are (1) the required out-of-pocket funds for rehabilitation and repairs and (2) the owners’ perception that properties could operate sufficiently at the reduced rents. Under mark-to-market program regulations, each owner who is participating in a mortgage restructuring is required to contribute 20 percent of the total cost of rehabilitation of the property. For example, the owner of one of the properties we visited for a case study did not complete the restructuring because he refused to pay approximately $26,000 for contributions toward rehabilitation and escrow costs. OMHAR determined that with restructuring, the property would have an acceptable cash flow. However, because of the owner’s refusal to complete the restructuring process, OMHAR believes the property does not have sufficient income to cover its debt and operating expenses. See appendix V for more information on this case study property. Some owners chose not to have their mortgages restructured because, despite OMHAR’s determination that a restructuring was necessary to provide for an acceptable cash flow, the owners felt that they could successfully operate the property at reduced cash flow. According to one housing industry representative, some owners disagree with OMHAR’s conclusion that their property’s mortgage needed to be restructured. The housing industry official stated that she believes that some owners have valid arguments against OMHAR’s conclusion and that their properties could operate successfully at reduced cash flow. According to OMHAR, there are some properties on the watch list that may operate successfully at reduced rents until they have a major capital repair need that will affect their cash flow because they do not have sufficient reserves to cover the repair. 
Another reason some property owners have not restructured their mortgages is their reluctance to enter into a 30-year affordability and use agreement, as required in the act. This agreement requires that a certain percentage of units must be leased to families whose incomes do not exceed a certain percentage of the area median income. According to a PAE, some owners are concerned about entering into a 30-year affordability and use agreement when the contract HUD has established for a mortgage restructuring is only for a 20-year period, which leaves 10 years when the owner will have to provide affordable housing without the guarantee of a Section 8 contract. Also, an industry official representing owners noted that some owners are also concerned about agreeing to a 30-year affordability and use agreement when their property is already 20 to 30 years old and they are uncertain about the viability of their property in 30 years. According to HUD's database, most properties on the watch list are at least 20 years old. According to OMHAR, some property owners have not restructured their mortgages because they are planning to opt out of the project-based Section 8 program. When an owner chooses to opt out of the project-based Section 8 program, eligible tenants are offered assistance in the form of tenant-based vouchers, which they may use at the same property. According to OMHAR, some owners may be able to obtain higher subsidies through tenant-based assistance because the market rents established for tenant-based assistance may be higher than the market rents established for the property through the mark-to-market process. Second, OMHAR places properties on the watch list if it determines that because of economic conditions in the market and/or a property's financial and/or physical condition, restructuring the property is not financially viable. According to OMHAR, the majority of the 31 watch-list properties that were determined to be financially nonviable were nonviable due to a combination of the economic conditions in the market and the property's financial and/or physical condition. Two of the properties we visited for case studies were declared by OMHAR to be financially nonviable for restructuring. For example, one property we visited in Washington, D.C., required over $4 million in rehabilitation and had an outstanding mortgage balance of about $1.3 million. OMHAR determined that, given the market rents in the area, the property could not generate enough income to finance the mortgage and rehabilitation costs. The other property we visited that OMHAR declared financially nonviable was located in Rhode Island. OMHAR determined that restructuring was not viable for this property because it had an unpaid mortgage balance of $421,280 but was appraised at only $217,000 and required $200,000 in rehabilitation costs. See appendixes II and III for more information on these properties. Third, OMHAR places properties on the watch list because of an owner's actions. Under the act, owners who engage in financial or managerial improprieties may be declared ineligible for mortgage restructuring. Properties can be removed from the watch list for several reasons. Thus far, 70 properties that were at one time on the watch list have been removed. Under watch-list monitoring guidelines, properties can remain on the watch list for 3 or more years. According to OMHAR, a property can be removed if, after 3 years on the watch list, it has maintained its physical and financial condition.
No properties have been removed from the watch list for this reason. In addition, properties can be removed from the watch list if the owners prepay their FHA-insured mortgage. Property owners who prepay their FHA-insured mortgages may continue to have Section 8 contracts but no longer represent a financial risk to the FHA insurance fund. Thirty-two properties have been removed from the watch list for this reason. Properties are also removed from the watch list if the owner decides to reenter the mortgage restructuring process, as has occurred with 32 properties. In addition, properties are removed from the watch list if the owner opts out of the Section 8 program, as has occurred with six properties. According to HUD’s latest physical inspection results, the majority of watch-list properties are in satisfactory physical condition. HUD data show that 182 of the 211 watch-list properties, or 87 percent, scored 60 or higher on their most recent physical inspection—which HUD considers to be satisfactory. However, 75 of these properties have not been inspected since being placed on the watch list. HUD uses the same criteria for determining the timing of inspections for watch-list properties as it does for other multifamily properties. Under HUD’s guidelines, properties that receive a physical inspection score between 90 and 100 are to be reinspected in 3 years, properties that receive a score between 80 and 89 are to be reinspected in 2 years, and properties that receive a score of less than 80 are to be reinspected in 1 year. HUD data indicate that 131 of the watch-list properties received scores between 80 and 100 on their most recent inspection and therefore are not required to be reinspected for 2 or 3 years. According to HUD and industry officials, watch-list properties are at risk of developing physical problems because, in response to reduced cash flow, some owners are likely to cut back on routine maintenance, major improvements, and contributions to replacement reserves. Twenty-six of the watch-list properties, or 12 percent, received a substandard physical inspection score between 31 and 59 on their most recent inspection. Properties with scores in this range may exhibit a variety of significant problems. For example, a property we reviewed in Florida for our case studies that received a score of 33 had a wide range of deficiencies, including health and safety issues, such as inoperable smoke detectors and missing or broken electrical outlets. See appendix I for more information on this case study. Three watch-list properties, or about 1 percent of the total, received a physical inspection score of 30 or less—which HUD considers severely distressed. Severely distressed properties are likely to have major problems. One of our case study properties received a score of 21. Its roof and boilers required replacement, and there was water damage throughout the building. (See app. II for more information on this case study). Figure 2 shows the watch-list inventory sorted by the percentage of properties whose physical inspection score fell into each category. Based on information from FASS, which contains information from property owners’ audited annual financial statements, 131 of the 211 watch-list properties, or 62 percent, show signs of potential financial risk. FASS generates a score that places properties in one of three risk categories—acceptable risk, cautionary risk, and high risk. 
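Returning briefly to the physical inspection results described above, the scoring rules (satisfactory at 60 or higher, substandard at 31 to 59, severely distressed at 30 or less, and reinspection in 3, 2, or 1 years depending on the score) reduce to a pair of simple threshold checks. The Python sketch below illustrates only those stated cutoffs; it is not HUD or REAC software, and the sample scores are hypothetical.

```python
# Sketch of the physical inspection categories and reinspection intervals
# described above. This is an illustration of the stated thresholds only,
# not HUD or REAC software.


def reinspection_interval_years(score: int) -> int:
    """Return the reinspection interval, in years, for an inspection score."""
    if score >= 90:
        return 3   # scores of 90-100 are reinspected in 3 years
    if score >= 80:
        return 2   # scores of 80-89 are reinspected in 2 years
    return 1       # scores below 80 are reinspected in 1 year


def score_category(score: int) -> str:
    """Label an inspection score using the report's categories."""
    if score <= 30:
        return "severely distressed"
    if score <= 59:
        return "substandard"
    return "satisfactory"


if __name__ == "__main__":
    for s in (95, 82, 68, 33, 21):  # hypothetical scores
        print(s, score_category(s),
              f"reinspect in {reinspection_interval_years(s)} year(s)")
```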
FASS indicates that the overall financial condition of 95 watch-list properties is "high risk," while another 36 properties are "cautionary." Figure 3 shows the percentage of watch-list properties in each of HUD's three risk categories based on their 2001 FASS scores. In generating a score, FASS uses a formula that analyzes such factors as whether a property has sufficient cash to meet its mortgage payments and operating expenses, its vacancy rate, and its contributions to the replacement reserve account. According to HUD officials, the overall FASS score is meant to provide HUD with information on the financial condition of its aggregate portfolio and highlight properties that warrant further investigation by spotting potential financial problems before they occur. These officials also told us that these data should be used in conjunction with other available information to assess a property's overall financial condition. One of the indicators that are included in the FASS score is the debt service coverage ratio, which shows how much revenue is available to pay mortgage payments. We found that 97 of the 211 watch-list properties, or about 46 percent, had debt service coverage ratios below 1.0. Specifically, 73 of the 95 high-risk properties, 18 of the 36 cautionary properties, and 6 of the 66 acceptable properties had debt service coverage ratios below 1.0. This suggests that even some properties in the acceptable and cautionary risk categories may experience difficulty meeting their mortgage payments. HUD recently developed monitoring procedures specifically for watch-list properties, but it is too early to tell whether the guidance will be effective in monitoring the watch-list properties. HUD's implementation of these procedures has been slow and inconsistent at the field offices we visited. Believing that mark-to-market rent reductions in the absence of a corresponding mortgage restructuring may place the physical and financial condition of watch-list properties at risk, HUD developed oversight procedures specifically designed to protect the long-term viability of watch-list properties. The procedures, introduced in September 2001, include HUD requirements that (1) all properties be assigned to experienced project managers who are responsible for documenting properties' current condition as well as performing ongoing monitoring activities; (2) all property owners on the watch list submit monthly accounting reports for a minimum of 1 year after rent reduction (these reports are to itemize receipts and disbursements and are intended to aid project managers' analysis of financial performance); and (3) HUD project managers review properties' monthly accounting reports, prepare and submit quarterly status reports on all properties to HUD and OMHAR directors, and handle and resolve compliance and performance problems if revealed by the FASS review. In addition to the above requirements, the new watch-list guidelines provide monitoring guidance to ensure field offices closely track any changes in a property's condition. In particular, the guidance states that the project managers should pay close attention to the physical and financial condition of the watch-list properties that are assigned to them. In terms of the physical condition, the HUD guidance states that the project manager should follow up with the owner to ensure that deficiencies are corrected.
HUD also suggests that its representatives make site visits to monitor repairs and consider requesting interim physical inspections where there are indications of diminished property viability. The new guidelines also specify that watch-list properties are subject to existing asset management, project servicing, and physical inspection guidelines applicable to all multifamily properties. These include the detailed REAC physical inspections and annual financial reporting requirements. Other monitoring suggestions applicable to all multifamily properties include on-site management reviews when there are indicators of potential problems and informal drive-by observations. Properties can also be referred to DEC for further actions when there are signs of potential or existing diminished property viability. DEC works with owners to correct deficiencies and can resort to actions such as levying civil money penalties, taking debarment and suspension actions, and recommending foreclosure. Only in July 2002—10 months after saying it would do so "shortly" in its September 2001 guidance—did HUD develop the format that field offices should use in developing quarterly reports for the watch-list properties. As a result, until now, field offices have not been able to implement this aspect of the guidance. Furthermore, during our review, we visited selected HUD field offices and found differing levels of compliance with other aspects of the watch-list monitoring guidance, specifically the experienced project manager and monthly accounting report requirements. In the six field offices we visited to review sample watch-list cases, implementation—as characterized by HUD officials—ranged from none to exceeding the minimum guidelines. At one office, officials had not assigned our sample case to an experienced project manager and were not collecting monthly accounting reports at the time of our visit in April 2002; officials at another office did not introduce the guidelines until our visit in October 2001; three offices had assigned watch-list properties to experienced project managers and were requiring monthly accounting reports; and one office was assigning monitoring responsibility to a single experienced project manager, was receiving monthly accounting reports, and had gone beyond the minimal requirements by creating a computer spreadsheet to facilitate trend analysis of the monthly financial data. We provided a draft copy of this report to HUD for its review and comment. In its written comments, HUD's Assistant Secretary for Housing-Federal Housing Commissioner stated that we provided valuable advice and guidance during our review and offered some clarifying comments and technical modifications, which we have incorporated into this report as appropriate. In addition, the Assistant Secretary stated that, with respect to our assessment that the implementation of the watch-list procedures was slow and inconsistent at the offices we visited, additional guidance has been sent to HUD Multifamily HUB directors that should lead to more consistent oversight. The Assistant Secretary also stated that other measures are being taken to monitor the watch-list properties, including an analysis of these properties' financial statements by REAC financial specialists and sharing the results of the analyses with project managers in the field. The full text of HUD's comments can be found in appendix VIII. We will send copies of this report to the Secretary of Housing and Urban Development.
We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7631. Key contributors to this report are listed in appendix IX. Williams Apartments is a 37-unit Florida complex that was constructed in 1969 and has been owned by the same sole proprietor since that date. The property is located in Titusville, a small town of approximately 30,000 people. There are 5 residence buildings composed of 37 units (18 2-bedroom and 19 3-bedroom). All units receive Section 8 subsidies. The managing agent for the complex is currently the owner. He also managed the property from 1969 until 1995, and two independent management companies managed it from 1995 until 2000. The property's rents were reduced in March 2001. Based on full occupancy, the total annual rental income was reduced approximately 36 percent. The monthly rents on the two-bedroom units were reduced from $568 to $345, and the three-bedroom units were reduced from $656 to $440 (see fig. 4 below). The property was placed on the watch list in March 2001. The owner, who is of advanced age, refused to sign the restructuring agreement after it was completed in November 2000, and he would not provide us with specific reasons for his refusal. The owner's cash contributions were set at $25,594, including approximately $21,000 for rehabilitation escrow. The restructured mortgage would mature in 21 years and 8 months and would provide a 1.2 debt service coverage ratio, making the property financially viable to operate. The existing mortgage had an unpaid balance of $205,657 as of October 2001 and will mature in October 2010. The Williams Apartments received a score of 33 (based on a 100-point system) on its most recent Real Estate Assessment Center (REAC) physical inspection, which was conducted in December 2001. The inspection report cited a wide range of life-threatening deficiencies, including missing/inoperable electrical cover plates and blocked emergency exits. The property had received a similar low score of 36 on its prior physical inspection. HUD considers these scores to indicate that the property was in substandard condition. In February 2001, a HUD manager stated that as a result of the owner allowing the physical condition of the property to deteriorate, the complex had been "on and off" HUD's highest risk list for the last 15 years. A contractor for HUD's Departmental Enforcement Center (DEC), where the property was referred because of physical and financial problems, stated that since 1994, the property has consistently received below-average ratings on maintenance policies and procedures. The contractor visited the property in April 2001 and reported many physical problems that were cited in previous inspections. In March 2002, the HUD senior project manager received a list of complaints signed by 20 tenants. They complained of ceilings falling down, the absence of hot water, electrical outages, and plumbing problems. Deficiencies identified by the November 2000 and December 2001 inspections have not been addressed, and the owner has not submitted the required plan of corrective actions to HUD. The property received a financial assessment score of 65 in 1999. HUD considers this score to indicate that the property is in "cautionary" financial condition.
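As a rough check on the approximately 36 percent reduction in annual rental income reported above, the following sketch multiplies the unit counts by the old and new monthly rents for Williams Apartments, assuming full occupancy as the report does. It is a back-of-the-envelope illustration, not a reproduction of OMHAR's underwriting.

```python
# Back-of-the-envelope check of the rent reduction at Williams Apartments:
# 18 two-bedroom units reduced from $568 to $345 per month and 19
# three-bedroom units reduced from $656 to $440 per month, at full occupancy.

UNITS = [
    # (number of units, old monthly rent, new monthly rent)
    (18, 568, 345),  # two-bedroom units
    (19, 656, 440),  # three-bedroom units
]

old_annual = 12 * sum(count * old_rent for count, old_rent, _ in UNITS)
new_annual = 12 * sum(count * new_rent for count, _, new_rent in UNITS)
reduction = (old_annual - new_annual) / old_annual

print(f"old: ${old_annual:,}  new: ${new_annual:,}  reduction: {reduction:.0%}")
# old: $272,256  new: $174,840  reduction: 36%
```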
A more current financial analysis is not available because the owner failed to submit annual financial statements for calendar year 2000. Also, a June 2001 HUD record indicated that the owner had repeatedly been delinquent on his mortgage payments. HUD devoted considerable monitoring attention to this property before and after its placement on the watch list and has thoroughly documented the property's physical and financial condition. The owner has refused to address the problems identified by HUD oversight, and tenants have complained that living conditions are deteriorating. Before the property entered the watch list, it was referred to DEC in February 2001 because the owner failed to resolve deficiencies identified in the November 2000 inspection and repeatedly violated his regulatory agreement by collecting funds for self-management of the property. The HUD field office had required that a new management agent be appointed due to the poor management and needed physical repairs. In April 2001, the owner's attorney expressed the opinion that the owner should return the property to HUD because the mortgage and necessary rehabilitation costs far exceeded its appraised value, and the rent reduction would result in further deterioration. In August 2001, HUD refused to grant a request that the mortgage be forgiven as part of the owner's attempt to donate the property to a nonprofit corporation. HUD specified repairs as the number-one priority and authorized the owner to begin addressing these maintenance items with funds normally reserved for capital replacements. In October 2001, HUD notified the owner of his failure to submit complete and correct monthly accounting reports, as required by watch-list monitoring guidelines. In December 2001, DEC mailed the owner a certified letter notifying him that he was in default on his HUD housing assistance contract and regulatory agreement. Cited violations were (1) failure to properly maintain the property and respond to HUD physical inspection reports, (2) failure to file and late filing of annual financial statements, and (3) late mortgage payments. He was given 30 days to take corrective action. In January 2002, the owner met with HUD and DEC officials and left with the understanding that he must respond to all REAC inspections and provide annual financial statements and monthly accounting reports. In March 2002, HUD again notified the owner of problems with the monthly accounting reports and requested that he reimburse the project operating account $1,673 paid to himself as manager. The smaller income stream, resulting from reduced rental rates, has increased the potential risk for this property. A HUD manager was of the opinion that the property would have "made it" if restructuring had been completed, but the owner had not responded to DEC's demand for corrective action as of April 2002. Foreclosure is the next course of action. If that occurs, the tenants will be given relocation vouchers.

Appendix II: Parkside Terrace Apartments – Washington, D.C.

Parkside Terrace Apartments is a 291-unit apartment building located in the Southeast quadrant of Washington, D.C. It was built in 1966. A managing agent oversees the operations for the owner, Parkside Terrace Company Limited Partnership. The managing agent assigned an on-site manager to Parkside, which is a 12-story building consisting of 12 efficiencies, 54 1-bedroom units, 162 2-bedroom units, and 63 3-bedroom units.
The area surrounding Parkside Terrace Apartments has improved during the 1990s, but overcrowding, unemployment, and poverty are still problems. Two of the three public housing developments in the area have been demolished and replaced by a new development; the third was also demolished and is now a vacant lot. The area has also experienced a drop in crime. The overcrowding rate for occupied units in the area is above 25 percent. As of February 2002, only 30 percent of Parkside Terrace Apartments' households reported having a working adult, and the median household income for all the households in the property is $15,700. Rents for 142 of Parkside Terrace Apartments' 291 units were reduced on April 1, 2001. The rents for these units were reduced between 13 and 26 percent. Sixty-nine units are still governed by a project-based Section 8 contract that does not expire until October 2003 and will continue to receive above-market rents until that time. The remaining 80 units are not receiving Section 8 subsidies (see fig. 5). On October 23, 2000, HUD's Office of Multifamily Housing Assistance Restructuring (OMHAR) determined that Parkside Terrace Apartments was not financially viable; thus, it was ineligible for mortgage restructuring. As a result, on April 1, 2001, the property was placed on the watch list. OMHAR made this determination because the property's physical condition assessment report, prepared by a contractor for the participating administrative entity (PAE), includes a 20-year repair plan and recommends major capital replacements and repairs that would cost more than $4.4 million in the first year. Major capital improvements have never been performed on the property. Parkside Terrace Apartments' roof, boilers, and over 500 of its windows need to be replaced. According to OMHAR, the total cost of the needed improvements to the property is too high for OMHAR to finance under the mark-to-market program. Before OMHAR determined the property to be financially nonviable, the PAE recommended that the property receive a mortgage restructuring since a rent reduction alone would leave the property with inadequate cash flow to meet its mortgage payments and finance the property's rehabilitation cost. A representative of the PAE told us that it might have been possible to restructure the property's debt despite the large rehabilitation cost, based on the assumption that tax credits and tax-exempt bond financing would be made available. However, the PAE did not perform any analysis to assess the feasibility of using tax credits and tax-exempt bonds for this purpose. According to HUD's physical inspection scores, the property is in poor physical condition. Its physical inspection scores have declined over time. It received a physical inspection score of 43 on February 5, 1999, and a score of 38 on December 1, 2000, both of which HUD considers to indicate poor condition. On October 30, 2001—approximately 6 months after being placed on the watch list—Parkside Terrace Apartments received a physical inspection score of 21. Water damage was discovered in some of the property's units and hallways during the October 30, 2001, physical inspection. Since the property received a score of less than 30, it was referred to DEC. DEC's contractor inspected the property in April 2002 and discovered severe water damage, caused by leaking copper pipes throughout the building. Several balconies needed to be repaired as well.
Based on HUD's Financial Assessment Subsystem (FASS), Parkside Terrace Apartments' financial condition is cautionary. Prior to the rent reduction, the property received a financial assessment score of 69, had a debt service coverage ratio of 1.2, and had a vacancy rate of about 20 percent. For the fiscal year ending December 31, 2001 (during which time the rents were reduced), Parkside Terrace Apartments received a financial assessment score of 65, had a debt service coverage ratio of 0.7 (which suggests the property did not have sufficient cash flow to meet its mortgage payments), and had a vacancy rate of above 22 percent. The vacancy rate has been above 14 percent since 1996. According to the managing agent, the mark-to-market rent reduction has further contributed to Parkside's declining financial condition. According to the managing agent, the property's expenses increased substantially in 2001. The natural gas and electricity costs increased by 300 percent and 150 percent, respectively. Parkside Terrace Apartments also experienced an increase in insurance costs as a result of the events of September 11, 2001. HUD has approved several suspensions of replacement reserve deposits and changes in the replacement reserve deposit requirements since 1994. Property owners are required to deposit money into reserve for replacement accounts, which are intended to be used to pay for capital improvements such as roofs, boilers, and windows. In 1994, HUD required that Parkside increase its replacement reserve deposit amount from $4,000 to $20,000 per month. HUD approved several consecutive suspensions of the replacement reserve deposits, as requested by the managing agent, from July 1996 to June 1999. The managing agent used these funds to renovate 30 vacant units. In September 2001, HUD's project manager assigned to Parkside approved a reduction of the deposit requirement from $20,000 to $3,000 per month because (1) the property experienced a significant reduction in rent under the mark-to-market program, and (2) the HUD field office determined that the existing deposit amount was too high in light of a proposal to demolish the property. HUD's District of Columbia field office (responsible for monitoring Parkside Terrace Apartments) has partially implemented the September 2001 monitoring guidance for watch-list properties. The Parkside Terrace Apartments managing agent has submitted monthly accounting reports, and an experienced project manager has been assigned to the property. However, because HUD headquarters has not provided a format to be used for watch-list properties' quarterly reports, the field office has not prepared them. Currently, the managing agent is attempting to purchase Parkside Terrace Apartments, with plans to demolish it and construct a mid-rise building for the elderly and townhouses. The townhouse portion of the proposal contains a plan for both market-rate and subsidized rental townhouses. The Colony is a 17-unit apartment complex located in Providence, Rhode Island. The building, consisting of efficiencies and one- and two-bedroom units, is a former rooming house that underwent substantial rehabilitation and became part of the Section 8 elderly housing program in 1981. In 1984, the complex received a waiver from HUD to admit non-elderly residents; the complex currently has no elderly residents. The majority of residents are single females with children. All the units are currently subsidized by HUD's Section 8 program.
The Colony is in a neighborhood of Providence (South Providence) that has a history of drug and crime problems. According to the management agent, the Colony did not house elderly residents in the early 1980s because the elderly were afraid to live in this area. While the surrounding neighborhood has improved somewhat in recent years, the immediate neighborhood continues to have significant drug and crime activity, and some nearby houses are in poor physical condition. The watch-list contract took effect in October 2001. Under this contract, the rents were reduced by an average of 34 percent, and the total monthly maximum Section 8 payment from HUD decreased from $12,708 to $8,370. The owners appealed OMHAR's rent determination; subsequently, the rents were increased slightly, and the total monthly maximum payment also increased, to $9,415. Ultimately, the new rents were set at 25 percent below the pre-mark-to-market levels. The rents for the Colony were reduced in October 2001. The rents on the efficiencies were reduced from $635 to $475; the rents on the 1-bedroom units were reduced from $771 to $560; and the rents on the 2-bedroom units were reduced from $910 to $695 (see fig. 6). OMHAR placed the Colony on the watch list on November 2, 2001, because it determined that the property was not financially viable; thus, it was ineligible for mortgage restructuring. OMHAR made this determination primarily because the property was appraised at $267,000 but had a mortgage balance of $415,675. In addition, according to HUD and the management agent, the Colony also required approximately $200,000 in physical rehabilitation costs. The Colony received a score of 80 on its most recent physical inspection, which was conducted in September 2001 (2 months prior to placement on the watch list). This represents a marked improvement from its previous scores of 24 and 66 in 1999 and 2000, respectively. The score of 80 indicated that the Colony was in satisfactory physical condition. However, according to HUD and the management agent, the property had significant problems that required substantial rehabilitation. Because of this score, the Colony is not scheduled for another physical inspection for 2 years, until September 2003. The Colony's financial condition continues to decline. The property received financial scores of 69 in 1998, 60 in 1999, and 39 in 2000. Also, since the Colony had a debt service coverage ratio of 0.7 in 2000, there is evidence that the Colony had difficulty meeting its financial obligations even before its rents were reduced. After the rent reduction, the Colony experienced greater difficulty meeting its mortgage payments and has frequently been 30 to 60 days delinquent. The HUD field office has monitored this property in accordance with current HUD guidance. Specifically, the field office has assigned the Colony to an experienced project manager, who receives and reviews monthly accounting reports. In addition, the project manager conducts regular management reviews of the property, which include site visits and reviews of the property's physical, financial, and managerial condition. The current owners of the Colony have entered into discussions with a nonprofit group to purchase the Colony. This nonprofit group has experience in successfully rehabilitating Section 8 properties in the Providence area and is interested in rehabilitating the property and using it to house elderly tenants.
According to the project manager, it would be best for the Colony to be sold to owners who are willing to contribute significant financial resources to refurbishing the property. If the property cannot be sold in the near future to an owner who is willing to pay for the substantial repairs that are needed, the project manager said he is planning to request that the tenants receive vouchers and the property be discontinued as a Section 8 property. Moreover, he said that it would not be difficult for the 17 households to find other Section 8 units in the area. Parkside Apartments is a 94-unit complex located in Gillette, Wyoming. The complex consists of 4 buildings with 24 1-bedroom units and 70 2-bedroom units. All of the units are currently subsidized by HUD's Section 8 program. The property was initially occupied as a Section 8 property in February 1980, with its initial Section 8 contract expiring on February 28, 2001. The population of Gillette is approximately 19,000. There is very little alternative housing in Gillette, and the closest town is 75 miles away. The owners of this apartment complex also have other Section 8 properties–Acadian Manor in Lafayette, Louisiana, and Mountain View Apartments in San Jose, California. In addition, the owners previously owned the Pittsburgh Plaza Apartments in Pittsburgh, California. The owners of Parkside Apartments have been charged with and convicted on several counts, including fraud and conspiracy, at each of the HUD-subsidized properties they own. OMHAR reduced the monthly rents for Parkside Apartments on March 1, 2001, by approximately 28 percent. The rents were reduced from $568 to $405 for the 1-bedroom units; $666 to $485 for the 2-bedroom (1 bathroom) units; and $705 to $495 for the 2-bedroom (2 bathrooms) units (see fig. 7). OMHAR placed Parkside Apartments on the watch list in April 2001 because the owners were indicted on criminal charges and suspended from conducting further transactions with HUD. The owners of Parkside Apartments have a criminal history dating back to 1992. They were charged in 1996 in a California municipal court with crimes committed in 1992, including filing a false insurance claim, grand theft, and falsely reporting a crime. While one of the owners was still on bail in March 1999, the owners were indicted by the State of California for welfare fraud, health care fraud, conspiracy, and six counts of grand theft. Some of the charges resulted from the owners receiving subsidy payments for vacant apartments in the owners' California Section 8 property, and the owners were subsequently suspended from further government contracting in October 1999. After receiving a tenant complaint that the owners were also billing HUD for vacant units at Parkside Apartments, HUD investigators raided the owner's home and Parkside's office in July 2000. The investigation revealed that the owners were receiving subsidy payments for empty apartments, just as they had in their California property. As part of the owners' plea agreement, the owners paid over $1.4 million in restitution. REAC inspected Parkside Apartments in October 1999 and gave it a score of 90. HUD considers this score to indicate that the property is in satisfactory condition. Based on HUD's guidance, the property should have had a physical inspection in June 2002. As of late July 2002, the inspection had not been done. The property's financial scores have increased annually since 1999, with all of the scores in HUD's highest performance category.
The financial scores received were 73, 78, and 93 in 1999, 2000, and 2001, respectively. The HUD field office has monitored this property in accordance with current HUD guidance. The last management review was conducted in June 1999, and the property received an overall rating of unsatisfactory because the owners had not submitted annual financial statements for fiscal years 1997 and 1998. The management review also disclosed that the owners had serious unresolved internal control deficiencies in their accounting systems. In November 1998, the HUD project manager noted in the property's file that the owner was diverting funds from Parkside Apartments to his other properties. As a result, the project manager suggested pursuing sanctions against the owner if he was not indicted by the end of January 1999. The owner was indicted in March 1999. In October 2001, the owner of Parkside Apartments entered into a settlement agreement with HUD. In this agreement, the owner agreed to be permanently debarred from participating in any activities with the federal government. The owner also agreed to divest himself of any interest in HUD properties within 24 months from the date of the agreement. HUD's project manager stated that the owner will most likely opt out of the Section 8 program at the end of the 24 months and compete in the private rental market. Due to the tight housing market in Gillette, the property could succeed in the private rental market. According to HUD officials, however, the tight housing market will make it very difficult to relocate the existing tenants if the owners sell the property or opt out of the program. New Haven Apartments is a 50-unit complex located in Athens, Texas. It was constructed in 1974. A managing agent oversees operations for the owner, and an on-site manager maintains the units and collects rent. The complex consists of 8 1-bedroom, 26 2-bedroom, and 16 3-bedroom units. All of the property's units have Section 8 project-based assistance. There is no tenant organization, and the managing agent said he has not been aware of any tenant concerns during the mark-to-market process. However, the HUD file contains a letter to the PAE signed by 14 tenants that identifies a variety of problems, including sewage back-ups, defective air conditioners, and leaking ceilings. The occupancy rate is approximately 90 percent. Rents were reduced for New Haven Apartments on August 1, 2001. Annual rental income, based on full occupancy, was reduced approximately 7 percent. Rents on the one-bedroom units were reduced from $427 to $350; rents on the two-bedroom units were reduced from $443 to $420; and rents on the three-bedroom units were reduced from $496 to $475 (see fig. 8). OMHAR placed New Haven Apartments on the watch list because the owner rejected the mortgage restructuring plan. The management agent stated that the owner did not accept the plan because of the 25-year term of the new mortgage agreement. Due to the age of the property (also 25 years) and the declining conditions of the neighborhood, the owner did not want to sign another 25-year mortgage agreement. The unpaid balance on the current mortgage is approximately $475,000, with maturity scheduled for the year 2015. However, if the mortgage were restructured, the new $408,026 mortgage would not mature until the year 2026. In the absence of a mortgage restructuring, OMHAR predicts the property will experience a negative cash flow of approximately $4,200 annually and a debt service coverage ratio of 0.9.
The managing agent stated that he was uncertain whether the August 1, 2001, rent reduction has had a negative impact, but as of April 2002, New Haven Apartments has been able to maintain a positive cash flow. The on-site manager said the maintenance budget has not changed and no major maintenance problems have occurred since the rent reduction. REAC's physical inspection score dropped from 81 in October 1999 to 68 in November 2001, but HUD's project manager said he was doubtful that the rent reduction contributed to the decline in the property's physical condition. He based this statement on the fact that the November 2001 inspection occurred only 3 months after the rent was reduced. HUD considers a score of 68 to indicate that the property is in satisfactory condition and should be inspected on an annual basis. New Haven's financial scores for the past 4 years have not varied significantly and were consistently in HUD's acceptable category. The financial score for the period ending December 31, 2001, was 78—only a 3-point drop from the score of 81 given in December 2000. In 1998 and 1999, the scores were 87 and 78, respectively. Representatives from HUD's Fort Worth field office (responsible for monitoring New Haven Apartments) stated that they did not implement the watch-list monitoring guidelines until we made our visit to the field office in October 2001. As required by the guidance, the managing agent began submitting monthly accounting reports in October 2001, and reports through January 2002 have been submitted. The project manager stated that there has been "little or no change" in the indicators (e.g., income and vacancy rates) that HUD looks at to determine the financial "health" of the property. Miyako Gardens Apartments is a 100-unit complex located in Los Angeles, California. Developed in 1981 by a limited partnership, Miyako Gardens provides affordable housing to low- and moderate-income tenants. A managing agent oversees operations at Miyako Gardens for the limited partnership. The complex consists of 90 1-bedroom and 10 2-bedroom units, all of which are subsidized by HUD's Section 8 program. Although not established as a property for the elderly, all of the tenants are senior citizens. According to the property manager, the complex is currently operating at 100-percent occupancy and has few vacancies each year. The complex has a waiting list of approximately 3 years. Miyako Gardens is located in the "Japan Town" area east of the Central Business District in downtown Los Angeles. The immediate area has good appeal, consisting of a privately owned condominium, two subsidized apartment complexes, a Buddhist temple, and several restaurants. According to OMHAR's rent comparison study and the property manager, there are no significant negative influences, and the immediate area is relatively safe, quiet, and drug free. Income and employment levels are average to above average in the area. In addition, property values and rents have increased over the past 2 years. Prior to the mark-to-market program, rents at Miyako Gardens were set at $907 for a 1-bedroom unit and $967 for a 2-bedroom unit. As a result of restructuring, rents for Miyako Gardens were reduced to $745 and $930, respectively. This represents a decrease of 18 percent in rent for the 1-bedroom units and 4 percent for the 2-bedroom units (see fig. 9).
Miyako Gardens was placed on the watch list in August 2001, but OMHAR later determined that the property should not be on the watch list because the owner decided to opt out of the Section 8 program. Since originally deciding to opt out of the Section 8 program, the owners have now indicated that they wish to remain in the program and OMHAR will determine whether the property requires a mortgage restructuring. The property received a score of 99 on its most recent physical inspection, which took place in August 1999 (2 years before the property was placed on the watch list). HUD considers this score to indicate that the property was in satisfactory physical condition. The next inspection is scheduled in August 2002. Based on our site visit, testimony from the property manager, and the rent comparison study, the property has been well managed and maintained and is in good market condition. The rent comparison study noted only very minor deferred maintenance, consisting of cosmetic touch-up items, and the property’s curb appeal was rated as better than the typical property of its generation. Miyako Gardens received a financial score of 91 for 2001. This represents an increase from its two previous scores of 82 for 1999, and 84 for 2000. HUD’s project manager assigned to the Miyako Gardens Apartments has not visited the complex since it was placed on the watch list. The project manager stated that, based on the high physical and financial scores and the absence of any “red flags,” there has not been an urgency to inspect the property. HUD’s new guidance on monitoring watch-list properties requires, among other things, that watch-list properties be assigned to an experienced project manager who monitors the property by requesting and analyzing monthly accounting reports provided by the managing agent. To date, the HUD field office has not met these requirements. The office supervisor stated that the office is short-staffed and is currently undergoing a reorganization. An experienced project manager will be assigned in the future, and the office will begin to request the monthly accounting reports. Our objectives were to answer the following questions: (1) What are the reasons properties have been placed on HUD’s watch list? (2) What is the physical condition of properties on the watch list? (3) What is the financial condition of properties on the watch list? (4) What are HUD’s procedures for monitoring watch-list properties? To assess the reasons that OMHAR places Section 8 properties on the watch list, we obtained a database extract from OMHAR’s Management Information System (MIS) as of April 15, 2002. This extract contained information on over 2,000 properties that had entered OMHAR’s portfolio since late 1998, including properties OMHAR assigned to its watch list. We focused on the reasons why OMHAR assigns properties to the watch list and summarized the various results for each cause. We also conducted telephone interviews with the agents responsible for developing the restructuring plans for a random sample of the properties on the watch list to determine why the owners of the properties did not complete the restructuring process. To determine the physical and financial condition of the watch list properties, we used OMHAR’s April 15, 2002, database extract and obtained a second database extract from HUD’s Real Estate Management System (REMS) as of June 2002. This system contained the latest complete information on watch-list properties’ physical and financial scores. 
We also used demographic data found in REMS to select our six case studies. To assess the physical and financial conditions of watch-list properties, we sorted this inventory into various scoring ranges and computed aggregate average physical and financial scores for OMHAR's watch-list inventory. We compared these results with similar data for all properties that have gone through the mark-to-market program, of which there are over 2,000, to determine whether the physical and financial conditions of the two inventories are similar. To assess the reliability of OMHAR's data, we (1) performed electronic testing (specifically for accuracy, reasonableness, and completeness); (2) reviewed related documentation from HUD; and (3) worked closely with OMHAR officials to identify any data problems. Where we found discrepancies (such as nonpopulated fields or data-entry errors), we brought them to OMHAR's attention and worked with these officials to correct the discrepancies before conducting our analyses. We determined the data we used were reliable for purposes of this report. To assess the effectiveness of HUD's monitoring procedures, we reviewed HUD's policies and procedures for monitoring properties and discussed HUD's implementation of the policies and procedures at selected HUD field offices. We also visited a judgmental sample of properties monitored by geographically distributed HUD offices to determine whether HUD's monitoring procedures are sufficient to quickly detect signs of physical or financial deterioration of the properties. In conducting our review, we interviewed officials in HUD and OMHAR headquarters in Washington, D.C., and HUD personnel in six HUD field offices: Providence, Rhode Island; Denver, Colorado; Jacksonville, Florida; Los Angeles, California; Washington, D.C.; and Fort Worth, Texas. In addition, we conducted a structured telephone interview with the PAEs for a random sample of 60 watch-list properties. We also interviewed OMHAR staff in Chicago, Illinois. We performed our work from October 2001 through July 2002 in accordance with generally accepted government auditing standards. In addition to those named above, Andy Clinton, Mark Egger, Rafe Ellison, Reid Jones, John McGrail, Sara-Ann Moessbauer, Tinh Nguyen, John Shumann, Rick Smith, Mark Stover, and Alwynne Wilbur made key contributions to this report.
In 1997, Congress established the mark-to-market program to help preserve the availability and affordability of low-income rental housing while also reducing the cost to the federal government of rental assistance provided to low-income households. The mark-to-market program was developed for multifamily properties that are both insured by the Federal Housing Administration (FHA) in the Department of Housing and Urban Development (HUD) and aided through the project-based Section 8 program. Under the mark-to-market program, at the time of the assisted properties' Section 8 contract renewal, HUD resets rents to prevailing market levels and restructures a property's mortgage debt, if necessary, to permit a positive cash flow. This process is designed to ensure that properties whose rents are reduced to market level still have sufficient income to meet the mortgage payments and operating expenses on the property. The Office of Multifamily Housing Assistance Restructuring (OMHAR) was established within HUD to administer the mark-to-market program. OMHAR places federally assisted, FHA-insured properties on the watch list when their rents have been reduced to market level under the mark-to-market program but their mortgages have not been restructured. Two hundred eleven properties have been placed on the watch list for one of three reasons: (1) the property owners elected not to enter into or complete the mortgage restructuring process; (2) OMHAR determined that the property was not financially viable for restructuring; or (3) the property owners were disqualified from the mortgage restructuring process because of certain actions by the owners, such as financial or managerial improprieties. Eighty-seven percent of OMHAR's watch-list properties received HUD inspections that indicated they were in satisfactory physical condition, but some of these inspections occurred before the properties were placed on the watch list. The timing of HUD's inspection cycle depends on the results of each property's most recent inspection. As a result, a watch-list property that received a high score on its previous physical inspection may not be reinspected for up to 3 years from the last inspection. While OMHAR believes that all properties on the watch list are potentially at financial risk, HUD's Financial Assessment Subsystem--which contains information on property owners' audited annual financial statements--indicates that 62 percent of the watch-list properties show signs of potential financial risk. HUD established guidance for monitoring the watch-list properties 10 months ago, but it is too early to assess how effective the monitoring will be.
Funding for transit projects comes from public funds allocated by federal, state, and local governments and system-generated revenues earned by transit agencies from providing transit services. The Department of Transportation reported that in 2008 (1) federal funds accounted for nearly 40 percent of total transit agency capital expenditures, (2) state funds provided approximately 12 percent, and (3) local funds provided the remaining 48 percent. Our November 2012 report found similar funding trends. Specifically, local funding exceeded total federal funding for the 25 projects approved for federal New Starts grants—part of FTA’s Capital Investment Grant Program—from October 2004 through June 2012. Federal funding remains an important part of this picture, and according to FTA, MAP-21 authorized federal funding for public transit—$10.6 billion for fiscal year 2013 and $10.7 billion for fiscal year 2014. However, while states and localities face their own funding challenges, MAP-21 did not address long-term federal transportation funding challenges. Federal funds available for FTA’s transit programs come from two sources: (1) the general fund of the U.S. Treasury and (2) the Mass Transit Account of the Highway Trust Fund. Both of these sources of federal funding face difficulties. Currently, congressional budget discussions raise issues about general fund federal spending. This affects transit programs, such as the Capital Investment Grant Program, which are funded through annual appropriations from the general fund. In addition, funds from the Highway Trust Fund are provided to transit programs primarily through statutory formulas, and there are concerns over the fund’s decreasing revenue. The Highway Trust Fund, which is funded through motor fuel and other highway use taxes, has been the primary mechanism for funding federal highway and transit programs for more than 50 years. These taxes were established to make the federal-aid highway program self-financing—that is, paid for by the highway users who directly benefit from the program. For many years, user fees in the form of federal fuel taxes and taxes on commercial trucks provided sufficient revenues to the Highway Trust Fund; however, revenues into the fund have eroded over time, in part because federal fuel tax rates have not increased since 1993 and in part because of improvements in vehicle fuel efficiency. In May 2013, the Congressional Budget Office estimated that to maintain current spending levels plus inflation between 2015 and 2022, the Highway Trust Fund will require over $132 billion more than it is expected to take in over that period. About $35 billion of that deficit would be in the transit account. To maintain current spending levels and cover revenue shortfalls, Congress has transferred more than $50 billion in general revenues to the Highway Trust Fund since fiscal year 2008. This approach has effectively broken the link between taxes paid and benefits received by users and may not be sustainable given competing demands and the federal government’s growing fiscal challenge. As we have previously reported, this trend will continue in the years ahead as more fuel-efficient and alternative fuel vehicles take to the roads. We have previously concluded that a sustainable solution to funding surface transportation is based on balancing revenues to and spending from the Highway Trust Fund. Ultimately, major changes in transportation revenues, spending, or both will be needed to bring the two into balance.
For this and other reasons, and because MAP-21 did not address these issues, funding surface transportation remains on GAO’s High-Risk List. Our recent work describes how sound capital-investment decisions can help transit agencies use federal and other transit funds more efficiently, and MAP-21’s new requirements for transit agencies to use asset management are consistent with our recent findings. Improved transit asset management is important because of (1) the large backlog of transit assets—such as buses, rail cars, elevators, and escalators—that are already beyond their useful lives; (2) increasing demand for transit services; and (3) financial strains on transit providers due to rising fuel prices, decreased state and local funding, and likely limitations of federal funding going forward. According to FTA, roughly $78 billion (in 2009 dollars) would be necessary to cover the costs of rehabilitating or replacing the nation’s transit assets and bringing them to a state of good repair. Sound asset-management practices can help agencies prioritize their capital investments to help optimize limited funding. We reviewed agencies by conducting site visits and interviews, examining documents, and consulting relevant literature. We selected agencies for review in two ways: (1) using a selection process for transit-agency site visits, and (2) reviewing transit agency case studies included in two key reports we identified through a comprehensive literature review. Transit agencies that measure and quantify the effects of their capital-investment decisions are likely to make a stronger case for additional funding from state and local decision-makers. However, of the nine transit agencies we visited, only two measured the effects of capital investments on the condition of certain transit assets, and none of the agencies measured the effects on future ridership, in part because they lacked the tools to determine these effects. Figure 1 below shows the extent to which selected transit agencies measured the effect of capital investments. Accordingly, we recommended that the Administrator of FTA conduct additional research to help transit agencies measure the effects of capital investments, including future ridership effects. FTA concurs in part with this recommendation. FTA agrees that more research to identify the operational impacts of not addressing the state of good repair backlog will support better asset management by transit agencies. However, according to FTA officials, given the agency’s current budget situation, it is difficult for the agency to commit to conducting additional research in the near future. FTA has almost $10 million in research projects on transit asset management underway. MAP-21 directed FTA to provide transit agencies with the tools and guidance they need to help them better prioritize capital investment decisions. MAP-21 also directed FTA to develop asset management requirements for all recipients of federal transit program funds, including a transit asset management plan, which must include, at a minimum, capital asset inventories, condition assessments, and investment priorities. Since the enactment of MAP-21, FTA has been developing guidance to help transit agencies implement leading practices in transit asset management and a decision support tool to prioritize investments. FTA also issued an advance notice of proposed rulemaking (ANPRM) in October 2013 and requested that comments be submitted by January 2, 2014.
The ANPRM states that FTA is seeking to ensure that public transportation systems are in a state of good repair and that transit agencies provide increased transparency into their budgetary decision-making processes. FTA is seeking public comment on, among other things, (1) proposals it is considering and (2) questions regarding the following: the requirements of a National Transit Asset Management System, including four options for defining and measuring state of good repair, and the relationship between safety, transit asset management, and state of good repair. As FTA completes its analysis of these comments and further develops a National Transit Asset Management System, transit agencies may be better equipped to implement current leading practices in transit asset management and comply with future transit asset management requirements envisioned by MAP-21. In addition to maintaining transit agencies’ existing assets in a state of good repair, some transit agencies also face a need to build and expand their systems to meet demand. To meet these needs in a financially constrained environment, transit agencies can apply for capital funding available from the federal government through the Capital Investment Grant Program, which includes New and Small Starts grants. In many cases, transit agencies have taken advantage of this federal funding to develop bus rapid transit (BRT) projects, which often require less capital investment than other transit modes. For example, New York implemented a BRT project for the M15 line. This BRT line provides critical transportation service in Manhattan for over 55,000 riders a day, connecting many neighborhoods that are a long walk from the nearest subway station. Thus, transit agencies are able to meet transit demand with BRT projects at a lower initial capital investment than other modes of transit, such as heavy rail. Specifically, we found in our 2012 report that median costs for the 30 BRT and 25 rail transit projects we examined from fiscal year 2005 through February 2012 were about $36.1 million and $575.7 million, respectively. According to all five of the BRT project sponsors we spoke with during our work, even at a lower capital cost, BRT could provide rail-like benefits. For example, Cleveland RTA officials told us the Healthline BRT project cost roughly one-third ($200 million) of what a comparable light-rail project would have cost. Similarly, Eugene, Oregon, Lane Transit District (LTD) officials told us that the agency pursued BRT when it became apparent that light rail was unaffordable and that an LTD light rail project would not be competitive in the New Starts federal grant process. In terms of benefits, these projects—and most other BRT projects we examined—increased ridership and improved travel times over the previous bus service. As a result of the lower initial capital costs for BRT along with the benefits of improved service, transit agencies took advantage of federal New and Small Starts dollars to invest in a relatively large number of BRT projects, as compared to other modes of transit. (See fig. 2). In addition, we found that although many factors contribute to economic development, most local officials in the five case study locations we visited believed that BRT projects were contributing to localized economic development.
For instance, officials in Cleveland told us that an estimated $4 to $5 billion had been invested near the Healthline BRT project—investment associated with major hospitals and universities in the corridor. While most local officials believed that rail transit had a greater economic development potential than BRT, they agreed that certain factors can enhance BRT’s ability to contribute to economic development, including physical BRT features that convey a sense of permanence to developers; key employment and activity centers located along the corridor; and local policies and incentives that encourage transit-oriented development. Our analysis of land value changes near BRT lines at our five case study locations lends support to these themes. MAP-21 included a few changes that affected BRT. For example, MAP-21 defined BRT more narrowly and specifically than SAFETEA-LU (Pub. L. No. 109-59, 119 Stat. 1144 (Aug. 10, 2005)). Specifically, MAP-21 required that BRT projects include features that emulate the services provided by rail, including defined stations rather than bus stops. This is consistent with our work, as we found that including rail-like features appears to lead to increased economic development along BRT corridors. In addition, MAP-21 made a distinction between BRT projects that are eligible for New Starts versus Small Starts funding. Effective federal coordination can help maximize limited resources, while still providing essential services—especially to transportation-disadvantaged populations, including those who cannot provide their own transportation or may face challenges in accessing public transportation due to age, disability, or income constraints. We have previously reported that transportation-disadvantaged populations often benefit from greater and higher quality services when transportation providers coordinate their operations. Additionally, as we reported in our findings on duplicative efforts and programs, improved coordination of these programs and transportation services has the potential to improve the quality and cost-effectiveness of these services, while also reducing duplication, overlap, and fragmentation. However, effective coordination can be challenging, as federal programs provide funding for a variety of services, including education, employment, and medical and other human services. Our 2012 report on transportation-disadvantaged populations found that 80 federal programs in eight different agencies fund a variety of transportation services. While some federally funded programs are transportation focused, transportation was not the primary mission for the vast majority of the programs we identified. For example, the Department of Health and Human Services’ Medicaid program reimburses states that provide Medicaid beneficiaries with bus passes, among other transportation options, to access eligible medical services. Total federal spending on services for transportation-disadvantaged populations remains unknown because federal departments did not separately track spending for roughly two-thirds of the programs we identified. Our June 2012 report also concluded that insufficient federal leadership and lack of guidance for furthering collaborative efforts might hinder the coordination of transportation services among state and local providers.
Officials in each of the five states we selected for interviews said that the federal government could provide state and local entities with improved guidance on transportation coordination—especially related to instructions on how to share costs across programs (i.e., determining what portion of a trip should be paid by whom). To promote and enhance federal, state, and local coordination efforts, we recommended in 2012 that the Secretary of Transportation, as the chair of the Interagency Coordinating Council on Access and Mobility (Coordinating Council), along with the Coordinating Council’s member agencies, meet to complete and publish a strategic plan that outlines agency roles and responsibilities and articulates a strategy to help strengthen interagency collaboration and communication. We also recommended that the Coordinating Council report on the progress of its prior recommendations and develop a plan to address any that remain outstanding. DOT agreed to consider our recommendation, and the Coordinating Council’s member agencies responded by issuing a strategic plan for 2011–2013, which established agency roles and responsibilities and identified a shared strategy to reinforce cooperation. Officials have indicated that they will continue to take steps to implement our recommendations. FTA has made some progress in enhancing coordination for transportation-disadvantaged populations. According to FTA officials, as a result of MAP-21, the agency has been updating program guidance and has issued draft program circulars for its Urbanized Area Formula Program, Enhanced Mobility of Seniors and Individuals with Disabilities Program, and the Rural Areas Formula Program, all of which discuss coordinated transit programs, among other issues. In addition, FTA continues to support federal programs that play an important role in helping transportation-disadvantaged populations by providing funds to state and local grantees that, in turn, offer services either directly or through private or public transportation providers. Further, some FTA programs require or encourage their grantees to coordinate transportation services. For example, FTA’s Enhanced Mobility of Seniors and Individuals with Disabilities program—which provides formula funding to states to serve the special needs of transit-dependent populations beyond traditional public-transportation service—requires grantees to coordinate their transportation services and establish locally developed, coordinated public transit-human services transportation plans. We continue to examine these funding, service delivery, and coordination issues. Chairman Johnson, Ranking Member Crapo, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, Cathy Colwell, Geoffrey Hamilton, Hannah Laufe, Sara Ann Moessbauer, Tina Paek, Stephanie Purcell, and Amy Rosewarne made key contributions to this statement. Transportation-Disadvantaged Populations: Coordination Efforts Are Underway, but Challenges Continue. GAO-14-154T. Washington, D.C.: November 6, 2013. Transit Asset Management: Additional Research on Capital Investment Effects Could Help Transit Agencies Optimize Funding.
Washington, D.C.: July 11, 2013. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013. ADA Paratransit Services: Demand Has Increased, but Little is Known about Compliance. GAO-13-17. Washington, D.C.: November 15, 2012. Public Transit: Funding for New Starts and Small Starts Projects, October 2004 through June 2012. GAO-13-40. Washington, D.C.: November 14, 2012. Bus Rapid Transit: Projects Improve Transit Service and Can Contribute to Economic Development. GAO-12-811. Washington, D.C.: July 25, 2012. Transportation-Disadvantaged Populations: Federal Coordination Efforts Could Be Further Strengthened. GAO-12-647. Washington, D.C.: June 20, 2012. Government Operations: Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Millions of passengers use transit services on a daily basis, and many transit agencies that provide these services receive federal funding. To meet the needs of these passengers in a challenging economy, transit agencies must use federal and other resources wisely, while ensuring quality service. The July 2012 surface transportation reauthorization act--MAP-21--has addressed a number of transit issues by strengthening federal authority to oversee transit safety and emphasizing the restoration and replacement of aging infrastructure, among other things. While it is too early to assess all of the impacts of MAP-21, the work GAO has done can help inform the next surface transportation reauthorization act. This testimony covers GAO's recent work on: (1) funding transit; (2) improving capital decision making; and (3) coordinating services for transportation-disadvantaged populations. To address these objectives, GAO drew from its recent reports issued from March 2011 through November 2013. GAO has also analyzed MAP-21, recent rulemaking, and other reports. The Moving Ahead for Progress in the 21st Century Act (MAP-21) authorized $10.6 billion and $10.7 billion for fiscal years 2013 and 2014, respectively, for public transit, but did not address long-term funding. Federal funds available for FTA's transit programs come from the general fund of the U.S. Treasury and the Mass Transit Account of the Highway Trust Fund. The Highway Trust Fund supports surface transportation programs, including highways and transit, and is funded through motor fuel and other highway use taxes; however, revenues have eroded over time because of federal fuel tax rate stagnation, fuel efficiency improvements, and the use of alternative fuel vehicles. In May 2013, the Congressional Budget Office estimated that to maintain current spending levels plus inflation between 2015 and 2022, the Fund will require over $132 billion more than it is expected to take in over that period. GAO reported that while Congress has transferred over $50 billion in general revenues to the Fund since fiscal year 2008, this approach may not be sustainable given competing demands for funding. For these reasons, funding surface transportation remains on GAO's High-Risk List. To address these funding challenges, sound capital-investment decisions can help transit agencies use their funds more efficiently. GAO's work on transit asset management and bus rapid transit has illustrated these benefits. Transit asset management: According to the Federal Transit Administration (FTA), it would cost roughly $78 billion (in 2009 dollars) to rehabilitate or replace the nation's aging transit assets--such as buses, rail cars, and escalators. GAO's 2013 report on asset management recognized that many of the nearly 700 public transit agencies struggle to maintain their bus and rail assets in a state of good repair. Sound management practices can help agencies prioritize investments to help optimize limited funding. However, of the nine transit agencies GAO visited, only two measured the effects of capital investments on asset condition and none measured the effects on future ridership. Thus, GAO recommended additional research to measure the effects of capital investments; FTA concurs in part with this recommendation. FTA officials recognize the importance of additional research; however, they are hesitant to commit additional resources given their current budget situation.
Bus rapid transit (BRT): In addition to maintaining assets, transit agencies often need to build or expand systems to meet demand. Transit agencies can apply for federal capital-investment funding for new projects through New and Small Starts and Core Capacity Improvement grants. GAO's 2012 report found that many agencies had taken advantage of New and Small Starts funding to develop BRT projects, which generally require less capital investment compared to rail. GAO's recent work also shows benefits from coordinating transit services for the transportation-disadvantaged--those who cannot provide their own transportation or face challenges accessing public transportation. GAO's 2012 report pointed out that coordination can be challenging, as federal programs provide funding for a variety of services. GAO also concluded that insufficient federal leadership and guidance on coordinating services for the disadvantaged may hinder coordination among state and local providers. The Coordinating Council--a group of federal agencies providing these services--has completed a strategic plan to strengthen interagency coordination, as GAO recommended. GAO made recommendations on these issues in previous reports. The Department of Transportation agreed to consider these recommendations and is in various stages of implementing them.
Concerned about the cost of AFDC, the Congress established the CSE program in 1975 as Title IV-D of the Social Security Act to help families obtain the financial support that noncustodial parents owe their children and to help single-parent families achieve or maintain economic self-sufficiency. It was anticipated that government welfare expenditures would be reduced by recouping AFDC benefits from noncustodial parents’ child support payments. In addition, earlier enforcement of child support obligations for families not receiving AFDC would prevent such families from needing government support. CSE services provided through the program include locating noncustodial parents; establishing paternity and support orders; updating support orders to be current with a noncustodial parent’s income; obtaining medical support, such as medical insurance, from noncustodial parents; and collecting ongoing and past-due support payments. All AFDC recipients are required to participate in the CSE program so that the federal and state governments may recover some portion of the AFDC benefits paid to families. In the case of non-AFDC families, participation in the program is voluntary and most collections are distributed to custodial parents. The federal and state governments retain collections on AFDC cases as recoupment of AFDC benefits paid to families. More specifically, the government retains all past-due support collected and all but $50 of each month’s current support collected on AFDC cases, up to the amount of the family’s monthly AFDC benefits. If the current support collected together with family income makes families ineligible for AFDC, all current support is distributed to the family and the monthly AFDC benefit is not paid. The federal and state governments share retained collections on AFDC cases by the same percentage as they funded AFDC benefits to families in the state. The percentage of AFDC benefit payments that is funded by the federal government is inversely related to state per capita income and varies from state to state, ranging from 50 percent in states with high per capita incomes, such as California, to close to 80 percent in a state with relatively low per capita income, such as Mississippi. Collections on non-AFDC cases, though generally not retained by the federal and state governments, might indirectly benefit them. The receipt of these collections by non-AFDC families might preclude the need for these families to seek AFDC benefits, thus enabling the governments to avoid incurring the cost of paying AFDC benefits. Under the CSE funding structure, the federal government reimburses states for 66 percent of their CSE administrative costs for both AFDC and non-AFDC services. States are responsible for the remaining 34 percent. The federal government also pays performance incentives to states on the basis of their efficiency in collecting support on both AFDC and non-AFDC cases. These incentives are calculated separately for AFDC and non-AFDC collections. Collection efficiency is determined by dividing AFDC and non-AFDC collections each by total administrative costs. Incentives are paid on the basis of the resulting ratios and range from 6 percent of collections for ratios less than 1.4 to 10 percent of collections for ratios of 2.8 or higher. In practice, all states earn at least 6 percent on AFDC and non-AFDC collections. The total amount of non-AFDC incentives paid, however, is limited to 115 percent of the amount of incentives paid for AFDC collections. 
The incentive formula seeks to ensure that states provide equitable treatment for both AFDC and non-AFDC families. All but two states had reached the 115-percent cap on non-AFDC incentives in fiscal year 1994. The federal and state governments’ net financial revenues or costs from the CSE program are determined by their respective share of (1) AFDC collections retained, (2) CSE administrative costs incurred, and (3) performance incentives paid or received for both AFDC and non-AFDC collections. Privatized child support contracts in the states cover one or more services and, in general, either supplement state or local program efforts or replace them with privatized offices. As we reported in our November 1995 report, one or more child support services had been privatized statewide in 20 states and at the local office level in 18 states as of October 1995. There were 21 contracts for full-service child support operations, 41 contracts for collections and related parent location services, 9 contracts for payment processing services, and 8 contracts for location services only. Most of these services were being provided by four major contractors. As evident from our November 1995 report, the most widely privatized service was for the collection of support payments. Services provided under the 41 contracts for support payment collection are typically those performed by debt-collection agencies. These include sending letters and making telephone calls to persons owing support, often after searching various sources, such as credit bureaus, utility companies, and telephone books, to locate parents and obtain their current addresses and telephone numbers. Under the terms of most collection contracts, contractors are paid only if collections are made. Payments to contractors are often calculated as a percentage of collections—on both AFDC and non-AFDC cases. The payment rates identified for collection contracts in our November 1995 report range from about 8 percent to 24 percent and largely depend on factors such as contract case volume, case collection difficulty, type of cases referred (AFDC or non-AFDC), and the use of multiple or single contractors. States are eligible for federal reimbursement of 66 percent of the payments to contractors as CSE administrative costs. When states contract with private firms to provide child support collection services for portions of their caseloads, they often do so to help service their growing caseloads. Some have found it difficult to hire additional staff in an environment of staff and budgetary constraints brought about by increased pressures to downsize government. Recent estimates of CSE caseloads nationwide range from 300 cases to as many as 2,500 cases per worker. In 1994, states were able to collect only 55 percent of support due that year and only 7 percent of support due from prior years. State CSE officials said that contracting with the private sector allows them to service portions of their caseloads without hiring additional staff and to obtain support payments they have been unable to collect. For example, an official from Virginia told us that past-due AFDC cases are sent to contractors because state staff rarely have time to work them. Also, an official from New Mexico said that cases the state sends to contractors for collection services are ones on which the state would not try to collect, believing them difficult to collect and, therefore, not cost-effective to pursue. 
Another reason that state officials cited for privatizing was that contracting out collections allows their staff to concentrate on paternity and order establishment, functions that the officials believed state employees are more adept at handling than collections. Similarly, some state officials believed that collection agencies have greater expertise and proficiency at collections than state employees. States are predominantly privatizing collections of past-due support. Of the 41 collection service contracts identified in our November 1995 report, 35 provided for collection of past-due support; 12 of these focused strictly on collecting past-due support for AFDC cases, while the remainder provided for collection of past-due support for both AFDC and non-AFDC cases. Of the remaining 6 contracts, 3 provided for collection of both current and past-due support for AFDC and non-AFDC cases and 3 allowed individual caseworkers discretion to decide what type of child support cases to send to collection contractors. All nine states we reviewed had criteria for selecting cases to refer for private collection services that were intended to identify cases on which support was hard to collect or uncollectible. All the criteria specified minimum periods of time for which collections had not been made, minimum accumulated amounts of past-due support, or both. For example, in Missouri, the criteria specified that cases with at least 6 months of past-due support totaling more than $500 and on which no payments had been made in a year should be referred to the contractor. In addition to the minimum time and past-due support criteria, Kansas and Idaho referred only closed AFDC cases—those involving custodial parents who were not currently receiving AFDC, but in which the noncustodial parent owed support to the state from prior periods when the state paid AFDC benefits to the custodial parent. An official from Kansas said that closed AFDC cases referred for collection are ones the state had tried unsuccessfully for several years to collect on and were not currently receiving attention. State decisions about what types of cases to refer for privatized collection determine whether and to what extent families and the federal and state governments benefit from collection contracts. Collections on AFDC cases benefit governments directly because they retain some of the support collected, while collections on non-AFDC cases benefit families directly because most collected support is distributed to them. Whether the federal or state governments experience net CSE revenues or costs from collection contracts is principally affected by (1) AFDC cost-sharing ratios, (2) states’ efficiency in making collections that earn incentives under the CSE program, and (3) the CSE administrative cost-sharing ratio. We did not assess whether the contracts were cost-effective compared with increased state efforts to collect. The federal and state governments benefited from collections under all 11 contracts that we analyzed because all the states involved referred AFDC cases for collection, as shown in table 1. On AFDC cases, the federal and state governments retained all collections of past-due support and all but $50 of current support collected up to the amount of each family’s monthly AFDC benefit. Furthermore, states earned performance incentives from the federal government on both AFDC and non-AFDC collections.
Families also benefited from collections under five contracts that collected on AFDC, non-AFDC, or both types of cases. Contractors in Maryland and Michigan collected support that was distributed to both AFDC and non-AFDC families, while the contractor in Missouri collected support distributed to only AFDC families and the contractor in Texas to only non-AFDC families. As illustrated in figure 1, the net financial revenues or costs of the CSE program to the federal government are equal to its share of retained AFDC collections, minus performance incentives paid states, minus its share of CSE administrative costs. For state governments, the computation is the same except that performance incentives are added instead of subtracted. Retained collections are calculated by multiplying the federal or state government’s AFDC cost-sharing ratio by AFDC collections reduced by the amounts passed through to families. For example, if AFDC collections were $100,000 and $18,000 was passed through to families, the remaining $82,000 in collections would be available for sharing by the federal and state governments. If the federal government’s AFDC cost share in the state was 60 percent, the federal government’s retained collections would equal 60 percent of $82,000, or $49,200. The state’s share would be 40 percent of $82,000, or $32,800. The performance incentives are calculated by computing the state’s collection efficiency ratios for AFDC and non-AFDC collections to determine the percentage of incentives earned, then multiplying the earned percentages by the associated type of collections—most states earn 6 percent incentives and have reached the 115-percent cap on non-AFDC incentives. For example, if AFDC collections were $100,000 as above, non-AFDC collections $400,000, and total administrative expenses $125,000, the collection efficiency ratio for AFDC collections would equal 0.8 ($100,000 in collections divided by $125,000 in administrative expenses). Collection efficiency ratios lower than 1.4 earn 6 percent AFDC incentives; therefore, AFDC incentives in this example would equal 6 percent of $100,000, or $6,000. The non-AFDC collection efficiency ratio in this example equals 3.2, $400,000 divided by $125,000. This ratio would earn incentives of 10 percent of collections. However, since non-AFDC incentives cannot exceed 115 percent of AFDC incentives, non-AFDC incentives that can be received in this example would be limited to 115 percent times $6,000, or $6,900. Thus, the federal government would pay the states $12,900 in performance incentives on the $500,000 in collections. Contract costs are calculated by multiplying the contract percentage rate to be paid the contractor for collections by total collections. Continuing the above example, with total collections of $500,000 and a payment rate of 25 percent, contract costs would equal $125,000. The federal government would reimburse the states for 66 percent of these costs, or $82,500. Accordingly, in this example, the federal government would experience net CSE costs of $46,200, after receiving $49,200 in retained AFDC collections and paying $12,900 in performance incentives and $82,500 in contract costs. The state on the other hand would experience net CSE revenues of $3,200, after receiving $32,800 in retained AFDC collections and $12,900 in performance incentives and paying $42,500 in costs. 
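The arithmetic in the example above can be expressed compactly. The following is a minimal sketch, not an official program formula; the function and variable names are ours. It models only the 6 percent (ratio below 1.4) and 10 percent (ratio of 2.8 or higher) incentive rates, the 115-percent cap on non-AFDC incentives, and the 66/34 federal-state split of administrative costs described above, and it treats the contractor payments as the state's total administrative costs, as the example does. Intermediate incentive tiers for ratios between 1.4 and 2.8 are not modeled.

```python
# Minimal sketch of the net CSE revenue/cost arithmetic in the example above.
# Only the 6 percent (ratio < 1.4) and 10 percent (ratio >= 2.8) incentive
# rates described in the text are modeled; intermediate tiers are omitted.

def incentive_rate(collections, admin_costs):
    ratio = collections / admin_costs        # collection efficiency ratio
    return 0.10 if ratio >= 2.8 else 0.06    # endpoints given in the text

def net_outcomes(afdc, non_afdc, passed_through, fed_afdc_share, contract_rate):
    # Contractor payments serve as total administrative costs, as in the example.
    contract_costs = contract_rate * (afdc + non_afdc)
    retained = afdc - passed_through                   # shared by the governments
    fed_retained = fed_afdc_share * retained
    state_retained = (1 - fed_afdc_share) * retained

    afdc_incentive = incentive_rate(afdc, contract_costs) * afdc
    non_afdc_incentive = min(incentive_rate(non_afdc, contract_costs) * non_afdc,
                             1.15 * afdc_incentive)    # 115-percent cap
    incentives = afdc_incentive + non_afdc_incentive

    fed_costs = 0.66 * contract_costs                  # federal reimbursement share
    state_costs = contract_costs - fed_costs           # remaining 34 percent

    federal_net = fed_retained - incentives - fed_costs
    state_net = state_retained + incentives - state_costs
    return federal_net, state_net

# Figures from the example: $100,000 AFDC and $400,000 non-AFDC collections,
# $18,000 passed through to families, a 60 percent federal AFDC share, and a
# 25 percent contract payment rate.
federal_net, state_net = net_outcomes(100_000, 400_000, 18_000, 0.60, 0.25)
print(federal_net, state_net)   # approximately -46,200 and 3,200
```

Running the sketch reproduces the example's outcome: a net federal cost of about $46,200 and net state revenue of about $3,200.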
Table 2 summarizes the several factors that affect the calculation of the federal and state governments’ respective shares of retained collections, performance incentives, and contract costs. As shown in table 1, collections under 10 of the 11 contracts we analyzed generated net CSE financial revenues for both the federal and state governments. The federal government’s net revenues were less than the states’ under the seven contracts in Kansas, Maryland, Michigan (number 4), Missouri, Nevada, Texas, and Virginia and greater than the states’ under the three contracts in Idaho and New Mexico. Under one contract in Michigan (number 5), the federal government experienced net CSE costs, while the state experienced net revenues. The influence of case type on retained collections and of total collections on contract costs can be seen in the outcomes under the two contracts in Michigan. As shown in table 1, under contract number 5, the federal government experienced net CSE costs in part because most of the collections were for non-AFDC support, none of which was retained by the federal or state governments. Furthermore, the non-AFDC collections were about six times as great as AFDC collections, contributing to higher contract costs but not to retained collections. Consequently, under this contract, the federal government’s share of retained AFDC collections was not large enough to offset its share of contract costs and performance incentives paid to the state based on AFDC and non-AFDC collections. In contrast, under contract number 4 in Michigan, even though non-AFDC collections were greater than under contract number 5, the federal government experienced net CSE financial revenues. This occurred because AFDC collections were a larger share of total collections than under the other contract. In addition, contract costs as a percentage of collections were lower on contract number 4—8 percent and 3 percent compared with 14 percent and 12 percent of collections on contract number 5. The influence of whether collections are AFDC or non-AFDC is also apparent in the outcomes of the contracts in Nevada and Kansas. Although contract costs as a percentage of total collections in these two states were relatively high—41 and 65 percent, respectively—both the federal and state governments experienced net revenues because all collections under the contracts were past-due AFDC support, which is fully retained by the governments. Costs reported to us for these two contracts included state costs for computer programming and administering the contract in addition to the percentage of collections paid to the contractor. Another factor influencing the financial outcomes of collection contracts is the AFDC cost-sharing ratio, as illustrated by the financial outcomes under the contracts in Idaho and New Mexico. Under the three contracts in these states, the federal government’s net CSE revenues were greater than those of the states, largely because the federal share of retained AFDC collections was relatively high—ranging from 70 percent to 73 percent. The influence of AFDC cost-sharing ratios on retained collections and of AFDC or non-AFDC collections on contract costs also can be seen in comparing the financial outcomes from the contracts in Maryland and Texas for fiscal year 1995. The federal government gained less net revenue under the contract in Maryland than in Texas.
One reason for this result is that the federal government’s share of retained AFDC collections was less in Maryland than in Texas—50 percent compared with 64 percent. In addition, contract costs were higher under the contract in Maryland because total collections were greater and non-AFDC collections (not retained) were greater than AFDC collections (retained) by a ratio of about 3 to 2, thus contributing to higher contract costs but not retained collections. The financial outcomes of collection contracts for families and the federal and state governments will be affected by changes to be implemented under recent welfare reform legislation—the Personal Responsibility and Work Opportunity Reconciliation Act of 1996. For example, among several such changes, after September 2000, for families that are no longer receiving government assistance, collections of past-due support that accrued before or after the family received such assistance generally will be distributed first to the family. Furthermore, the pass-through to families of the first $50 of current support payments collected will no longer be mandatory. If states choose to continue to pass through the $50 and disregard it in determining the income of families receiving assistance, the states must pay for the disregard with state funds. The legislation also affects the incentive payments that states receive. It directs the Secretary of Health and Human Services (HHS), in consultation with the states, to develop a new performance incentive system to replace, in a revenue-neutral manner, the existing system. The legislation requires the Secretary to report on the new system to the Congress by March 1, 1997, and makes the new system effective on October 1, 1999. In commenting on a draft of this report, HHS said that it believes that our report should be a useful reference for states as they consider privatizing child support functions. HHS also provided technical comments that we incorporated in the final report as appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the House Subcommittee on Human Resources, Committee on Ways and Means; the Secretary of HHS; and HHS’ Assistant Secretary for Children and Families. We will also make copies available to others on request. We will continue to keep you and your staff informed of our progress in reviewing state CSE privatization initiatives. If you or your staff have any questions about this report, please contact David P. Bixler, Assistant Director, at (202) 512-7201 or Catherine V. Pardee, Senior Evaluator, at (202) 512-7237. Using contract cost and collection data provided by state and local CSE offices, we determined the financial outcomes for 11 collection contracts by calculating (1) collections distributed to families and retained by the federal and state governments and (2) net CSE financial revenues or costs for the federal and state governments. The net CSE financial revenues or costs to the governments equal the federal or state governments’ respective share of (1) retained collections, (2) performance incentives paid by the federal government and received by states, and (3) contract costs. We did not independently verify the contract cost and collection data provided by states. We sought data only on collection contracts listed in our November 1995 report in which payment terms were disclosed and stated as a percentage of collections, the most common method of payment in collection contracts.
Although we sought data on more than 11 contracts, cost and collection data available from some states were insufficient to determine how support collected was distributed between families and the federal and state governments. Specifically, some states could not separately identify amounts of collections on non-AFDC and AFDC cases and the total amount of current AFDC support distributed to AFDC families on cases with collections. For these reasons, our data analysis and interviews were limited to 11 contracts in nine states: Idaho, Kansas, Maryland, Michigan, Missouri, Nevada, New Mexico, Texas, and Virginia. Our calculation of net CSE financial revenues or costs constitutes a comparison of additional collections with additional collection costs. Support collected under the 11 collection contracts was classified by the state programs as uncollectible or expected to be uncollectible, and we assumed that collections under the contracts would not have been made by the states. Payments to the contractors represented additional costs that the states invested in collection efforts on cases under the contract. We did not attempt to determine whether the states would have spent more or less to collect the amounts using state employees or through other means. With the exception of two states, the contract cost data that states provided included only the payments to contractors based on a percentage of collections. Additional state costs associated with the collection contracts, such as for contract negotiation and administration, could not be determined and were not included in our calculations. In calculating the governments’ respective share of retained collections, we used the AFDC cost-sharing ratios for each state for the same year as the collection contracts. In calculating performance incentives, we used statewide collection efficiency ratios for the states for 1994 as reported in data compiled by HHS’ Office of Child Support Enforcement (OCSE). We performed our work from November 1995 to August 1996 in accordance with generally accepted government auditing standards. Child Support Enforcement: States and Localities Move to Privatized Services (GAO/HEHS-96-43FS, Nov. 20, 1995). Child Support Enforcement: Opportunity to Reduce Federal and State Costs (GAO/T-HEHS-95-181, June 13, 1995). Child Support Enforcement: Families Could Benefit From Stronger Enforcement Program (GAO/HEHS-95-24, Dec. 27, 1994). Child Support Enforcement: Federal Efforts Have Not Kept Pace With Expanding Program (GAO/T-HEHS-94-209, July 20, 1994). Child Support Enforcement: Credit Bureau Reporting Shows Promise (GAO/HEHS-94-175, June 3, 1994). Child Support Enforcement: States Proceed With Immediate Wage Withholding; More HHS Action Needed (GAO/HRD-93-99, June 15, 1993). Child Support Assurance: Effect of Applying State Guidelines to Determine Fathers’ Payments (GAO/HRD-93-26, Jan. 23, 1993). Child Support Enforcement: Timely Actions Needed to Correct System Development Problems (GAO/IMTEC-92-46, Aug. 13, 1992). Child Support Enforcement: Opportunity to Defray Burgeoning Federal and State Non-AFDC Costs (GAO/HRD-92-91, June 5, 1992). Interstate Child Support: Wage Withholding Not Fulfilling Expectations (GAO/HRD-92-65BR, Feb. 25, 1992). Interstate Child Support: Mothers Report Less Support From Out-of-State Fathers (GAO/HRD-92-39FS, Jan. 9, 1992). Child Support Enforcement: A Framework for Evaluating Costs, Benefits, and Effects (GAO/PEMD-91-6, Mar. 5, 1991).
Pursuant to a congressional request, GAO provided information on states' use of private agencies for the collection of child support payments, focusing on: (1) why states contract for these collection services; and (2) the factors affecting the financial outcomes of collection contracts for families and the federal and state governments. GAO found that: (1) states contract with private agencies to collect past-due or hard-to-collect child support payments because they are finding it increasingly difficult to service their growing child support enforcement caseloads with available staff and budget resources; (2) under the terms of most collection contracts, states pay contractors only if collections are made, and contractor payments are often a fixed percentage of collections; (3) the federal and state governments retain most of the child support payments collected for families receiving Aid to Families with Dependent Children (AFDC) benefits, while non-AFDC families receive most of the support payments collected; (4) the federal government's share of the child support collections depends on how much it contributes to the state's welfare program and how much it pays in performance incentives and child support enforcement administrative costs; and (5) a review of 11 contracts showed that the federal government's financial outcomes ranged from a net cost of about $242,000 to revenues of $1.2 million.
Since the 1940s, one mission of DOE and its predecessor agencies has been processing uranium as a source of nuclear material for defense and commercial purposes. A key step in this process is the enrichment of natural uranium, which increases its concentration of uranium-235, the isotope of uranium that undergoes fission to release enormous amounts of energy. Before it can be enriched, natural uranium must be chemically converted into uranium hexafluoride. The enrichment process results in two principal products: (1) enriched uranium hexafluoride, which can be further processed for specific uses, such as nuclear weapons or fuel for nuclear power plants; and (2) leftover “tails” of uranium hexafluoride. These tails are also known as depleted uranium because the material is depleted in uranium-235 compared with natural uranium. Since 1993, uranium enrichment activities at DOE-owned uranium enrichment plants have been performed by the U.S. Enrichment Corporation (USEC), formerly a wholly owned government corporation that was privatized in 1998. However, DOE still maintains over 700,000 metric tons of depleted uranium tails in about 63,000 metal cylinders in storage yards at its Paducah, Kentucky, and Portsmouth, Ohio, enrichment plants (see figure 1). It must safely maintain these cylinders because the tails are dangerous to human health and the environment. Uranium hexafluoride is radioactive and forms extremely corrosive and potentially lethal compounds if it contacts water. In addition, DOE also maintains large inventories of natural and enriched uranium that are also surplus to the department’s needs. Tails have historically been considered a waste product because considerable enrichment processing is required to further extract the remaining useful quantities of uranium-235. In the past, low uranium prices meant that these enrichment services would cost more than the relatively small amount of uranium-235 extracted would be worth. However, an increase in uranium prices—from approximately $21 per kilogram of uranium in the form of uranium hexafluoride in November 2000 to about $160 per kilogram in May 2011—has potentially made it profitable to re-enrich some tails to further extract uranium-235. Even with the current higher uranium prices, however, only DOE’s tails with higher concentrations of uranium-235 (at least 0.3 percent) could be profitably re-enriched, according to industry officials. DOE’s potential options for its tails include selling the tails “as is,” re-enriching them, or storing them indefinitely. However, DOE’s legal authority to sell the tails in their current form is doubtful. We found that DOE generally has authority to carry out the re-enrichment and storage options. As we said earlier, DOE issued a comprehensive uranium management plan in December 2008 in response to a recommendation in our March 2008 report. In this plan, DOE stated that it would begin selling or re-enriching depleted uranium in 2009. However, to date, DOE has not done so and, according to DOE officials, has no current plans to sell or re-enrich this material. While selling the tails in their current unprocessed form is a potential option, we believe that DOE’s authority to conduct such sales is doubtful because of specific statutory language in legislation governing DOE’s disposition of its uranium. In 1996, Congress enacted section 3112 of the USEC Privatization Act, which limits DOE’s general authority, under the Atomic Energy Act or otherwise, to sell or transfer uranium.
In particular, section 3112 explicitly bars DOE from selling or transferring “any uranium”—including but not specifically limited to certain forms of natural and enriched uranium—“except as consistent with this section.” Section 3112 then specifies conditions for DOE’s sale or transfer of natural and enriched uranium of various types, including conditions in section 3112(d) for sale of natural and low-enriched uranium from DOE’s inventory. To ensure that the domestic uranium market is not flooded with large amounts of government material, in section 3112(d), Congress required DOE to determine that any such inventory sales will not have a material adverse impact on the domestic uranium industry. Congress also required in section 3112(d) that DOE determine it will receive adequate payment—at least “fair market value”—if it sells this uranium and that DOE obtain a determination from the President that such materials are not necessary for national security. However, neither section 3112(d) nor any other provision of section 3112 explicitly provides conditions for DOE to transfer or sell depleted uranium. Because section 3112(a) states that DOE may not “transfer or sell any uranium…except as consistent with this section,” and because no other part of section 3112 sets out the conditions for DOE to transfer or sell depleted uranium, we believe that under rules of statutory construction, DOE likely lacks authority to sell the tails. While courts have not addressed this question before and thus the outcome is not free from doubt, this interpretation applies the plain language of the statute. It also respects the policy considerations and choices Congress made in 1996 when presented with the disposition of DOE’s valuable uranium in a crowded and price-sensitive market. This reading of DOE’s authority is consistent with how courts address changes in circumstances after a law is passed: Statutes written in comprehensive terms apply to unanticipated circumstances if the new circumstances reasonably fall within the scope of the plain language. Thus, under the current terms of section 3112, DOE’s sale of its tails would be covered by the statute’s general prohibition on sale of uranium, even if tails were not part of the universe Congress explicitly had in mind when it enacted the statute in 1996. Should Congress grant DOE the needed legal authority by amending the USEC Privatization Act or through other legislation, firms such as nuclear power utilities and enrichment companies would be interested in purchasing at least that portion of the tails with higher concentrations of extractable uranium-235 as a valuable source for nuclear fuel. For example, our March 2008 report stated that officials from 8 of 10 U.S. nuclear utilities indicated tentative interest in such a purchase. Individual utilities were often interested in limited quantities of DOE’s tails because they were concerned about depending upon a single source to fulfill all of their uranium requirements. Multiple utilities acting together as a consortium could mitigate these concerns and purchase larger quantities of tails. The report also noted that some enrichment firms told us of interest in purchasing portions of the inventory, but their anticipated excess enrichment capacity to process the tails into a marketable form affected both the quantity of tails they would purchase and the timing of any purchase.
Our March 2008 report noted that potential buyers suggested various commercial arrangements, including purchasing the tails through a competitive sale, such as an auction, or through negotiations with DOE. However, industry officials told us that buyers would discount, perhaps steeply, their offered prices to make buying tails attractive compared with purchasing natural uranium on the open market. That is, DOE might get a discounted price for the tails to compensate buyers for additional risks, such as rising enrichment costs or buyers’ inability to obtain sufficient enrichment services. In addition, potential buyers noted that any purchase would depend on confirming certain information, such as that the tails were free of contaminants that could cause nuclear fuel production problems and that the cylinders containing the tails—some of which are 50 years old and may not meet transportation standards—could be safely shipped. Although DOE’s legal authority to sell the tails in their current form is doubtful, DOE has the general legal option of re-enriching the tails and then selling the resulting natural or enriched uranium. DOE would have to contract for enrichment services commercially because the department no longer operates enrichment facilities itself. Furthermore, DOE would have to find a company with excess enrichment capacity beyond its current operations, which may be particularly difficult if large amounts of enrichment processing were required. Within the United States today, for example, there are only two operating enrichment facilities: DOE’s USEC-run Paducah, Kentucky, plant and the URENCO USA facility located near Eunice, New Mexico. In the case of the Paducah plant, almost all of its enrichment capacity is already being used through 2012, when the plant may stop operating. In the case of URENCO USA, the facility is still under construction and it is not yet operating at full capacity. Other companies are also constructing or planning to construct new enrichment facilities in the United States that potentially could be used to re-enrich DOE’s tails. Although DOE would have to pay for re-enrichment, it might obtain more value from selling the re-enriched uranium instead of the tails if its re-enrichment costs were less than the discount it would have to offer to sell the tails as is. Representatives of enrichment firms with whom we spoke at the time of our 2008 report told us they would be interested in re-enriching the tails for a fee. The quantity of tails they would re-enrich annually would depend on the available excess enrichment capacity at their facilities. Additionally, as noted above, prior to selling any natural or enriched uranium that results from re-enriching tails, DOE would be required under section 3112(d) of the USEC Privatization Act to determine that sale of the material would not have a material adverse impact on the domestic uranium industry and that the price paid to DOE would provide at least fair market value. Section 3112(d) also would require DOE to obtain the President’s determination that the material is not needed for national security.

DOE Could Store the Tails

DOE also has the general legal option to store the tails indefinitely. In the late 1990s, when relatively low uranium prices meant that tails were viewed as waste, DOE developed a plan for the safe, long-term storage of the material.
DOE has constructed new facilities at its Paducah plant and its closed Portsmouth uranium enrichment plant to chemically convert its tails into a more stable and safer uranium compound that is suitable for long-term storage. The facilities are currently undergoing system checks and once they begin operating in 2011, DOE estimates it will take approximately 25 years to convert its existing tails inventory. As our March 2008 report noted, storing the tails indefinitely could prevent DOE from taking advantage of the large increase in uranium prices to obtain potentially large amounts of revenue from material that was once viewed as waste. DOE would also continue to incur costs associated with storing and maintaining the cylinders containing the tails. These costs amount to about $4 million annually. Sale (if authorized) or re-enrichment of some of DOE’s tails could also reduce the amount of tails that would need to be converted and, thereby, save DOE some conversion costs. Moreover, once the tails were converted into a more stable form of uranium oxide, DOE’s costs to re-enrich the tails would be higher if it later decided to pursue this approach. This is because of the cost of converting the uranium oxide back to uranium hexafluoride, a step that would be required for re-enrichment. However, according to DOE officials, after the conversion plants begin to operate, the plants would first convert DOE’s lower concentration tails because they most likely would not be economically worthwhile to re-enrich. This would give DOE additional time to sell or re-enrich the more valuable higher-concentration tails. Our March 2008 report noted that DOE had been developing a plan since 2005 to sell excess uranium from across its inventories of depleted, natural, and enriched uranium to generate revenues for the U.S. Treasury. In March 2008, DOE issued a policy statement that established a general framework for how DOE plans to manage its inventories. However, we noted that the March 2008 policy statement was not a comprehensive assessment of the sales, re-enrichment, or storage options for DOE’s tails. The policy statement lacked specific information on the types and quantities of uranium that the department has in its inventory. Furthermore, the policy statement did not discuss whether it would be more advantageous to sell the higher-concentration tails as is (if authorized) or to re-enrich them. It also did not contain details on when any sales or re-enrichment may occur or DOE’s legal authority to carry out those options under section 3112 of the USEC Privatization Act. It also lacked information on the uranium market conditions that would influence any DOE decision to potentially sell or re-enrich tails. Further, it did not analyze the impact of such a decision on the domestic uranium industry, and it did not provide guidance on how a decision should be altered in the event that market conditions change. Although the policy statement stated that DOE would identify categories of tails that have the greatest potential market value and that the department would conduct cost-benefit analyses to determine what circumstances would justify re-enriching and/or selling potentially valuable tails, it did not have specific milestones for doing so. Instead, the policy statement stated that this effort will occur “in the near future.” Our March 2008 report therefore recommended that DOE should complete the development of a comprehensive uranium management assessment as soon as possible. 
We stated that the assessment should contain detailed information on the types and quantities of depleted, natural, and enriched uranium the department currently manages and a comprehensive assessment of DOE’s options for this material, including the department’s authority to implement these options. Furthermore, we stated that the assessment should analyze the impact of each of these options on the domestic uranium industry and provide details on how implementation of any of these options should be adjusted in the event that market conditions change. In December 2008, DOE issued an “Excess Uranium Inventory Management Plan.” Among other things, the plan states that DOE would begin selling or re-enriching depleted uranium in 2009. However, the department has not, to date, sold or re-enriched any of its depleted uranium. According to DOE officials, the department currently has no plans to sell or re-enrich this material. At current uranium prices, we estimate DOE’s tails to have a net value of $4.2 billion; however, we would like to emphasize that this estimate is very sensitive to changing uranium prices, which recently have been extremely volatile, as well as to the availability of enrichment capacity. This estimate assumes the May 2011 published uranium price of $160 per kilogram of natural uranium in the form of uranium hexafluoride and $153 per separative work unit—the standard measure of uranium enrichment services. Our model also assumes the capacity to re-enrich the higher-concentration tails and subtracts the costs of the needed enrichment services. It also takes into account the cost savings DOE would realize from reductions in the amount of tails that needed conversion to a more stable form for storage, as well as the costs to convert any residual tails. As noted above, this estimate is very sensitive to price variations for uranium as well as to the availability of enrichment services. Uranium prices are very volatile, and a sharp rise or fall in prices could greatly affect the value of the tails. For example, our March 2008 report estimated the tails had a net value of $7.6 billion. This estimate was based on the February 2008 published uranium price of $200 per kilogram of natural uranium and $145 per separative work unit. Prices for uranium have since fallen from $200 per kilogram of natural uranium to $160 per kilogram. There is no consensus among industry players on whether uranium prices will fall or rise in the future or on the magnitude of any future price changes. Furthermore, the introduction of additional uranium onto the market by the sale of large quantities of DOE depleted, natural, or enriched uranium—assuming DOE obtains authority to sell depleted uranium—could also lead to lower uranium prices. Therefore, according to DOE’s uranium management plan, DOE is limited to selling no more than 10 percent of the domestic demand for uranium annually. This is intended to help achieve DOE’s goal of minimizing the negative effects of DOE’s sales on domestic uranium producers. However, this limit lengthens the time necessary to market DOE’s uranium, increasing the time the department is exposed to uranium price volatility. These factors all result in great uncertainty in the valuation of DOE’s tails. In addition, the enrichment capacity available for re-enriching tails may be limited, and the costs of these enrichment services are uncertain. For example, at the time of our March 2008 report, USEC only had a small amount of excess enrichment capacity at its Paducah plant.
If it used the spare capacity, USEC would only be able to re-enrich about 14 percent of DOE’s most economically attractive tails between now and the possible closing of the plant in 2012. Although USEC officials told us at the time of our March 2008 report that the company was willing to explore options to extend the Paducah plant’s operations beyond 2012 and dedicate Paducah’s capacity solely to re-enriching DOE’s tails after this point, negotiations between the company and DOE would be needed to determine the enrichment costs that would be paid by DOE. The Paducah plant uses a technology developed in the 1940s that results in relatively high production costs. Even if the Paducah plant were to be dedicated entirely to re-enriching DOE tails after 2012, over a decade would be required to complete the work because of limitations on the annual volume of tails that can be physically processed by the plant. This lengthy period of time would expose DOE to risks of uranium price fluctuations and increasing maintenance costs. USEC and other companies are constructing or planning to construct enrichment plants in the United States that utilize newer, lower-cost technology. However, these facilities are not expected to be completed until sometime over the next decade. It is unclear exactly when these facilities would be fully operating, the extent to which they will have excess enrichment capacity to re-enrich DOE’s tails, and what enrichment costs DOE could expect to pay. For example, the size of the fee DOE may have to pay an enrichment company to re-enrich its tails would be subject to negotiation between DOE and the company. In summary, as was the case when we reported in March 2008, the U.S. government has an opportunity to gain some benefit from material that was once considered a liability. Under current law, however, one potential avenue for dealing with DOE’s depleted uranium tails—sale of the material in its current form—is likely closed to the department. Obtaining legal authority from Congress to sell depleted uranium under USEC Privatization Act section 3112 or other legislation would provide the department with an additional option in determining the best course of action to obtain the maximum financial benefit from its tails. Our March 2008 report therefore suggested that Congress consider clarifying DOE’s statutory authority to manage depleted uranium, under the USEC Privatization Act or other legislation, including explicit direction about whether and how DOE may sell or transfer the tails. Depending on the terms of such legislation, a sale of DOE’s tails could reap significant benefits for the government because of the potentially large amount of revenue that could be obtained. In any event, enacting explicit provisions regarding DOE’s disposition of depleted uranium would provide stakeholders with welcome legal clarity and help avoid litigation that could interrupt DOE’s efforts to obtain maximum value for the tails. DOE’s issuance of a comprehensive uranium management plan in December 2008 provided welcome clarity on the department’s plans for marketing its uranium. Unfortunately, DOE has failed to follow through with the actions laid out in its plan. By not following its plan to sell or re-enrich some of its tails beginning in 2009, DOE has increased uncertainty in the uranium market about its ultimate plans for its depleted uranium tails.
In addition, DOE continues to be unable to quickly react to changing market conditions to achieve the greatest possible value from its uranium inventories. Chairman Whitfield, Ranking Member Rush, and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. If you have any questions or need additional information, please contact Gene Aloise at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Major contributors to this statement were Ryan T. Coles (Assistant Director), Antoinette Capaccio, Karen Keegan, and Susan Sawtelle. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the 1940s, the Department of Energy (DOE) has been processing natural uranium into enriched uranium, which has a higher concentration of the isotope uranium-235 that can be used in nuclear weapons or reactors. This has resulted in over 700,000 metric tons of leftover depleted uranium, also known as "tails," that have varying residual concentrations of uranium-235. The tails are stored at DOE's uranium enrichment plants in Portsmouth, Ohio, and Paducah, Kentucky. Although the tails have historically been considered a waste product, increases in uranium prices may give DOE options to use some of the tails in ways that could provide revenue to the government. GAO's testimony is based on its March 2008 report (GAO-08-606R). GAO updated the analysis in its 2008 report to reflect current uranium prices and actions taken by DOE. The testimony focuses on (1) DOE's options for its tails and (2) the potential value of DOE's tails and factors that affect the value. DOE's potential options for its tails include selling the tails "as is," re-enriching the tails, or storing them indefinitely. DOE's current legal authority to sell its depleted uranium inventory "as is" is doubtful, but DOE generally has authority to carry out the other options. (1) DOE's authority to sell the tails in their current unprocessed form is doubtful. Because of specific statutory language in 1996 legislation governing DOE's disposition of its uranium, and under the rules of statutory construction, DOE likely lacks such authority. However, if Congress were to provide the department with the needed authority, firms such as nuclear power utilities and enrichment companies may be interested in purchasing these tails and re-enriching them as a source of nuclear fuel. (2) DOE could contract to re-enrich the tails. Although DOE would have to pay for re-enrichment, it might obtain more value from selling the re-enriched uranium instead of the tails if its re-enrichment costs were less than the discount it would have to offer to sell the tails as is. (3) DOE could store the tails indefinitely. This option conforms to an existing DOE plan to convert tails into a more stable form for long-term storage, but storing the tails indefinitely could prevent DOE from obtaining the potentially large revenue resulting from sales at current high uranium prices. DOE issued a comprehensive uranium management plan in December 2008 that stated that the department would consider selling depleted uranium or re-enriching it to realize best value for the government and that it would begin selling or re-enriching depleted uranium in 2009. However, to date, DOE has not sold or re-enriched any of its depleted uranium and, according to DOE officials, has no current plans to do so. The potential value of DOE's depleted uranium tails is currently substantial, but changing market conditions could greatly affect the tails' value over time. Based on May 2011 uranium prices and enrichment costs and assuming sufficient re-enrichment capacity is available, GAO estimates the value of DOE's tails at $4.2 billion--about $3.4 billion less than GAO's March 2008 estimate. However, this estimate is very sensitive to changing uranium prices, which have dropped since GAO's March 2008 report was issued. GAO's estimate is also very sensitive to the availability of enrichment capacity.
In particular, DOE would have to find a company with excess enrichment capacity beyond its current operations, which may be difficult if large amounts of enrichment processing were required. In its 2008 report, GAO suggested that Congress consider clarifying DOE's statutory authority to manage its tails. No action on this recommendation has been taken to date. Also, GAO recommended that DOE complete a comprehensive uranium management assessment. DOE issued a uranium management plan in December 2008 that addressed GAO's recommendation.
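To make the separative-work arithmetic behind these valuation figures concrete, the sketch below applies the standard separative work unit (SWU) value function to one kilogram of hypothetical higher-assay tails. This is not GAO's valuation model: the feed and secondary-tails assays, and the simple netting of product revenue against enrichment cost, are illustrative assumptions, and the sketch ignores conversion, transportation, and cylinder-handling costs. Only the May 2011 prices ($160 per kilogram of natural uranium as uranium hexafluoride and $153 per SWU) come from the testimony above.

```python
import math

def value_function(x):
    """Standard SWU value function V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2.0 * x - 1.0) * math.log(x / (1.0 - x))

def feed_and_swu_per_kg_product(x_feed, x_product, x_tails):
    """Feed (kg) and separative work (SWU) needed per kg of product,
    from the standard enrichment mass balance."""
    feed = (x_product - x_tails) / (x_feed - x_tails)
    waste = feed - 1.0
    swu = (value_function(x_product)
           + waste * value_function(x_tails)
           - feed * value_function(x_feed))
    return feed, swu

# Prices cited in the testimony for May 2011.
URANIUM_PRICE_PER_KG = 160.0   # $/kgU as uranium hexafluoride (natural assay)
SWU_PRICE = 153.0              # $/SWU

# Hypothetical assays (weight fraction U-235): re-enrich 0.35% tails back to
# natural assay (0.711%), leaving secondary tails at 0.25%.
feed_kg, swu = feed_and_swu_per_kg_product(0.0035, 0.00711, 0.0025)

revenue = URANIUM_PRICE_PER_KG          # per kg of natural-assay product
enrichment_cost = swu * SWU_PRICE       # per kg of product
net_per_kg_tails_fed = (revenue - enrichment_cost) / feed_kg

print(f"{feed_kg:.2f} kg of 0.35% tails and {swu:.2f} SWU per kg of product")
print(f"illustrative net value: ${net_per_kg_tails_fed:.2f} per kg of tails fed")
```

Multiplied across whatever tonnage of higher-assay tails is judged worth re-enriching, a per-kilogram figure of this kind is what drives estimates such as the $4.2 billion value, and it swings sharply with both the uranium price and the SWU price, which is one reason the estimate is so sensitive to market conditions.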
To respond to these questions, we interviewed agency and industry officials, reviewed documents, and consulted with biodefense experts. We conducted our review from June 2007 through August 2007 in accordance with generally accepted government auditing standards. The Project BioShield Act of 2004 (Public Law 108-276) was designed to encourage private companies to develop civilian medical countermeasures by guaranteeing a market for successfully developed countermeasures. The Project BioShield Act (1) relaxes some procedures for bioterrorism-related procurement, hiring, and research grant awarding; (2) allows for the emergency use of countermeasures not approved by FDA; and (3) authorizes 10-year funding (available through fiscal year 2013) to encourage the development and production of new countermeasures for chemical, biological, radiological, or nuclear agents. The act also authorizes HHS to procure these countermeasures for the Strategic National Stockpile. Project BioShield procurement involves actions by HHS (including ASPR, NIAID, FDA, and the Centers for Disease Control and Prevention (CDC)) and an interagency working group. Various offices within HHS fund the development research, procurement, and storage of medical countermeasures, including vaccines, for the Strategic National Stockpile.

ASPR’s role: ASPR is responsible for the entire Project BioShield contracting process, including issuing requests for information and requests for proposals, awarding contracts, managing awarded contracts, and determining whether contractors have met the minimum requirements for payment. ASPR maintains a Web site detailing all Project BioShield solicitations and awards. ASPR has the primary responsibility for engaging with the industry and awarding contracts for large-scale manufacturing of licensable products, including vaccines, for delivery into the Strategic National Stockpile. With authorities recently granted, the Biomedical Advanced Research and Development Authority (BARDA) will be able to use a variety of funding mechanisms to support the advanced development of medical countermeasures and to award up to 50 percent of the contract as milestone payments before purchased products are delivered.

NIAID’s role: NIAID is the lead agency in NIH for early candidate research and development of medical countermeasures for biodefense. NIAID issues grants and awards contracts for research on medical countermeasures exploration and early development, but it has no responsibility for taking research forward into marketable products.

FDA’s role: Through its Center for Biologics Evaluation and Research (CBER), FDA licenses many biological products, including vaccines, and the facilities that produce them. Manufacturers are required to comply with current Good Manufacturing Practices regulations, which regulate personnel, buildings, equipment, production controls, records, and other aspects of the vaccine manufacturing process. FDA has also established the Office of Counterterrorism Policy and Planning in the Office of the Commissioner, which issued the draft Guidance on the Emergency Use Authorization of Medical Products in June 2005. This guidance describes in general terms the data that should be submitted to FDA, when available, for unapproved products or unapproved uses of approved products that HHS or another entity wishes FDA to consider for use in the event of a declared emergency. The final emergency use authorization (EUA) guidance was issued in July 2007.
CDC’s role: Since 1999, CDC has had the major responsibility for managing and deploying the medical countermeasures—such as antibiotics and vaccines—stored in the Strategic National Stockpile. DOD is not currently a part of Project BioShield. Beginning in 1998, DOD had a program to vaccinate all military service members with BioThrax. Under DOD’s program, personnel being deployed to Iraq, Afghanistan, and the Korean peninsula are prevaccinated with BioThrax. For other deployments, this vaccination is voluntary. DOD also has a program to order, stockpile, and use the licensed anthrax vaccine. DOD estimates its needs for BioThrax doses and bases its purchases on that estimate. An FDA-licensed anthrax vaccine, BioThrax, has been available since 1970. The vaccine has been recommended in a variety of situations, for example, for laboratory workers who produce anthrax cultures. The BioShield program stockpiled BioThrax for the Strategic National Stockpile for postexposure use in the event that a large number of U.S. civilians are exposed to anthrax. ASPR had already acquired 10 million doses of BioThrax from Emergent BioSolutions by 2006 and recently purchased an additional 10 million doses. Three major factors contributed to the failure of the first Project BioShield procurement effort. First, ASPR awarded the first BioShield procurement contract to VaxGen when its product was at a very early stage of development and many critical manufacturing issues had not been addressed. Second, VaxGen took unrealistic risks in accepting the contract terms. Third, key parties did not clearly articulate and understand critical requirements at the outset. ASPR’s decision to launch the VaxGen procurement contract for the rPA anthrax vaccine at an early stage of development, combined with the delivery requirement for 25 million doses within 2 years, did not take the complexity of vaccine development into consideration and was overly aggressive. Citing the urgency involved, ASPR awarded the procurement contract to VaxGen several years before the planned completion of earlier and uncompleted NIAID development contracts with VaxGen and thus preempted critical development work. NIAID awarded VaxGen two development contracts, neither of which was near completion when ASPR awarded the procurement contract. However, on November 4, 2004, a little more than a year after NIAID awarded VaxGen its second development contract, ASPR awarded the procurement contract to VaxGen for 75 million doses of its rPA anthrax vaccine. At that time, VaxGen was still at least a year away from completing the Phase 2 clinical trials under the second NIAID development contract. Moreover, VaxGen was still finishing work on the original stability testing required under the first development contract. At the time of the award, ASPR officials had no objective criteria, such as Technology Readiness Levels (TRL), to assess product maturity. They were, however, optimistic that the procurement contract would be successful. One official described its chances of success at 80 percent to 90 percent. However, a key official at VaxGen told us at the same time that VaxGen estimated the chances of success at 10 percent to 15 percent. When we asked ASPR officials why they awarded the procurement contract when they did, they pointed to a sense of urgency at that time and the difficulties in deciding when to launch procurement contracts.
According to industry experts, preempting the development contract 2 years before the work was completed—with almost half of its scheduled milestones remaining—was questionable, especially for vaccine development work, which is known to be susceptible to technical issues even in late stages of development. NIAID officials also told us it was too early for a BioShield purchase. At a minimum, the time extensions for NIAID’s first development contract with VaxGen to accommodate stability testing should have indicated to ASPR that development on its candidate vaccine was far from complete. After ASPR awarded VaxGen the procurement contract, NIAID canceled several milestones under its development contracts, undermining VaxGen’s ability to deliver the required number of doses within the 2-year time frame. VaxGen officials told us that they understood their chances for success were limited and that the contract terms posed significant risks. These risks arose from aggressive time lines, VaxGen’s limitations with regard to in-house technical expertise in stability and vaccine formulation—a condition exacerbated by the attrition of key staff from the company as the contract progressed—and its limited options for securing additional funding should the need arise. Industry experts told us that a 2-year time line to deliver 75 million filled and finished doses of a vaccine from a starting point just after phase 1 trials is a near-impossible task for any company. VaxGen officials told us that at the time of the procurement award they knew the probability of success was very low, but they were counting on ASPR’s willingness to be flexible with the contract time line and work with them to achieve success. In fact, in May 2006, ASPR did extend the contract deadline to initiate delivery to the stockpile by an additional 2 years. However, on November 3, 2006, FDA imposed a clinical hold on VaxGen’s forthcoming phase 2 trial after determining that data submitted by VaxGen were insufficient to ensure that the product would be stable enough to resume clinical testing. By that time, ASPR had lost faith in VaxGen’s technical ability to solve its stability problems in any reasonable time frame. When VaxGen failed to meet a critical performance milestone to initiate the next clinical trial, ASPR terminated the contract. According to VaxGen’s officials, throughout the two development contracts and the Project BioShield procurement contract, VaxGen’s staff peaked at only 120, and the company was consistently unable to marshal sufficient technical expertise. External expertise that might have helped VaxGen better understand its stability issue was never applied. At one point during the development contracts, NIAID—realizing VaxGen had a stability problem with its product—convened a panel of technical experts in Washington, D.C. NIAID officials told us that at the time of the panel meeting, they offered to fund technical experts to work with the company, but VaxGen opted not to accept the offer. Conversely, VaxGen officials reported to us that at the time NIAID convened the panel of experts, NIAID declined to fund the work recommended by the expert panel. Finally, VaxGen accepted the procurement contract terms even though the financial constraints imposed by the BioShield Act limited its options for securing any additional funding needed.
In accordance with this act, payment was conditional on delivery of a product to the stockpile, and little provision could be made, contractually, to support any unanticipated or additional development needed—for example, to work through issues of stability or reformulation. Both problems are frequently encountered throughout the developmental life of a vaccine. This meant that the contractor would pay for any development work needed on the vaccine. VaxGen, as a small biotechnology company, had limited internal financial resources and was dependent on being able to attract investor capital for any major influx of funds. However VaxGen was willing to accept the firm, fixed-price contract and assume the risks involved. VaxGen did so even though it understood that development on its rPA vaccine was far from complete when the procurement contract was awarded and that the contract posed significant inherent risks. Important requirements regarding the data and testing required for the rPA anthrax vaccine to be eligible for use in an emergency were not known at the outset of the procurement contract. They were defined in 2005 when FDA introduced new general guidance on EUA. In addition, ASPR’s anticipated use of the rPA anthrax vaccine was not articulated to all parties clearly enough and evolved over time. Finally, according to VaxGen, purchases of BioThrax raised the requirement for use of the VaxGen rPA vaccine. All of these factors created confusion over the acceptance criteria for VaxGen’s product and significantly diminished VaxGen’s ability to meet contract time lines. After VaxGen received its procurement contract, draft guidance was issued that addressed the eventual use of any unlicensed product in the stockpile. This created confusion over the criteria against which VaxGen’s product would be evaluated, strained relations between the company and the government, and caused a considerable amount of turmoil within the company as it scrambled for additional resources to cover unplanned testing. In June 2005, FDA issued draft EUA guidance, which described for the first time the general criteria that FDA would use to determine the suitability of a product for use in an emergency. This was 7 months after the award of the procurement contract to VaxGen and 14 months after the due date for bids on that contract. Since the request for proposal for the procurement contract was issued and the award itself was made before the EUA guidance was issued, neither could take the EUA requirements into consideration. The procurement contract wording stated that in an emergency, the rPA anthrax vaccine was to be “administered under a ‘Contingency Use’ Investigational New Drug (IND) protocol” and that vaccine acceptance into the stockpile was dependent on the accumulation and submission of the appropriate data to support the “use of the product (under IND) in a postexposure situation.” However, FDA officials told us they do not use the phrase “contingency use” under IND protocols. When we asked ASPR officials about the requirements for use defined in the contract, they said that the contract specifications were consistent with the statute and the needs of the stockpile. They said their contract used “a term of art” for BioShield products. That is, the contractor had to deliver a “usable product” under FDA guidelines. The product could be delivered to the stockpile only if sufficient data were available to support emergency use. 
ASPR officials told us that FDA would define “sufficient data” and the testing hurdles a product needed to overcome to be considered a “usable product.” According to FDA, while VaxGen and FDA had monthly communication, data requirements for emergency use were not discussed until December 2005, when VaxGen asked FDA what data would be needed for emergency use. In January 2006, FDA informed VaxGen, under its recently issued draft EUA guidance, of the data FDA would require from VaxGen for its product to be eligible for consideration for use in an emergency. The draft guidance described in general terms FDA’s current thinking concerning what FDA considered sufficient data and the testing needed for a product to be considered for authorization in certain emergencies. Because the EUA guidance is intended to create a more feasible protocol for using an unapproved product in a mass emergency than the “contingency use” IND protocol that ASPR cited in the procurement contract, it may require more stringent data for safety and efficacy. Under an IND protocol, written, informed consent must be received before administering the vaccine to any person, and reporting requirements identical to those in a human clinical trial are required. The EUA guidance—as directed by the BioShield law—eased both informed consent and reporting requirements. This makes sense in view of the logistics of administering vaccine to millions of people in the large-scale, postexposure scenarios envisioned. Because EUA guidance defines a less stringent requirement for the government to use the product, it correspondingly may require more testing and clinical trial work than was anticipated under contingency use. Several of the agencies and companies involved in BioShield-related work have told us the EUA guidance appears to require a product to be further along the development path to licensure than the previous contingency use protocols would indicate. VaxGen officials told us that if the draft EUA guidance was the measure of success, then VaxGen estimated significant additional resources would be needed to complete testing to accommodate the expectations under this new guidance. NIAID told us that the EUA guidance described a product considerably closer to licensure (85 percent to 90 percent) than it had assumed for a Project BioShield medical countermeasure (30 percent) when it initially awarded the development contracts. FDA considers a vaccine’s concept of use important information to gauge the data and testing needed to ensure the product’s safety and efficacy. According to FDA, data and testing requirements to support a product’s use in an emergency context may vary depending on many factors, including the number of people to whom the product is expected to be administered. The current use of an unlicensed product involves assessing potential risks and benefits from using an unapproved drug in a very small number of people who are in a potentially life-threatening situation. In such situations, because of the very significant potential for benefit, the amount of safety and efficacy data needed to make the risk-benefit assessment might be lower than in an emergency situation where an unlicensed vaccine might be offered to millions of healthy people. This distinction is critical for any manufacturer of a product intended for use in such scenarios—it defines the level of data and testing required. Product development plans and schedules rest on these requirements.
However, in late 2005, as VaxGen was preparing for the second phase 2 trial and well into its period of performance under the procurement contract, it became clear that FDA and the other parties had different expectations for the next phase 2 trial. From FDA’s perspective, the purpose of phase 2 trials was to place the product and sponsor (VaxGen) in the best position possible to design and conduct a pivotal phase 3 trial in support of licensure, and not to produce meaningful safety and efficacy data to support use of the vaccine in a contingency protocol under IND as expected by VaxGen, ASPR, and CDC. This lack of a clear understanding of the concept of use for VaxGen’s product caused FDA to delay replying to VaxGen until it could confer with ASPR and CDC to clarify this issue. Thus, we conclude that neither VaxGen nor FDA understood the rPA anthrax vaccine concept of use until this meeting. The introduction of BioThrax into the stockpile undermined the criticality of getting an rPA vaccine into the stockpile and, at least in VaxGen’s opinion, forced FDA to hold the rPA vaccine to a higher standard that the company had neither the plans nor the resources to achieve. ASPR purchased 10 million doses of BioThrax in 2005 and 2006 as a stopgap measure for postexposure situations. The EUA guidance states that FDA will “authorize” an unapproved or unlicensed product—such as the rPA anthrax vaccine candidate—only if “there is no adequate, approved and available alternative.” According to the minutes of the meeting between FDA and VaxGen, in January 2006, FDA reported that the unlicensed rPA anthrax vaccine would be used in an emergency after the stockpiled BioThrax, that is, “when all of the currently licensed had been deployed.” This diminished the likelihood of a scenario in which the rPA vaccine might be expected to be used out of the stockpile and, in VaxGen’s opinion, raised the bar for its rPA vaccine. We identified two issues related to using the BioThrax vaccine in the Strategic National Stockpile. First, ASPR lacks an effective strategy to minimize waste. As a consequence, based on current inventory, over $100 million is likely to be wasted annually, beginning in 2008. Three lots of BioThrax vaccine in the stockpile have already expired, resulting in losses of over $12 million. According to the data provided by CDC, 28 lots of BioThrax vaccine will expire in calendar year 2008. ASPR paid approximately $123 million for these lots. For calendar year 2009, 25 additional lots—valued at about $106 million—will reach their expiration dates. ASPR could minimize the potential waste of these lots by developing a single inventory system with DOD—which uses large quantities of the BioThrax vaccine—with rotation based on a first-in, first-out principle. Because DOD is a high-volume user of the BioThrax vaccine, ASPR could arrange for DOD to draw vaccine from lots long before their expiration dates. These lots could then be replenished with fresh vaccine from the manufacturer. DOD, ASPR, industry experts, and Emergent BioSolutions (the manufacturer of BioThrax) agree that rotation on a first-in, first-out basis would minimize waste. DOD and ASPR officials told us that they discussed a rotation option in 2004 but identified several obstacles. In July 2007, DOD officials believed they might not be able to transfer funds to ASPR if DOD purchased BioThrax from ASPR. However, in response to our draft report, DOD informed us that funding is not an issue.
However, ASPR continues to believe that the transfer of funds would be a problem. DOD stated that smallpox vaccine (Dryvax) procurement from HHS is executed under such an arrangement. Further, DOD and ASPR officials told us that they use different authorities to indemnify the manufacturer against any losses or problems that may arise from use of the vaccine. According to DOD, this area may require legislative action to ensure that vaccine purchased by ASPR can be used in the DOD immunization program. Finally, since DOD vaccinates its troops at various locations around the world, there may be logistical distribution issues. A DOD official acknowledged that these issues could be resolved. Second, ASPR plans to use expired vaccine from the stockpile, which violates FDA’s current rules. Data provided by CDC indicated that two lots of BioThrax vaccine expired in December 2006 and one in January 2007. CDC officials stated that their policy is to dispose of expired lots since they cannot be used and continuing storage results in administrative costs. FDA rules prohibit the use of expired vaccine. Nevertheless, according to CDC officials, ASPR told CDC not to dispose of the three lots of expired BioThrax vaccine. ASPR officials told us that ASPR’s decision was based on the possible need to use these lots in an emergency. ASPR’s planned use of expired vaccine would violate FDA’s current rules and could undermine public confidence because ASPR would be unable to guarantee the potency of the vaccine. The termination of the first major procurement contract for rPA anthrax vaccine raised important questions regarding the approach taken to develop a new anthrax vaccine and to build a robust and sustainable biodefense medical countermeasure industry by bringing pharmaceutical and biotechnology firms into partnership with the government. With the termination of the contract, the government does not have a new, improved anthrax vaccine for the public, and the rest of the biotech industry is now questioning whether the government can clearly define its requirements for future procurement contracts. Since HHS components have not completed a formal lessons-learned exercise after terminating VaxGen’s development and procurement contracts, these components may repeat the same mistakes in the future in the absence of a corrective plan. Articulating concepts of use and all critical requirements clearly at the outset for all future medical countermeasures would help the HHS components involved in the anthrax procurement process to avoid past mistakes. If this is not done, the government risks losing the future interest and participation of the biotechnology industry. Given that the amount of money appropriated to procure medical countermeasures for the stockpile is limited, it is imperative that ASPR develop effective strategies to minimize waste. Since vaccines are perishable commodities that should not be used after their expiration dates, finding other users for the stockpile products before they expire would minimize waste. Because DOD requires a large amount of the BioThrax vaccine on an annual basis, it could use a significant portion of BioThrax in the stockpile before it expires. The report that we are issuing today makes three recommendations.
To avoid repeating the mistakes that led to the failure of the first rPA procurement effort, we recommend that the Secretary of HHS direct ASPR, NIAID, FDA, and CDC to ensure that the concept of use and all critical requirements are clearly articulated at the outset for any future medical countermeasure procurement. To ensure public confidence and comply with FDA’s current rules, we recommend that the Secretary of HHS direct ASPR to destroy the expired BioThrax vaccine in the stockpile. To minimize waste of the BioThrax vaccine in the stockpile, we recommend that the Secretaries of HHS and DOD develop a single integrated inventory system for the licensed anthrax vaccine, with rotation based on a first-in, first-out principle. HHS and DOD generally concurred with our recommendations. In addition, with regard to our recommendation on an integrated inventory system, they identified legal challenges to developing an integrated inventory system for BioThrax in the stockpile, which may require legislative action. Although HHS and DOD use different authorities to address BioThrax liability issues, both authorities could apply to either DOD or HHS; consequently, indemnity does not appear to be an insurmountable obstacle for future procurements. Mr. Chairman, this concludes my remarks. I will be happy to answer any questions you or other members may have. For questions regarding this testimony, please contact Keith Rhodes at (202) 512-6412 or rhodesk@gao.gov. GAO staff making major contributions to this testimony included Noah Bleicher, William Carrigg, Barbara Chapman, Crystal Jones, Jeff McDermott, Linda Sellevaag, Sushil Sharma, and Elaine Vaurio. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The anthrax attacks in September and October 2001 highlighted the need to develop medical countermeasures. The Project BioShield Act of 2004 authorized the Department of Health and Human Services (HHS) to procure countermeasures for a Strategic National Stockpile. However, in December 2006, HHS terminated the contract for a recombinant protective antigen (rPA) anthrax vaccine because VaxGen failed to meet a critical contractual milestone. Also, supplies of the licensed BioThrax anthrax vaccine already in the stockpile will start expiring in 2008. GAO was asked to testify on its report on Project BioShield, which is being released today. This testimony summarizes (1) factors contributing to the failure of the rPA vaccine contract and (2) issues associated with using the BioThrax in the stockpile. GAO interviewed agency and industry officials, reviewed documents, and consulted with biodefense experts. Three major factors contributed to the failure of the first Project BioShield procurement effort for an rPA anthrax vaccine. First, HHS's Office of the Assistant Secretary for Preparedness and Response (ASPR) awarded the procurement contract to VaxGen, a small biotechnology firm, while VaxGen was still in the early stages of developing a vaccine and had not addressed many critical manufacturing issues. This award preempted critical development work on the vaccine. Also, the contract required VaxGen to deliver 25 million doses of the vaccine in 2 years, which would have been unrealistic even for a larger manufacturer. Second, VaxGen took unrealistic risks in accepting the contract terms. VaxGen officials told GAO that they accepted the contract despite significant risks due to (1) the aggressive delivery time line for the vaccine, (2) VaxGen's lack of in-house technical expertise--a condition exacerbated by the attrition of key company staff as the contract progressed--and (3) VaxGen's limited options for securing any additional funding needed. Third, important Food and Drug Administration (FDA) requirements regarding the type of data and testing required for the rPA anthrax vaccine to be eligible for use in an emergency were not known at the outset of the procurement contract. In addition, ASPR's anticipated use of the rPA anthrax vaccine was not articulated to all parties clearly enough and evolved over time. Finally, according to VaxGen, the purchase of BioThrax for the stockpile as a stopgap measure raised the bar for the VaxGen vaccine. All these factors created confusion over the acceptance criteria for VaxGen's product and significantly diminished VaxGen's ability to meet contract time lines. ASPR has announced its intention to issue another request for proposal for an rPA anthrax vaccine procurement but, along with other HHS components, has not analyzed lessons learned from the first contract's failure and may repeat earlier mistakes. According to industry experts, the lack of specific requirements is a cause of concern to the biotechnology companies that have invested significant resources in trying to meet government needs and now question whether the government can clearly define future procurement contract requirements. GAO identified two issues related with the use of the BioThrax in the Strategic National Stockpile. First, ASPR lacks an effective strategy to minimize the waste of BioThrax. Starting in 2008, several lots of BioThrax in the Strategic National Stockpile will begin to expire. 
As a result, over $100 million per year could be lost for the life of the vaccine currently in the stockpile. ASPR could minimize such potential waste by developing a single inventory system with DOD--a high-volume user of BioThrax--with rotation based on a first-in, first-out principle. DOD and ASPR officials identified a number of obstacles to this type of rotation that may require legislative action. Second, ASPR planned to use three lots of expired BioThrax vaccine in the stockpile in the event of an emergency. This would violate FDA rules, which prohibit using an expired vaccine, and could also undermine public confidence because the vaccine's potency could not be guaranteed.
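The first-in, first-out rotation recommended above amounts to treating the shared stockpile as an expiration-ordered queue: the high-volume user (DOD, in this case) fills routine orders from the oldest unexpired lots, and the manufacturer replenishes the inventory with fresh lots. The sketch below is a minimal illustration of that bookkeeping under stated assumptions; the lot names, dose counts, and dates are hypothetical, not actual stockpile data, and the class is not any agency's inventory system.

```python
import heapq
from datetime import date

class SharedStockpile:
    """Toy model of a single shared vaccine inventory rotated first-in, first-out."""

    def __init__(self):
        self._lots = []  # min-heap ordered by expiration date

    def add_lot(self, lot_id, doses, expires):
        heapq.heappush(self._lots, (expires, lot_id, doses))

    def discard_expired(self, today):
        """Remove and return lots that have passed their expiration date."""
        expired = []
        while self._lots and self._lots[0][0] < today:
            expired.append(heapq.heappop(self._lots))
        return expired

    def draw(self, doses_needed, today):
        """Fill a routine order (e.g., DOD vaccinations) from the oldest unexpired lots."""
        self.discard_expired(today)
        drawn = []
        while doses_needed > 0 and self._lots:
            expires, lot_id, doses = heapq.heappop(self._lots)
            used = min(doses, doses_needed)
            drawn.append((lot_id, used))
            doses_needed -= used
            if doses > used:  # return the unused remainder of the lot
                heapq.heappush(self._lots, (expires, lot_id, doses - used))
        return drawn

# Hypothetical lots: the oldest lot is consumed first, so it never expires unused.
stockpile = SharedStockpile()
stockpile.add_lot("LOT-A", doses=500_000, expires=date(2008, 6, 1))
stockpile.add_lot("LOT-B", doses=500_000, expires=date(2009, 3, 1))

print(stockpile.draw(600_000, today=date(2007, 10, 1)))
# [('LOT-A', 500000), ('LOT-B', 100000)]
```

In this arrangement, routine demand consumes the oldest lots well ahead of their expiration dates and replenishment keeps the emergency reserve fresh, which is the waste-minimizing behavior the recommendation is aimed at.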
FDA may order a postapproval study for a device at the time FDA approves that device for marketing through its premarket approval (PMA) process or its humanitarian device exemption (HDE) process (for devices that treat rare diseases or conditions). There are no statutory limits on the length of a postapproval study, but according to FDA guidance, the device manufacturer and FDA agree on the study plan, which includes a study design (e.g., randomized clinical trial or other study design), the study’s data source, and time frame for when the manufacturer will complete required reports. In contrast, FDA may order a postmarket surveillance study at the time of approval or clearance for certain devices or any time thereafter as long as certain criteria are met. (See table 1.) FDA may order a postmarket surveillance study not only for PMA and HDE devices, but also for devices that are cleared through the less stringent 510(k) premarket notification process—also known as the 510(k) process. FDA may order postmarket surveillance studies if failure of the device would be reasonably likely to have serious adverse health consequences, and such studies may be ordered when FDA officials identify an issue with a device through adverse event reports or reviews of scientific literature. FDA is authorized to order postmarket surveillance studies for a duration of up to 36 months, but the time frame may be extended if the manufacturer and FDA are in agreement. Additionally, FDA may order a study with a longer duration if the device is expected to have significant use in pediatric populations and an extended period is necessary to assess issues like the impact of the device on children’s growth or development. Manufacturers must periodically report to FDA information on these postmarket studies, such as the progress of the study. Table 2 describes the various status categories that apply to postmarket studies. Cardiovascular devices, such as stents and heart valves, accounted for 56 percent of the 313 postapproval studies ordered from January 1, 2007, through February 23, 2015. Orthopedic and general and plastic surgery devices were the second and third most common subjects of postapproval studies, respectively. FDA also ordered postapproval studies for another 11 medical specialties, which are included in the other category. (See table 3.) The number of postapproval studies for cardiovascular devices varied from year to year, with the most cardiovascular device studies ordered in 2008 and 2012. (See fig. 1.) In general, FDA orders a postapproval study to obtain specific information on the postmarket performance of or experience with an approved device. For example, the increase in the number of postapproval studies ordered for cardiovascular devices in 2008 reflects that FDA has required that each new implantable cardioverter defibrillator lead undergo a postapproval study. Other includes devices for ophthalmic (e.g., intraocular lens); obstetrics and gynecology (e.g., permanent birth control system); gastroenterology-urology (e.g., gastric banding system); anesthesiology (e.g., computer-assisted personalized sedation system); clinical chemistry (e.g., artificial pancreas device system); dental (e.g., bone grafting material); ear, nose, and throat (e.g., implantable hearing system); general hospital (e.g., infusion pump); microbiology (e.g., human papillomavirus test); neurology (e.g., intracranial aneurysm flow diverter); and pathology (e.g., breast cancer detection test).
Most of the postapproval studies ordered from January 1, 2007, through February 23, 2015, were for devices approved through the PMA process, the more stringent of FDA’s premarket review processes, which requires the manufacturer to supply evidence providing reasonable assurance that the device is safe and effective before the device is legally available on the U.S. market. In terms of study design, more than two-thirds (69 percent) of the 313 postapproval studies ordered during the time frame we examined were prospective cohort studies—that is, studies in which a group using a particular device was compared to a second group not using that device, over a long period of time. (See fig. 2.) For example, one postapproval prospective cohort study was designed to follow patients who received a certain type of breast implant over a 10-year period and to collect information on complications as they occur. Additionally, postapproval studies were conducted using a variety of data sources, including newly collected data and medical device registries. Nearly two-thirds (196 studies) of the postapproval studies we examined relied upon new data collected by the manufacturer, and about one-third (98 studies) used data collected from registries—that is, a data system to collect and maintain structured records on devices for a specified time frame and population. (See table 4.) Registries may be created and maintained by the manufacturer or another organization, such as a medical specialty’s professional association. In addition, FDA has established a National Medical Device Registry Task Force to further examine the implementation of registries in postmarket surveillance. According to FDA, registries play a unique role in the postmarket surveillance of medical devices because they can provide additional detailed information about patients, procedures, and devices. For example, registries can help assess device performance by collecting information on patients with similar medical conditions. About 72 percent of the postapproval studies we examined (or 225 of the 313 studies ordered) were categorized as ongoing as of February 2015. An additional 20 percent were completed, and the remaining 8 percent were inactive. (See fig. 3.) Further analysis of FDA data on the 225 ongoing postapproval studies showed 81 percent (or 182 studies) to be progressing adequately, while the remaining 19 percent (43 studies) were delayed as of February 2015. The 182 ongoing postapproval studies considered to be progressing adequately—that is, the study was pending, the protocol or plan was pending, or progress was adequate—had been ongoing for an average of 37 months, or a little over 3 years. Similarly, the 43 ongoing postapproval studies considered to be delayed—that is, the protocol or plan was overdue or progress was inadequate—had been ongoing for an average of 39 months, or a little over 3 years. Delayed studies include studies for which FDA had not approved a study plan within 6 months of the PMA approval date (3 studies) or studies that had begun but had not progressed as intended (40 studies). According to FDA officials, a key reason for a study’s delay may be limited patient enrollment into the postapproval study. FDA officials said they work with manufacturers to address manufacturers’ inability to enroll patients, in part, by suggesting different strategies to improve enrollment, such as hiring a dedicated person for recruitment or reducing the cost of the study device to make it competitive with conventional treatments.
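The average-duration figures above are simple arithmetic on study records: months elapsed for each ongoing study, averaged within each progress category. The sketch below illustrates that kind of calculation under stated assumptions; the study identifiers, dates, and statuses are hypothetical, the grouping of statuses into "progressing adequately" and "delayed" follows the definitions given above, and the assumption that duration runs from the study order date to the February 2015 data-extraction date is ours rather than a documented FDA or GAO convention.

```python
from datetime import date

# Hypothetical study records: (study_id, FDA progress status, order date).
STUDIES = [
    ("PAS-001", "progress adequate",   date(2011, 5, 12)),
    ("PAS-002", "plan pending",        date(2013, 1, 30)),
    ("PAS-003", "progress inadequate", date(2010, 11, 2)),
    ("PAS-004", "plan overdue",        date(2012, 7, 19)),
]

ADEQUATE = {"study pending", "plan pending", "progress adequate"}
DELAYED = {"plan overdue", "progress inadequate"}

EXTRACTION_DATE = date(2015, 2, 23)  # date the status snapshot was taken

def months_between(start, end):
    """Whole months elapsed between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def average_months(statuses):
    """Average months ongoing for studies whose status falls in the given set."""
    durations = [months_between(ordered, EXTRACTION_DATE)
                 for _, status, ordered in STUDIES if status in statuses]
    return sum(durations) / len(durations) if durations else 0.0

print(f"progressing adequately: {average_months(ADEQUATE):.0f} months on average")
print(f"delayed:                {average_months(DELAYED):.0f} months on average")
```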
Twenty percent (or 62 studies) of the 313 postapproval studies were categorized as completed as of February 23, 2015—that is, FDA determined that the manufacturer had fulfilled the study order and had closed the study. As table 5 shows, on average, these completed postapproval studies took about 36 months, or 3 years, with the longest study taking almost 7 years. The remaining 8 percent (or 26 studies) were categorized as inactive. Postapproval studies that are considered inactive include studies in which, for example, the device is no longer being marketed or the study’s research questions are no longer relevant. FDA ordered 392 postmarket surveillance studies, half of which (196 studies) were for orthopedic medical devices, from May 1, 2008, through February 24, 2015. In 2011 alone, FDA ordered 176 studies for orthopedic devices following safety concerns about metal-on-metal hip implants, including potential bone or tissue damage from metal particles. (See fig. 4.) An additional 40 percent (or 158 studies) of the postmarket surveillance studies FDA ordered were for devices used in general and plastic surgery and obstetrics and gynecology procedures. FDA ordered 121 postmarket surveillance studies for devices in these medical specialties in 2012, following safety concerns, such as severe pain, about implanted surgical mesh used for urogynecologic procedures. About 10 percent of the postmarket surveillance studies were for devices in other medical specialties. Other includes general hospital (e.g., intravascular administration set), cardiovascular (e.g., vena cava filter), dental (e.g., temporomandibular joint implant), immunology, neurology, ophthalmic, and physical medicine devices. Between May 1, 2008, and February 24, 2015, about 94 percent of the postmarket surveillance studies ordered were for devices cleared through the 510(k) premarket notification process. This reflects the safety concerns regarding metal-on-metal implants and implantable surgical mesh used for urogynecologic procedures that arose after the devices were cleared through the 510(k) process, according to FDA officials. About 88 percent of the postmarket surveillance studies we examined (or 344 out of 392 studies) were categorized as inactive. (See fig. 5.) A study might be categorized as inactive, for example, because it had been consolidated, meaning that a manufacturer was able to combine an order for a postmarket surveillance study with other related study orders into a single study. For example, if FDA issued 22 orders for postmarket surveillance studies for different models of metal-on-metal implants from a single manufacturer, the manufacturer could combine all of the orders into a single study covering all of the devices, and the other 21 orders for postmarket surveillance studies would be categorized as consolidated and considered inactive. About 31 percent (or 108 studies) were inactive because they had been consolidated into another study. Another 31 percent of the inactive studies (or 107 studies) were categorized by FDA as either terminated, meaning the study was no longer relevant because, for example, the manufacturer changed the indication for use that was the subject of the postmarket surveillance study, or withdrawn by FDA because the manufacturer demonstrated the objective of the study using publicly available data and FDA agreed with the results.
The remaining 38 percent of the inactive studies (or 129 studies) were categorized as other—that is, the status did not fit another category because, for example, the device is no longer being marketed. However, according to FDA officials, if the manufacturer begins marketing the device again, it will have to conduct the study. The inactive category for postmarket surveillance studies includes studies with one of four FDA study statuses: (1) other—that is, the study status does not fit another category because, for example, the device is no longer being marketed or is being redesigned; (2) consolidated—that is, the study was one of many postmarket surveillance studies ordered and the manufacturer, with the approval of FDA, consolidated these multiple studies into a single study; (3) terminated—that is, studies that were terminated by FDA because they were no longer relevant (e.g., the manufacturer changed the indication for use that was the subject of the postmarket surveillance study); or (4) withdrawn—that is, studies that were withdrawn because the manufacturer demonstrated the objective of the study using publicly available data and FDA agreed with the results. While 88 percent of the postmarket surveillance studies in our analysis were inactive, the remaining 12 percent (or 48 studies) were either still ongoing or completed as of February 24, 2015. Specifically, 10 percent (or 40 studies) were categorized as ongoing, while 2 percent (or 8 studies) were completed. Of the 40 ongoing postmarket surveillance studies, more than half were progressing adequately, while the rest were delayed. Further analysis showed the following: The 21 ongoing postmarket surveillance studies that FDA considered to be progressing adequately had been ongoing for an average of 33 months, or about 2.7 years. (See table 6.) The 19 ongoing postmarket surveillance studies that FDA considered to be delayed had been ongoing for an average of 49 months, or about 4 years. Delayed studies included studies for which FDA had not approved a study plan within 6 months of ordering the study or studies that had begun but were not progressing as intended. According to FDA, postmarket surveillance studies may be delayed for reasons similar to postapproval studies, such as difficulty enrolling patients into the study. Regarding the eight completed postmarket surveillance studies, the average length of time to complete the study—that is, the time from the study order to the date FDA determined that the manufacturer had fulfilled the study order and had closed the study—was about 29 months, or 2.4 years. FDA generally may order a manufacturer to conduct a postmarket surveillance study for up to 36 months unless the manufacturer and FDA agree to an extended time frame. We provided a draft of this report to the Secretary of Health and Human Services. HHS provided technical comments that were incorporated, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix I. In addition to the contact named above, Kim Yamane, Assistant Director; Britt Carlson; Carolyn Fitzgerald; Sandra George; Cathleen Hamann; and Gay Hee Lee were major contributors to this report.
Americans depend on FDA—an agency within the Department of Health and Human Services (HHS)—to oversee the safety and effectiveness of medical devices sold in the United States. FDA's responsibilities begin before a new device is brought to market and continue after a device is on the market. As part of its postmarket efforts, FDA may order manufacturers to conduct two types of studies: (1) postapproval studies, ordered at the time of device approval, and (2) postmarket surveillance studies, generally ordered after a device is on the market. GAO was asked to report on the characteristics and status of postmarket studies. This report describes (1) the types of devices for which FDA has ordered a postapproval study and the status of these studies, and (2) the types of devices for which FDA has ordered a postmarket surveillance study and the status of these studies. GAO analyzed FDA data—including data on medical specialty and study status as of February 2015—for (1) postapproval studies ordered from January 1, 2007, through February 23, 2015, and (2) postmarket surveillance studies ordered from May 1, 2008, through February 24, 2015. These represent the time periods for which FDA reported consistently tracking study data. GAO also reviewed documents, such as FDA guidance, and interviewed FDA officials. HHS provided technical comments that were incorporated, as appropriate. Fifty-six percent of the 313 medical device postapproval studies—studies that are ordered at the time of device approval—the Food and Drug Administration (FDA) ordered from January 1, 2007, through February 23, 2015, were for cardiovascular devices and most were making adequate progress. Postapproval studies are ordered to obtain additional information not available before devices are marketed, such as a device's performance over the course of long-term use. In terms of study design, 69 percent of the 313 postapproval studies ordered were prospective cohort studies—that is, studies in which a group using a particular device was compared to a second group not using that device, over a long period of time. Most (72 percent) of the postapproval studies were ongoing as of February 2015, 20 percent of studies were completed, and 8 percent were inactive because, for example, the device is no longer marketed. Ongoing postapproval studies that GAO reviewed had been ongoing for an average of a little more than 3 years; FDA considered most of them (182 studies) to be progressing adequately and the rest (43 studies) to have inadequate progress or to otherwise be delayed. According to FDA officials, a key reason for a study's delay may be limited patient enrollment in the postapproval study. For the studies that GAO reviewed, manufacturers completed postapproval studies in about 3 years on average, with the longest study taking almost 7 years. Ninety percent of the 392 medical device postmarket surveillance studies FDA ordered from May 1, 2008, through February 24, 2015, were for orthopedic devices and for devices such as certain kinds of implantable surgical mesh, following safety concerns with these types of devices, and many were consolidated into ongoing studies. Unlike postapproval studies, FDA may order postmarket surveillance studies at the time or after a device is approved or cleared for marketing—for example, if FDA becomes aware of a potential safety issue. Safety concerns about metal-on-metal hip implants, including potential bone and tissue damage from metal particles, led to an increase in such studies ordered in 2011.
Forty percent of the 392 ordered studies were for implanted surgical mesh and other devices used in general and plastic surgery and obstetrics and gynecology procedures. FDA ordered most of these studies in 2012, following safety concerns associated with implanted surgical mesh, such as severe pain. Eighty-eight percent of the postmarket surveillance studies GAO analyzed were inactive as of February 2015. Inactive studies include those that were consolidated (108 studies), meaning that a manufacturer was able to combine an order for a postmarket surveillance study with other related study orders into a single study, such as combining studies of multiple device models into a single study; and those that were inactive for other reasons, such as if the order was for a device that is no longer marketed. The remaining 12 percent of the postmarket surveillance studies were either still ongoing (40 studies) or completed (8 studies). Of the 40 ongoing studies, more than half were progressing adequately, according to FDA, and had been ongoing for an average of a little less than 3 years; the rest were delayed and had been ongoing for an average of about 4 years as of February 2015. According to FDA, postmarket surveillance studies may be delayed for reasons similar to postapproval studies, such as difficulty enrolling patients into the study.
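The postmarket surveillance counts reported above can be reconciled the same way as the postapproval counts. The Python sketch below is illustrative only and is not part of GAO's methodology; it uses only the figures cited in the text (392 studies ordered; 344 inactive, 40 ongoing, and 8 completed; the inactive studies split into 108 consolidated, 107 terminated or withdrawn, and 129 other; and the ongoing studies split into 21 progressing adequately and 19 delayed).

```python
# Illustrative reconciliation of the postmarket surveillance study counts cited above.
# All figures come from the report text; this is not part of GAO's methodology.

total_ordered = 392
inactive, ongoing, completed = 344, 40, 8
consolidated, terminated_or_withdrawn, other = 108, 107, 129  # breakdown of the inactive studies
adequate, delayed = 21, 19                                    # breakdown of the ongoing studies

assert inactive + ongoing + completed == total_ordered
assert consolidated + terminated_or_withdrawn + other == inactive
assert adequate + delayed == ongoing

print(f"inactive: {inactive / total_ordered:.0%} of studies ordered")
print(f"ongoing: {ongoing / total_ordered:.0%}, completed: {completed / total_ordered:.0%}")
print(f"consolidated: {consolidated / inactive:.0%} of inactive studies")
print(f"terminated or withdrawn: {terminated_or_withdrawn / inactive:.0%} of inactive studies")
print(f"other: {other / inactive:.0%} of inactive studies")
print(f"progressing adequately: {adequate / ongoing:.0%} of ongoing studies")
```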
As a result of 150 years of changes in financial regulation in the United States, the regulatory system has become complex and fragmented. Today, responsibilities for overseeing the financial services industry are shared among almost a dozen federal banking, securities, futures, and other regulatory agencies, numerous self-regulatory organizations, and hundreds of state financial regulatory agencies. In particular, five federal agencies—the Federal Deposit Insurance Corporation, the Federal Reserve, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, and the National Credit Union Administration—and multiple state agencies oversee depository institutions. Securities activities are overseen by the Securities and Exchange Commission and state government entities, as well as by private sector organizations performing self-regulatory functions. Futures trading is overseen by the Commodity Futures Trading Commission and also by industry self-regulatory organizations. Insurance activities are primarily regulated at the state level with little federal involvement. Other federal regulators also play important roles in the financial regulatory system, such as the Public Company Accounting Oversight Board, which oversees the activities of public accounting firms, and the Federal Trade Commission, which acts as the primary federal agency responsible for enforcing compliance with federal consumer protection laws for financial institutions, such as finance companies, that are not overseen by another financial regulator. Much of this structure has developed as the result of statutory and regulatory changes that were often implemented in response to financial crises or significant developments in the financial services sector. For example, the Federal Reserve System was created in 1913 in response to financial panics and instability around the turn of the century, and much of the remaining structure for bank and securities regulation was created as a result of the turmoil of the Great Depression in the 1920s and 1930s. Changes in the types of financial activities permitted for depository institutions and their affiliates have also shaped the financial regulatory system over time. For example, under the Glass-Steagall provisions of the Banking Act of 1933, financial institutions were prohibited from simultaneously offering commercial and investment banking services, but with the passage of the Gramm-Leach-Bliley Act of 1999 (GLBA), Congress permitted financial institutions to fully engage in both types of activities. Several key developments in financial markets and products in the past few decades have significantly challenged the existing financial regulatory structure. (See fig. 1.) First, the last 30 years have seen waves of mergers among financial institutions within and across sectors, such that the United States, while still having large numbers of financial institutions, also has several very large globally active financial conglomerates that engage in a wide range of activities that have become increasingly interconnected. Regulators have struggled, and often failed, to mitigate the systemic risks posed by these conglomerates, and to ensure they adequately manage their risks. The proportion of firms that conduct activities across the financial sectors of banking, securities, and insurance has increased significantly in recent years, but none of the regulators is tasked with assessing the risks posed across the entire financial system.
A second dramatic development in U.S. financial markets in recent decades has been the increasingly critical roles played by less-regulated entities. In the past, consumers of financial products generally dealt with entities such as banks, broker-dealers, and insurance companies that were regulated by a federal or state regulator. However, in the last few decades, various entities—nonbank lenders, hedge funds, credit rating agencies, and special-purpose investment entities—that are not always subject to full regulation by such authorities have become important participants in our financial services markets. These unregulated or less regulated entities can sometimes provide substantial benefits by supplying information or allowing financial institutions to better meet the demands of consumers, investors, or shareholders, but they pose challenges to regulators that do not, or cannot, fully oversee their activities. For example, significant participation in the subprime mortgage market by generally less-regulated nonbank lenders contributed to a dramatic loosening in underwriting standards leading up to the current financial crisis. A third development that has revealed limitations in the current regulatory structure has been the proliferation of more complex financial products. In particular, the increasing prevalence of new and more complex investment products has challenged regulators and investors, and consumers have faced difficulty understanding new and increasingly complex retail mortgage and credit products. Regulators failed to adequately oversee the sale of mortgage products that posed risks to consumers and the stability of the financial system. Fourth, standard setters for accounting and financial regulators have faced growing challenges in ensuring that accounting and audit standards appropriately respond to financial market developments, and in addressing challenges arising from the global convergence of accounting and auditing standards. Finally, with the increasingly global aspects of financial markets, the current fragmented U.S. regulatory structure has complicated some efforts to coordinate internationally with other regulators. For example, the current system has complicated the ability of financial regulators to convey a single U.S. position in international discussions, such as the Basel Accords process for developing international capital standards, and international officials have also indicated that the lack of a single point of contact on, for example, insurance issues has complicated regulatory decision making. As a result of significant market developments in recent decades that have outpaced a fragmented and outdated regulatory structure, significant reforms to the U.S. regulatory system are critically and urgently needed. The current system has important weaknesses that, if not addressed, will continue to expose the nation's financial system to serious risks. As early as 1994, we identified the need to examine the federal financial regulatory structure, including the need to address the risks from new unregulated products. Since then, we have described various options for Congress to consider, each of which provides potential improvements, as well as some risks and potential costs. Our report offers a framework for crafting and evaluating regulatory reform proposals; it consists of the following nine characteristics that should be reflected in any new regulatory system.
By applying the elements of this framework, the relative strengths and weaknesses of any reform proposal should be better revealed, and policymakers should be able to focus on identifying trade-offs and balancing competing goals. Similarly, the framework could be used to craft proposals, or to identify aspects to be added to existing proposals to make them more effective and appropriate for addressing the limitations of the current system. 1. Clearly defined regulatory goals. A regulatory system should have goals that are clearly articulated and relevant, so that regulators can effectively conduct activities to implement their missions. A critical first step to modernizing the regulatory system and enhancing its ability to meet the challenges of a dynamic financial services industry is to clearly define regulatory goals and objectives. In the background of our report, we identified four broad goals of financial regulation that regulators have generally sought to achieve. These include ensuring adequate consumer protections, ensuring the integrity and fairness of markets, monitoring the safety and soundness of institutions, and acting to ensure the stability of the overall financial system. However, these goals are not always explicitly set in the federal statutes and regulations that govern these regulators. Having specific goals clearly articulated in legislation could serve to better focus regulators on achieving their missions with greater certainty and purpose, and provide continuity over time. Given some of the key changes in financial markets discussed in our report—particularly the increased interconnectedness of institutions, the increased complexity of products, and the increasingly global nature of financial markets—Congress should consider the benefits that may result from re-examining the goals of financial regulation and making explicit a set of comprehensive and cohesive goals that reflect today’s environment. For example, it may be beneficial to have a clearer focus on ensuring that products are not sold with unsuitable, unfair, deceptive, or abusive features; that systemic risks and the stability of the overall financial system are specifically addressed; or that U.S. firms are competitive in a global environment. This may be especially important given the history of financial regulation and the ad hoc approach through which the existing goals have been established. We found varying views about the goals of regulation and how they should be prioritized. For example, representatives of some regulatory agencies and industry groups emphasized the importance of creating a competitive financial system, whereas members of one consumer advocacy group noted that reforms should focus on improving regulatory effectiveness rather than addressing concerns about market competitiveness. In addition, as the Federal Reserve notes, financial regulatory goals often will prove interdependent and at other times may conflict. Revisiting the goals of financial regulation would also help ensure that all involved entities—legislators, regulators, institutions, and consumers—are able to work jointly to meet the intended goals of financial regulation. Such goals and objectives could help establish agency priorities and define responsibility and accountability for identifying risks, including those that cross markets and industries. Policymakers should also carefully define jurisdictional lines and weigh the advantages and disadvantages of having overlapping authorities. 
While ensuring that the primary goals of financial regulation—including system soundness, market integrity, and consumer protection—are better articulated for regulators, policymakers will also have to ensure that regulation is balanced with other national goals, including facilitating capital raising, innovation, and other benefits that foster long-term growth, stability, and welfare of the United States. Once these goals are agreed upon, policymakers will need to determine the extent to which goals need to be clarified and specified through rules and requirements, or whether to avoid such specificity and provide regulators with greater flexibility in interpreting such goals. Some reform proposals suggest "principles-based regulation," in which regulators apply broad-based regulatory principles on a case-by-case basis. Such an approach offers the potential advantage of allowing regulators to better adapt to changing market developments. Proponents also note that such an approach would prevent institutions in a more rules-based system from complying with the exact letter of the law while still engaging in unsound or otherwise undesirable financial activities. However, such an approach has potential limitations. Opponents note that regulators may face challenges in implementing such a subjective set of principles. A lack of clear rules about activities could lead to litigation if financial institutions and consumers alike disagree with how regulators interpret those goals. Opponents of principles-based regulation note that industry participants who support such an approach have also in many cases advocated for bright-line standards and increased clarity in regulation, which may be counter to a principles-based system. The most effective approach may involve both a set of broad underlying principles and some clear technical rules prohibiting specific activities that have been identified as problematic. Key issues to be addressed: Clarify and update the goals of financial regulation and provide sufficient information on how potentially conflicting goals might be prioritized. Determine the appropriate balance of broad principles and specific rules that will result in the most effective and flexible implementation of regulatory goals. 2. Appropriately comprehensive. A regulatory system should ensure that financial institutions and activities are regulated in a way that ensures regulatory goals are fully met. As such, activities that pose risks to consumer protection, financial stability, or other goals should be comprehensively regulated, while recognizing that not all activities will require the same level of regulation. A financial regulatory system should effectively meet the goals of financial regulation, as articulated as part of this process, in a way that is appropriately comprehensive. In doing so, policymakers may want to consider how to ensure that both the breadth and depth of regulation are appropriate and adequate. That is, policymakers and regulators should consider how to make determinations about which activities and products, both new and existing, require some aspect of regulatory involvement to meet regulatory goals, and then make determinations about how extensive such regulation should be. As we noted in our report, gaps in the current level of federal oversight of mortgage lenders, credit rating agencies, and certain complex financial products such as collateralized debt obligations and credit default swaps likely have contributed to the current crisis.
Congress and regulators may also want to revisit the extent of regulation for entities such as banks that have traditionally fallen within full federal oversight but for which existing regulatory efforts, such as oversight related to risk management and lending standards, have been proven in some cases inadequate by recent events. However, overly restrictive regulation can stifle the financial sectors’ ability to innovate and stimulate capital formation and economic growth. Regulators have struggled to balance these competing objectives, and the current crisis appears to reveal that the proper balance was not in place in the regulatory system to date. Key issues to be addressed: Identify risk-based criteria, such as a product’s or institution’s potential to harm consumers or create systemic problems, for determining the appropriate level of oversight for financial activities and institutions. Identify ways that regulation can provide protection but avoid hampering innovation, capital formation, and economic growth. 3. Systemwide focus. A regulatory system should include a mechanism for identifying, monitoring, and managing risks to the financial system regardless of the source of the risk or the institutions in which it is created. A regulatory system should focus on risks to the financial system, not just institutions. As noted in our report, with multiple regulators primarily responsible for individual institutions or markets, none of the financial regulators is tasked with assessing the risks posed across the entire financial system by a few institutions or by the collective activities of the industry. The collective activities of a number of entities—including mortgage brokers, real estate professionals, lenders, borrowers, securities underwriters, investors, rating agencies and others—likely all contributed to the recent market crisis, but no one regulator had the necessary scope of oversight to identify the risks to the broader financial system. Similarly, once firms began to fail and the full extent of the financial crisis began to become clear, no formal mechanism existed to monitor market trends and potentially stop or help mitigate the fallout from these events. Having a single entity responsible for assessing threats to the overall financial system could prevent some of the crises that we have seen in the past. For example, in its Blueprint for a Modernized Financial Regulatory Structure, Treasury proposed expanding the responsibilities of the Federal Reserve to create a “market stability regulator” that would have broad authority to gather and disclose appropriate information, collaborate with other regulators on rulemaking, and take corrective action as necessary in the interest of overall financial market stability. Such a regulator could assess the systemic risks that arise at financial institutions, within specific financial sectors, across the nation, and globally. However, policymakers should consider that a potential disadvantage of providing the agency with such broad responsibility for overseeing nonbank entities could be that it may imply an official government support or endorsement, such as a government guarantee, of such activities, and thus encourage greater risk taking by these financial institutions and investors. Regardless of whether a new regulator is created, all regulators under a new system should consider how their activities could better identify and address systemic risks posed by their institutions. 
As the Federal Reserve Chairman has noted, regulation and supervision of financial institutions is a critical tool for limiting systemic risk. This will require broadening the focus from individual safety and soundness of institutions to a systemwide oversight approach that includes potential systemic risks and weaknesses. A systemwide focus should also increase attention on how the incentives and constraints created by regulations affect risk taking throughout the business cycle, and what actions regulators can take to anticipate and mitigate such risks. However, as the Federal Reserve Chairman has noted, the more comprehensive the approach, the more technically demanding and costly it would be for regulators and affected institutions. Key issues to be addressed: Identify approaches to broaden the focus of individual regulators or establish new regulatory mechanisms for identifying and acting on systemic risks. Determine what additional authorities a regulator or regulators should have to monitor and act to reduce systemic risks. 4. Flexible and adaptable. A regulatory system should be adaptable and forward-looking such that regulators can readily adapt to market innovations and changes and include a mechanism for evaluating potential new risks to the system. A regulatory system should be designed such that regulators can readily adapt to market innovations and changes and include a formal mechanism for evaluating the full potential range of risks of new products and services to the system, market participants, and customers. An effective system could include a mechanism for monitoring market developments—such as broad market changes that introduce systemic risk, or new products and services that may pose more confined risks to particular market segments—to determine the degree, if any, to which regulatory intervention might be required. The rise of a very large market for credit derivatives, while providing benefits to users, also created exposures that warranted actions by regulators to rescue large individual participants in this market. While efforts are under way to create risk-reducing clearing mechanisms for this market, a more adaptable and responsive regulatory system might have recognized this need earlier and addressed it sooner. Some industry representatives have suggested that principles-based regulation would provide such a mechanism. Designing a system to be flexible and proactive also involves determining whether Congress, regulators, or both should make such determinations, and how such an approach should be clarified in laws or regulations. Important questions also exist about the extent to which financial regulators should actively monitor and, where necessary, approve new financial products and services as they are developed to ensure the least harm from inappropriate products. Some individuals commenting on this framework, including industry representatives, noted that limiting government intervention in new financial activities until it has become clear that a particular activity or market poses a significant risk and therefore warrants intervention may be more appropriate. As with other key policy questions, this may be answered with a combination of both approaches, recognizing that a product approval approach may be appropriate for some innovations with greater potential risk, while other activities may warrant a more reactive approach.
Key issues to be addressed: Determine how to effectively monitor market developments to identify potential risks; the degree, if any, to which regulatory intervention might be required; and who should hold such a responsibility. Consider how to strike the right balance between overseeing new products as they come onto the market to take action as needed to protect consumers and investors, without unnecessarily hindering innovation. 5. Efficient and effective. A regulatory system should provide efficient oversight of financial services by eliminating overlapping federal regulatory missions, where appropriate, and minimizing regulatory burden while effectively achieving the goals of regulation. A regulatory system should provide for the efficient and effective oversight of financial services. Accomplishing this in a regulatory system involves many considerations. First, an efficient regulatory system is designed to accomplish its regulatory goals using the least amount of public resources. In this sense, policymakers must consider the number, organization, and responsibilities of each agency, and eliminate undesirable overlap in agency activities and responsibilities. Determining what is undesirable overlap is a difficult decision in itself. Under the current U.S. system, financial institutions often have several options for how to operate their business and who will be their regulator. For example, a new or existing depository institution can choose among several charter options. Having multiple regulators performing similar functions does allow for these agencies to potentially develop alternative or innovative approaches to regulation separately, with the approach working best becoming known over time. Such proven approaches can then be adopted by the other agencies. On the other hand, this could lead to regulatory arbitrage, in which institutions take advantage of variations in how agencies implement regulatory responsibilities in order to be subject to less scrutiny. Both situations have occurred under our current structure. With that said, recent events clearly have shown that the fragmented U.S. regulatory structure contributed to failures by the existing regulators to adequately protect consumers and ensure financial stability. As we note in our report, efforts by regulators to respond to the increased risks associated with new mortgage products were sometimes slowed in part because of the need for five federal regulators to coordinate their response. The Chairman of the Federal Reserve has similarly noted that the different regulatory and supervisory regimes for lending institutions and mortgage brokers made monitoring such institutions difficult for both regulators and investors. Similarly, we noted in our report that the current fragmented U.S. regulatory structure has complicated some efforts to coordinate internationally with other regulators. One first step to addressing such problems is to seriously consider the need to consolidate depository institution oversight among fewer agencies. Since 1996, we have been recommending that the number of federal agencies with primary responsibilities for bank oversight be reduced. Such a move would result in a system that was more efficient and improve consistency in regulation, another important characteristic of an effective regulatory system. In addition, Congress could consider the advantages and disadvantages of providing a federal charter option for insurance and creating a federal insurance regulatory entity. 
We have not studied the issue of an optional federal charter for insurers, but have through the years noted difficulties with efforts to harmonize insurance regulation across states through the structure based on the National Association of Insurance Commissioners (NAIC). The establishment of a federal insurance charter and regulator could help alleviate some of these challenges, but such an approach could also have unintended consequences for state regulatory bodies and for insurance firms. Also, given the challenges associated with increasingly complex investment and retail products as discussed earlier, policymakers will need to consider how best to align agency responsibilities to better ensure that consumers and investors are provided with clear, concise, and effective disclosures for all products. Organizing agencies around regulatory goals as opposed to the existing sector-based regulation may be one way to improve the effectiveness of the system, especially given some of the market developments discussed earlier. Whatever the approach, policymakers should seek to minimize conflict in regulatory goals across regulators, or provide for efficient mechanisms to coordinate in cases where goals inevitably overlap. For example, in some cases, the safety and soundness of an individual institution may have implications for systemic risk, or addressing an unfair or deceptive act or practice at a financial institution may have implications for the institution's safety and soundness by increasing reputational risk. If a regulatory system assigns these goals to different regulators, it will be important to establish mechanisms for them to coordinate. Proposals to consolidate regulatory agencies for the purpose of promoting efficiency should also take into account any potential trade-offs related to effectiveness. For example, to the extent that policymakers see value in the ability of financial institutions to choose their regulator, consolidating certain agencies may reduce such benefits. Similarly, some individuals have commented that the current system of multiple regulators has led to the development of expertise among agency staff in particular areas of financial market activities that might be threatened if the system were to be consolidated. Finally, policymakers may want to ensure that any transition from the current financial system to a new structure minimizes, as much as possible, any disruption to the operation of financial markets or risks to the government, especially given the current challenges faced in today's markets and broader economy. A financial system should also be efficient by minimizing the burden on regulated entities to the extent possible while still achieving regulatory goals. Under our current system, many financial institutions, and especially large institutions that offer services that cross sectors, are subject to supervision by multiple regulators. While steps toward consolidated supervision and designating primary supervisors have helped alleviate some of the burden, industry representatives note that many institutions face significant costs as a result of the existing financial regulatory system that could be lessened. Such costs, imposed in an effort to meet certain regulatory goals such as safety and soundness and consumer protection, can run counter to other goals of a financial system by stifling innovation and competitiveness. In addressing this concern, it is also important to consider the potential benefits that might result in some cases from having multiple regulators overseeing an institution.
For example, representatives of state banking and other institution regulators, and consumer advocacy organizations, note that concurrent jurisdiction—between two federal regulators or a federal and state regulator—can provide needed checks and balances against individual financial regulators who have not always reacted appropriately and in a timely way to address problems at institutions. They also note that states may move more quickly and more flexibly to respond to activities causing harm to consumers. Some types of concurrent jurisdiction, such as enforcement authority, may be less burdensome to institutions than others, such as ongoing supervision and examination. Key issues to be addressed: Consider the appropriate role of the states in a financial regulatory system and how federal and state roles can be better harmonized. Determine and evaluate the advantages and disadvantages of having multiple regulators, including nongovernmental entities such as self-regulatory organizations, share responsibilities for regulatory oversight. Identify ways that the U.S. regulatory system can be made more efficient, either through consolidating agencies with similar roles or through minimizing unnecessary regulatory burden. Consider carefully how any changes to the financial regulatory system may negatively impact financial market operations and the broader economy, and take steps to minimize such consequences. 6. Consistent consumer and investor protection. A regulatory system should include consumer and investor protection as part of the regulatory mission to ensure that market participants receive consistent, useful information, as well as legal protections for similar financial products and services, including disclosures, sales practice standards, and suitability requirements. A regulatory system should be designed to provide high-quality, effective, and consistent protection for consumers and investors in similar situations. In doing so, it is important to recognize distinctions between retail consumers and more sophisticated consumers, such as institutional investors, where appropriate given the context of the situation. Different disclosures and regulatory protections may be necessary for these different groups. Consumer protection should be viewed from the perspective of the consumer rather than through the various and sometimes divergent perspectives of the multitude of federal regulators that currently have responsibilities in this area. As discussed in our report, many consumers who received loans in the last few years did not understand the risks associated with taking out their loans, especially the risks they would face if housing prices did not continue to increase at the rate they had in recent years. In addition, increasing evidence exists that many Americans are lacking in financial literacy, and the expansion of new and more complex products will continue to create challenges in this area. Furthermore, regulators with existing authority to better protect consumers did not always exercise that authority effectively. In considering a new regulatory system, policymakers should consider the significant lapses in our regulatory system's focus on consumer protection and ensure that such a focus is prioritized in any reform efforts. For example, policymakers should identify ways to improve upon the existing, largely fragmented, system of regulators that must coordinate to act in these areas.
This should include serious consideration of whether to consolidate regulatory responsibilities to streamline and improve the effectiveness of consumer protection efforts. Some market observers have argued that consumer protections could also be enhanced and harmonized across products by extending suitability requirements—which require securities brokers making recommendations to customers to have reasonable grounds for believing that the recommendation is suitable for the customer—to mortgage and other products. Additional consideration could also be given to determining whether certain products are simply too complex to be well understood and to making judgments about limiting or curtailing their use. Key issues to be addressed: Consider how prominent the regulatory goal of consumer protection should be in the U.S. financial regulatory system. Determine what amount, if any, of consolidation of responsibility may be necessary to enhance and harmonize consumer protections, including suitability requirements and disclosures across the financial services industry. Consider what distinctions are necessary between retail and wholesale products, and how such distinctions should affect how they are regulated. Identify opportunities to protect and empower consumers through improving their financial literacy. 7. Regulators provided with independence, prominence, authority, and accountability. A regulatory system should ensure that regulators have independence from inappropriate influence; have sufficient resources, clout, and authority to carry out and enforce statutory missions; and are clearly accountable for meeting regulatory goals. A regulatory system should ensure that any entity responsible for financial regulation is independent from inappropriate influence; has adequate prominence, authority, and resources to carry out and enforce its statutory mission; and is clearly accountable for meeting regulatory goals. With respect to independence, policymakers may want to consider advantages and disadvantages of different approaches to funding agencies, especially to the extent that agencies might face difficulty remaining independent if they are funded by the institutions they regulate. Under the current structure, for example, the Federal Reserve is funded primarily by income earned from U.S. government securities that it has acquired through open market operations and does not assess charges to the institutions it oversees. In contrast, the Office of the Comptroller of the Currency and the Office of Thrift Supervision are funded primarily by assessments on the firms they supervise. Decision makers should consider whether some of these various funding mechanisms are more likely to ensure that a regulator will take action against its regulated institutions without regard to the potential impact on its own funding. With respect to prominence, each regulator must receive appropriate attention and support from top government officials. Inadequate prominence in government may make it difficult for a regulator to raise safety and soundness or other concerns to Congress and the administration in a timely manner. Mere knowledge of a deteriorating situation would be insufficient if a regulator were unable to persuade Congress and the administration to take timely corrective action. This problem would be exacerbated if a regulated institution had more political clout and prominence than its regulator because the institution could potentially block action from being taken.
In considering authority, agencies must have the necessary enforcement and other tools to effectively implement their missions to achieve regulatory goals. For example, in a 2007 report we expressed concerns over the appropriateness of having OTS oversee diverse global financial firms given the size of the agency relative to the institutions for which it was responsible. It is important for a regulatory system to ensure that agencies are provided with adequate resources and expertise to conduct their work effectively. A regulatory system should also include adequate checks and balances to ensure the appropriate use of agency authorities. With respect to accountability, policymakers may also want to consider different governance structures at agencies—the current system includes a combination of agency heads and independent boards or commissions— and how to ensure that agencies are recognized for successes and held accountable for failures to act in accordance with regulatory goals. Key issues to be addressed: Determine how to structure and fund agencies to ensure each has adequate independence, prominence, tools, authority and accountability. Consider how to provide an appropriate level of authority to an agency while ensuring that it appropriately implements its mission without abusing its authority. Ensure that the regulatory system includes effective mechanisms for holding regulators accountable. 8. Consistent financial oversight. A regulatory system should ensure that similar institutions, products, risks, and services are subject to consistent regulation, oversight, and transparency, which should help minimize negative competitive outcomes while harmonizing oversight, both within the United States and internationally. A regulatory system should ensure that similar institutions, products, and services posing similar risks are subject to consistent regulation, oversight, and transparency. Identifying which institutions and which of their products and services pose similar risks is not easy and involves a number of important considerations. Two institutions that look very similar may in fact pose very different risks to the financial system, and therefore may call for significantly different regulatory treatment. However, activities that are done by different types of financial institutions that pose similar risks to their institutions or the financial system should be regulated similarly to prevent competitive disadvantages between institutions. Streamlining the regulation of similar products across sectors could also help prepare the United States for challenges that may result from increased globalization and potential harmonization in regulatory standards. Such efforts are under way in other jurisdictions. For example, at a November 2008 summit in the United States, the Group of 20 countries pledged to strengthen their regulatory regimes and ensure that all financial markets, products, and participants are consistently regulated or subject to oversight, as appropriate to their circumstances. Similarly, a working group in the European Union is slated by the spring of 2009 to propose ways to strengthen European supervisory arrangements, including addressing how their supervisors should cooperate with other major jurisdictions to help safeguard financial stability globally. Promoting consistency in regulation of similar products should be done in a way that does not sacrifice the quality of regulatory oversight. 
As we noted in a 2004 report, different regulatory treatment of bank and financial holding companies, consolidated supervised entities, and other holding companies may not provide a basis for consistent oversight of their consolidated risk management strategies, guarantee competitive neutrality, or contribute to better oversight of systemic risk. Recent events further underscore the limitations brought about when there is a lack of consistency in oversight of large financial institutions. As such, Congress and regulators will need to seriously consider how best to consolidate responsibilities for oversight of large financial conglomerates as part of any reform effort. Key issues to be addressed: Identify institutions and products and services that pose similar risks. Determine the level of consolidation necessary to streamline financial regulation activities across the financial services industry. Consider the extent to which activities need to be coordinated internationally. 9. Minimal taxpayer exposure. A regulatory system should have adequate safeguards that allow financial institution failures to occur while limiting taxpayers' exposure to financial risk. Policymakers should consider identifying the best safeguards and assignment of responsibilities for responding to situations where taxpayers face significant exposures, and should consider providing clear guidelines for when regulatory intervention is appropriate. While an ideal system would allow firms to fail without negatively affecting other firms—and therefore avoid any moral hazard that may result—policymakers and regulators must consider the realities of today's financial system. In some cases, the immediate use of public funds to prevent the failure of a critically important financial institution may be a worthwhile use of such funds if it ultimately serves to prevent a systemic crisis that would result in much greater use of public funds in the long run. However, an effective regulatory system that incorporates the characteristics noted above, especially by ensuring a systemwide focus, should be better equipped to identify and mitigate problems before it becomes necessary to make decisions about whether to let a financial institution fail. An effective financial regulatory system should also strive to minimize systemic risks resulting from interrelationships between firms and limitations in market infrastructures that prevent the orderly unwinding of firms that fail. Another important consideration in minimizing taxpayer exposure is to ensure that financial institutions provided with a government guarantee that could result in taxpayer exposure are also subject to an appropriate level of regulatory oversight to fulfill their responsibilities. Key issues to be addressed: Identify safeguards that are most appropriate to prevent systemic crises while minimizing moral hazard. Consider how a financial system can most effectively minimize taxpayer exposure to losses related to financial instability. Finally, although significant changes may be required to modernize the U.S.
financial regulatory system, policymakers should consider carefully how best to implement the changes in such a way that the transition to a new structure does not hamper the functioning of the financial markets, individual financial institutions’ ability to conduct their activities, and consumers’ ability to access needed services. For example, if the changes require regulators or institutions to make systems changes, file registrations, or other activities that could require extensive time to complete, the changes could be implemented in phases with specific target dates around which the affected entities could formulate plans. In addition, our past work has identified certain critical factors that should be addressed to ensure that any large-scale transitions among government agencies are implemented successfully. Although all of these factors are likely important for a successful transformation for the financial regulatory system, Congress and existing agencies should pay particular attention to ensuring there are effective communication strategies so that all affected parties, including investors and consumers, clearly understand any changes being implemented. In addition, attention should be paid to developing a sound human capital strategy to ensure that any new or consolidated agencies are able to retain and attract additional quality staff during the transition period. Finally, policymakers should consider how best to retain and utilize the existing skills and knowledge base within agencies subject to changes as part of a transition. Chair Warren and Members of the Panel, I appreciate the opportunity to discuss these critically important issues and would be happy to answer any questions that you may have. Thank you. For further information on this testimony, please contact Orice M. Williams at (202) 512-8678 or williamso@gao.gov, or Richard J. Hillman at (202) 512-8678 or hillmanr@gao.gov. Financial Regulation: A Framework for Crafting and Assessing Proposals to Modernize the Outdated U.S. Financial Regulatory System. GAO-09-216. Washington, D.C.: January 8, 2009. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008. Hedge Funds: Regulators and Market Participants Are Taking Steps to Strengthen Market Discipline, but Continued Attention Is Needed. GAO-08-200. Washington, D.C.: January 24, 2008. Information on Recent Default and Foreclosure Trends for Home Mortgages and Associated Economic and Market Developments. GAO-08-78R. Washington, D.C.: October 16, 2007. Financial Regulation: Industry Trends Continue to Challenge the Federal Regulatory Structure. GAO-08-32. Washington, D.C.: October 12, 2007. Financial Market Regulation: Agencies Engaged in Consolidated Supervision Can Strengthen Performance Measurement and Collaboration. GAO-07-154. Washington, D.C.: March 15, 2007. Alternative Mortgage Products: Impact on Defaults Remains Unclear, but Disclosure of Risks to Borrowers Could Be Improved. GAO-06-1021. Washington, D.C.: September 19, 2006. Credit Cards: Increased Complexity in Rates and Fees Heightens Need for More Effective Disclosures to Consumers. GAO-06-929. Washington, D.C.: September 12, 2006. Financial Regulation: Industry Changes Prompt Need to Reconsider U.S. Regulatory Structure. GAO-05-61. Washington, D.C.: October 6, 2004. Consumer Protection: Federal and State Agencies Face Challenges in Combating Predatory Lending. GAO-04-280. 
Washington, D.C.: January 30, 2004. Long-Term Capital Management: Regulators Need to Focus Greater Attention on Systemic Risk. GAO/GGD-00-3. Washington, D.C.: October 29, 1999. Bank Oversight: Fundamental Principles for Modernizing the U.S. Structure. GAO/T-GGD-96-117. Washington, D.C.: May 2, 1996. Financial Derivatives: Actions Needed to Protect the Financial System. GAO/GGD-94-133. Washington, D.C.: May 18, 1994. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses GAO's January 8, 2009, report that provides a framework for modernizing the outdated U.S. financial regulatory system. GAO prepared this work under the authority of the Comptroller General to help policymakers weigh various regulatory reform proposals and consider ways in which the current regulatory system could be made more effective and efficient. This testimony (1) describes how regulation has evolved in banking, securities, thrifts, credit unions, futures, insurance, secondary mortgage markets and other important areas; (2) describes several key changes in financial markets and products in recent decades that have highlighted significant limitations and gaps in the existing regulatory system; and (3) presents an evaluation framework that can be used by Congress and others to shape potential regulatory reform efforts. The current U.S. financial regulatory system has relied on a fragmented and complex arrangement of federal and state regulators--put into place over the past 150 years--that has not kept pace with major developments in financial markets and products in recent decades. Today, almost a dozen federal regulatory agencies, numerous self-regulatory organizations, and hundreds of state financial regulatory agencies share responsibility for overseeing the financial services industry. As the nation finds itself in the midst of one of the worst financial crises ever, it has become apparent that the regulatory system is ill-suited to meet the nation's needs in the 21st century. Several key changes in financial markets and products in recent decades have highlighted significant limitations and gaps in the existing regulatory system. First, regulators have struggled, and often failed, to mitigate the systemic risks posed by large and interconnected financial conglomerates and to ensure they adequately manage their risks. Second, regulators have had to address problems in financial markets resulting from the activities of large and sometimes less-regulated market participants--such as nonbank mortgage lenders, hedge funds, and credit rating agencies--some of which play significant roles in today's financial markets. Third, the increasing prevalence of new and more complex investment products has challenged regulators and investors, and consumers have faced difficulty understanding new and increasingly complex retail mortgage and credit products. Fourth, standard setters for accounting and financial regulators have faced growing challenges in ensuring that accounting and audit standards appropriately respond to financial market developments, and in addressing challenges arising from the global convergence of accounting and auditing standards. Finally, as financial markets have become increasingly global, the current fragmented U.S. regulatory structure has complicated some efforts to coordinate internationally with other regulators. These significant developments have outpaced a fragmented and outdated regulatory structure, and, as a result, significant reforms to the U.S. regulatory system are critically and urgently needed. The current system has significant weaknesses that, if not addressed, will continue to expose the nation's financial system to serious risks. Our report offers a framework for crafting and evaluating regulatory reform proposals consisting of nine characteristics that should be reflected in any new regulatory system. 
By applying the elements of the framework, the relative strengths and weaknesses of any reform proposal should be better revealed, and policymakers should be able to focus on identifying trade-offs and balancing competing goals. Similarly, the framework could be used to craft proposals, or to identify aspects to be added to existing proposals to make them more effective and appropriate for addressing the limitations of the current system.
Like cocaine, heroin is produced outside the United States and is smuggled into the country illegally. Trafficking in both drugs has spawned several criminal industries, including money laundering, organized crime syndicates, and associated smuggling operations. Opium poppies, from which heroin is derived, are grown primarily in three regions of the world—Southeast Asia, Southwest Asia, and Latin America. Because heroin is produced in a variety of geographic regions, its trafficking routes are more geographically dispersed than those for cocaine. Unlike most South American cocaine organizations, heroin trafficking organizations are not vertically integrated, and heroin shipments rarely remain under the control of a single individual or organization as they move from the overseas refinery to the streets of the United States. The principal source of heroin consumed in the United States is Southeast Asia, and most of that heroin originates in one country—Burma. According to the Office of National Drug Control Policy (ONDCP), in fiscal year 1993, the United States spent an estimated $52.3 million, or about 10 percent of the international narcotics control budget, on international heroin control activities. In fiscal year 1994, ONDCP estimated the United States spent $47.6 million, or about 14 percent of the international narcotics control budget, on international heroin control activities. U.S. heroin control programs have the following general objectives: (1) assisting source countries in attacking opium production and heroin refining, trafficking, and use; (2) gaining greater access to opium-producing regions through bilateral and multilateral initiatives; (3) pooling U.S. intelligence resources to assist U.S. and foreign law enforcement agencies in targeting and arresting key leaders of major heroin trafficking organizations; and (4) reducing the flow of heroin into the United States. Current efforts focus on Southeast Asia because it is the primary source of heroin smuggled into the United States. ONDCP views heroin as a serious danger to the United States, a threat second only to cocaine. ONDCP reports that Americans consume an estimated 10 to 15 metric tons of heroin annually, an increase from the estimated 5 tons consumed each year in the mid-1980s. Heroin abuse has increased due to the wider availability of high-quality heroin at low retail or street prices. From 1987 to 1994, the estimated worldwide production of opium grew from 2,242 metric tons to 3,409 metric tons. The two leading source countries, Burma and Afghanistan, are responsible for much of this increase. For example, in 1994, Burma produced about 2,030 metric tons of opium, or about 60 percent of worldwide production. The Department of State estimates that this amount of opium could be refined into approximately 169 metric tons of heroin, enough to meet U.S. demand many times over. Although Burma’s 1994 production was limited by adverse weather conditions, a recent survey in Burma indicates a resurgence in production during the 1995 growing season that will approach record levels. Figure 1 shows recent worldwide trends in opium production in the primary source countries; figure 2 shows the primary opium poppy cultivation areas in Southeast Asia. In recent years, the purity of heroin available on U.S. streets has risen significantly, while prices have fallen. This combination is a key indicator of the increasing availability of heroin in the United States.
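The production figures cited above can be tied back to U.S. demand with simple arithmetic. The short Python sketch below is illustrative only: it uses the tonnages reported here and derives an opium-to-heroin conversion ratio implied by those figures; the report itself does not state a conversion factor, so that ratio should be read as an inference from the cited numbers rather than an official estimate.

```python
# Illustrative arithmetic based on figures cited in this report; the
# conversion ratio is derived from those figures, not an official factor.

worldwide_opium_1994_mt = 3409      # estimated worldwide opium production, 1994
burma_opium_1994_mt = 2030          # Burma's estimated opium production, 1994
burma_heroin_potential_mt = 169     # State Department estimate of refinable heroin
us_heroin_demand_mt = (10, 15)      # ONDCP estimate of annual U.S. consumption

burma_share = burma_opium_1994_mt / worldwide_opium_1994_mt
implied_conversion = burma_opium_1994_mt / burma_heroin_potential_mt
demand_multiples = [burma_heroin_potential_mt / d for d in us_heroin_demand_mt]

print(f"Burma's share of worldwide production: {burma_share:.0%}")        # ~60%
print(f"Implied opium-to-heroin ratio: {implied_conversion:.0f} : 1")     # ~12:1
print(f"Potential heroin vs. U.S. demand: {demand_multiples[1]:.0f}x to "
      f"{demand_multiples[0]:.0f}x")                                      # ~11x to 17x
```

The last figure makes the phrase "many times over" concrete: the heroin potentially refinable from Burma's 1994 crop alone is roughly 11 to 17 times ONDCP's estimate of annual U.S. consumption.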
In its August 1995 annual report, the National Narcotics Intelligence Consumers Committee stated that the nationwide average purity for retail heroin was 40 percent in 1994, a dramatic increase from the single-digit purity levels of the mid-1980s and much higher than the 26.6-percent purity level reported in 1991. In New York City, the largest importation and distribution center in the United States for Southeast Asian heroin, average purity levels have risen from 34 percent in 1988 to 63 percent in mid-1994. This rise in overall purity levels has been attributed to the increased availability of high-quality Southeast Asian and South American heroin. While purity levels have risen, heroin prices have fallen to their lowest levels ever, according to ONDCP. For example, DEA reports that heroin prices in New York City dropped from $1.81 per milligram in 1988 to $0.37 by mid-1994. U.S. counternarcotics officials believe heroin’s greater availability is allowing increased experimentation with a highly addictive drug. Moreover, the higher purity levels permit users to ingest heroin through nasal inhalation versus injection with hypodermic syringes. Users find inhalation attractive because it is easier than injection, and they can avoid contracting the diseases associated with using needles. The U.S. heroin user population may be increasing in response to the increased availability of heroin. ONDCP estimates there are up to 600,000 hardcore heroin addicts in the United States. While there is no evidence suggesting there is an epidemic of new users, reports indicate that the heroin user population may be gradually increasing. Much of this increase is among drug users whose prime drug of abuse is not heroin. ONDCP reports that this link is especially strong for long-term users of “crack” cocaine, who use heroin to counter the depressive effects of withdrawal from cocaine use. Furthermore, data on heroin-related emergency room visits show that the problems associated with long-term heroin use are also on the rise. For example, the annual number of emergency room episodes involving heroin increased from 42,000 in 1989 to almost 63,000 in 1993, a 50-percent increase. According to the Substance Abuse and Mental Health Services Administration, emergency room admissions for heroin abuse in Baltimore alone increased 364 percent from 1989 to 1993. The U.S. international heroin strategy, signed by the President on November 21, 1995, calls for a regional approach focused on Southeast Asia and the need to reduce opium production in Burma to stop the flow of heroin into the United States. The objectives of the new strategy remain similar to the earlier objectives. The implementation of the Burma portion of the strategy relies on the development of counternarcotics dialogue with Burmese authorities, exchange of counternarcotics information, in-country counternarcotics training, and continued support for UNDCP efforts. Implementation guidelines for the new strategy are currently under review and it is not clear at this point to what extent resources will be dedicated to support the strategy. As noted in the strategy, Burma remains the key to successful regional heroin control efforts, due to its status as the world’s leading heroin producer. 
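The New York City purity and price figures above can also be combined into a purity-adjusted price, that is, the cost per pure milligram, which is a common way of expressing effective availability. The sketch below is illustrative only; it simply applies the percentages and prices cited above, and the purity-adjusted metric itself is not computed in this report.

```python
# Illustrative calculations from the New York City figures cited above.
price_1988, purity_1988 = 1.81, 0.34   # $ per milligram and average purity, 1988
price_1994, purity_1994 = 0.37, 0.63   # $ per milligram and average purity, mid-1994

price_per_pure_mg_1988 = price_1988 / purity_1988   # ~ $5.32
price_per_pure_mg_1994 = price_1994 / purity_1994   # ~ $0.59
decline = 1 - price_per_pure_mg_1994 / price_per_pure_mg_1988

print(f"Price per pure milligram fell from ${price_per_pure_mg_1988:.2f} "
      f"to ${price_per_pure_mg_1994:.2f}, a {decline:.0%} decline.")

# Emergency room episodes involving heroin, 1989 vs. 1993.
er_1989, er_1993 = 42_000, 63_000
print(f"ER episodes rose {er_1993 / er_1989 - 1:.0%}.")   # 50%
```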
However, the United States does not provide significant counternarcotics assistance to Burma because of the Burmese government’s record of human rights abuses, and the Burmese military dictatorship is not equipped to address the ethnic disputes that impede development of an effective regional program. Moreover, difficulties in tracking and interdicting heroin-trafficking organizations have limited the effectiveness of international law enforcement efforts against the criminal organizations responsible for moving the drug from Southeast Asia into the United States. In addition, poor law enforcement cooperation between the United States and China demonstrates the difficulties in interdicting key heroin-trafficking routes. Despite these obstacles, U.S. efforts have achieved some positive results in countries or territories with sufficient will to implement counternarcotics activities, such as Thailand and Hong Kong. The key to effective U.S. heroin control efforts in Southeast Asia is stopping the flow of Burmese heroin into the United States. In 1994, Burma accounted for about 87 percent of the opium cultivated in Southeast Asia and approximately 94 percent of the opium production in the region. Most of the heroin smuggled into the United States originates in Burma’s eastern Shan State. Unless the United States addresses opium poppy cultivation and production in Burma, U.S. regional heroin control efforts will have only a marginal impact. However, several factors create substantial difficulties in establishing effective programs in Burma. U.S. policy toward Burma prohibits providing significant levels of counternarcotics assistance until the Burmese government improves its human rights stance and recognizes the democratic process. In addition, the Burmese government does not control the majority of opium cultivation areas within its borders and has not seriously pursued opium reduction efforts on its own. Moreover, ethnic insurgent armies that control most of the opium cultivation and heroin-trafficking areas are reliant on proceeds from the drug trade and are unlikely to relinquish this source of income under the current Burmese government. In response to Burmese government human rights abuses and unwillingness to restore democratic government, the United States has terminated almost all counternarcotics assistance. In 1988, the Burmese military violently suppressed antigovernment demonstrations for economic and political reform and began establishing a record of human rights abuses, including politically motivated arrests, torture, and forced labor and relocations. In 1990, the Burmese people voted to replace the government in national elections, but the military regime refused to recognize the results and remained in power. Further, for decades, the Burmese government has engaged in fighting with insurgent armies representing ethnic minority groups who want autonomous control of territory they occupy within Burma’s borders. Some of these groups, particularly the Wa people of Burma’s eastern Shan State, control major opium production and heroin trafficking areas and have fought successfully to maintain their independence from the central government. Over the past 8 years, the military regime has consolidated its control and virtually eliminated any threat to its power in Rangoon. In 1988, the United States discontinued foreign aid to Burma in response to concerns over human rights abuses by the Burmese government. U.S.
assistance had supported the Burmese government’s opium poppy eradication program during fiscal years 1974 through 1988. In response to the Burmese government’s insufficient efforts to address increasing opium production and heroin-trafficking within its borders, the President has denied certification for counternarcotics cooperation since 1989. While the United States does not provide direct counternarcotics funding support, limited U.S. assistance has continued through low-level counternarcotics cooperation between Burmese law enforcement authorities and DEA. For example, DEA shares drug intelligence with the Burmese police on a case-by-case basis and conducted a basic drug enforcement training seminar in December 1994. In August 1995, a training course was offered to Burmese law enforcement officials on customs screening and interdiction techniques. These activities are closely monitored by the U.S. embassy in Rangoon to ensure the Burmese government does not interpret the cooperation as a sign that the United States is deemphasizing its policy priorities of furthering human rights and democratization. Although law enforcement cooperation is needed to upgrade a poorly equipped and trained Burmese police force and establish information sharing, U.S. counternarcotics officials believe that the key to stopping the flow of Burmese heroin into the United States is through crop eradication and alternative development support. More importantly, because of the complex Burmese political environment, U.S. assistance is unlikely to be effective until the Burmese government demonstrates improvement in its democracy and human rights policies and proves its legitimacy to ethnic minority groups in opium producing areas. In October 1995, the Assistant Secretary of State for International Narcotics and Law Enforcement Affairs stated that in the long run, an accountable Burmese government that enjoys legitimacy in opium-growing areas will be more willing and able to crack down on the drug trade. In furthering its consolidation of power, the Burmese government has also furthered opium production and heroin-trafficking activities through cease-fire agreements it has signed with some ethnic insurgent armies. According to the Department of State, in 1989, the Burmese government reached a cease-fire agreement with the United Wa State Army (UWSA), which controls 80 percent of the opium cultivation areas in Burma. In the cease-fire, UWSA agreed to stop its armed insurgency against the government in exchange for government acquiescence to Wa control of Wa territory. According to the Department of State, the agreement also stipulated that the Wa would give up their participation in the drug trade and that the Burmese government would provide developmental support to assist the Wa in raising their standard of living. Other minority groups in opium poppy cultivation areas, such as the Kokang, have reached similar accommodations with the Burmese government. The Burmese government and UWSA have done little to pursue counternarcotics initiatives. For example, the government discontinued its aerial eradication program with the cutoff of U.S. assistance in 1988 and has only conducted limited eradication efforts in areas under its control since that time. In September 1994, the government proposed an 11-year plan for developmental assistance that also included crop eradication in cultivation areas. 
However, according to the Department of State, the plan does not provide details on how eradication will occur, and the government lacks adequate resources to support its proposal. Since 1988, opium production has nearly doubled in Burma, and UWSA has become one of the world’s leading heroin-trafficking organizations. With a force of 15,000 troops, it provides security for Wa territory while controlling up to 80 percent of Burma’s opium crop. UWSA relies on the proceeds from its extensive involvement in the drug trade to fund procurement of munitions and equipment. UWSA is involved in heroin refining and maintains contact with an extensive international drug-trafficking infrastructure to move its heroin out of Burma and into foreign markets. While elements of the Wa political leadership have recently proposed relinquishing participation in opium poppy cultivation and heroin trafficking in exchange for direct developmental assistance from the United States and other potential donors, it is questionable whether UWSA leadership would seriously consider doing so. Such a decision would mean giving up the major funding source that allows it to maintain its army and protect the Wa people from potential renewed aggression from the Burmese government. To equip and maintain its military force, UWSA depends on funds generated from taxes on opium that Wa farmers cultivate and produce. Without these tax revenues, UWSA would have serious funding problems. UWSA has no incentive to reduce its size or end its involvement in opium trafficking until (1) alternative sources of income are found to replace opium-generated revenues or (2) the threat of Burmese government aggression is diminished or removed. Neither of these possibilities appears likely to happen. The Burmese government has been in armed conflict with another major heroin-trafficking organization operating within its borders—the Shan United Army (SUA) located in the Shan State on Burma’s border with Thailand. SUA has a force of about 10,000 soldiers to defend extensive heroin-refining facilities and drug-trafficking routes into Thailand, Laos, and Cambodia. While SUA claims to be fighting for Shan State independence, until recently, the Burmese government has chosen not to accommodate this group as it has done with other ethnic minority groups. Instead, the government increased military efforts against SUA in late 1993. The conflict has caused significant casualties on both sides and disrupted SUA drug-trafficking and -refining operations. Despite these successes, the operations have had limited impact on the flow of drugs out of Burma. According to Department of State officials, in January 1996, the Burmese army and SUA ended their armed conflict in accordance with a recent cease-fire agreement. The cease-fire will cause temporary disruptions in SUA’s narcotics trafficking operations, but it is difficult to determine the long-term effects of the agreement on the flow of Burmese heroin. According to DEA, each heroin producing region has separate and distinct distribution methods that are highly dependent on ethnic groups, transportation modes, and surrounding transit countries. These factors combine to make the detection, monitoring, and interdiction of heroin extremely difficult. Heroin-trafficking organizations are not vertically integrated, and heroin shipments rarely remain under the control of a single individual or organization as they move from the overseas refinery to the streets of the United States. 
These organizations consist of separate producers and a number of independent intermediaries such as financiers, brokers, exporters, importers, and distributors. Since responsibility and ownership of a particular drug shipment shifts each time the product changes hands, direct evidence of the relationship between producer, transporter, and wholesale distributor is extremely difficult to obtain. From Southeast Asia, heroin is transported to the United States primarily by ethnic Chinese and West African drug-trafficking groups. According to DEA, the ethnic Chinese groups are capable of moving multi-hundred kilogram shipments, while the West African groups usually smuggle heroin in smaller quantities. Generally, the shipment size determines the smuggling method. The larger shipments, ranging from 50 to multi-hundred kilogram quantities, are secreted in containerized freight aboard commercial maritime vessels and air freight cargo. Smaller shipments are concealed in the luggage of airline passengers, strapped to the body, or swallowed. The impact of U.S. efforts to interdict regional drug-trafficking routes has been limited by the ability of traffickers to shift their routes into countries with inadequate law enforcement capability. For example, Thailand’s well-developed transportation system formerly made it the traditional transit route for about 80 percent of the heroin moving out of Southeast Asia. However, in response to increased Thai counternarcotics capability and stricter border controls, this amount has declined to 50 percent in recent years as new drug-trafficking routes have emerged through the southern provinces of China to Taiwan and Hong Kong or through Laos, Cambodia, and Vietnam (see fig. 3). Similarly, cooperation between U.S. and Hong Kong law enforcement authorities has helped reduce the use of Hong Kong as a transshipment point for Southeast Asian heroin, but law enforcement weaknesses in China and Taiwan have encouraged drug traffickers to shift supply routes into these countries. Until law enforcement efforts aimed at heroin-trafficking organizations and drug-trafficking routes can be coordinated regionally, the flow of Southeast Asian heroin to the United States will likely continue unabated. Inadequate Chinese cooperation with U.S. law enforcement also limits the impact of regional U.S. heroin control efforts. DEA has identified a substantial increase in the use of drug-trafficking routes for Burmese heroin through China and believes that closer interaction with Chinese law enforcement authorities is essential. DEA has attempted to increase drug intelligence sharing with Chinese authorities and has conducted a number of law enforcement training seminars to (1) develop better information about trafficking methods and routes, (2) augment the number of arrests and seizures, and (3) enhance Chinese police capabilities. However, according to DEA officials, Chinese cooperation has been reluctant and limited. For example, the Chinese government requires that DEA funnel all communications through a single point of contact at the Ministry of Public Security in Beijing before dissemination to local provincial police units for action. The resulting delay slows dispersal of counternarcotics intelligence, thus making it difficult to undertake joint investigations and make timely arrests and seizures in China. 
Further, DEA has had difficulty measuring the usefulness of the information it provides to Chinese authorities because the Chinese do not provide feedback on whether it has proven accurate. This lack of responsiveness may be attributed, at the local level, to insufficient manpower and to the lack of sophisticated computer and communications equipment. Despite the lack of communication, DEA officials believe Chinese authorities have made some arrests and seizures based on DEA-provided information. Finally, the Ministry of Public Security has not shared information about its independent interdiction efforts, arrests, and prosecutions, or any counternarcotics intelligence it has developed that could possibly assist DEA investigations. Furthermore, it is possible that the 1997 transition of Hong Kong from British to Chinese control will complicate U.S. counternarcotics activities in the region. The four-person DEA office in Hong Kong is currently responsible for covering counternarcotics activity in Hong Kong, China, Taiwan, and Macau. However, after the 1997 transition, DEA will be required to cover China from an office at the U.S. embassy in Beijing. While the State Department has approved the opening of a two-person DEA office at the embassy (one special agent and one administrative assistant), it is still unclear when the positions will be filled and the degree of movement that will be afforded DEA personnel within China. Also, the Chinese government is unlikely to approve continued regional coverage of Taiwan from Hong Kong or the office in Beijing. As a result, DEA’s ability to assist other countries in the region in interdicting heroin-trafficking routes opened through southern China and Taiwan may be constrained greatly. While the impact of U.S. heroin control efforts on a regional level in Southeast Asia has been limited, some U.S. counternarcotics assistance programs in countries that possess the political will and capability to engage in counternarcotics activities have achieved positive results. In Thailand, for example, we found that sustained U.S. support since the early 1970s and good relations with the Thai government have contributed to abatement of opium production and heroin trafficking. Examples of effective U.S. counternarcotics activities in Thailand include the following: Through $16.5 million in Department of State supported efforts since 1978, the Thai government has reduced opium production levels from an estimated 150 to 200 metric tons in the 1970s to 17 metric tons in 1994. As a result, Thai traffickers no longer produce significant amounts of heroin for export. Successful law enforcement training programs funded by the Department of State, and support for Thai counternarcotics institutions provided primarily by DEA, have enhanced Thailand’s drug law enforcement capability. For example, using U.S. assistance, the Thai police captured 10 key members of Burma’s SUA heroin-trafficking organization in November 1994. The United States also has provided support for the establishment of a task force in northern Thailand that should foster intelligence analysis and information sharing among Thai counternarcotics police organizations. According to U.S. embassy officials, U.S. assistance has helped Thailand assume a leadership role in regional heroin control efforts. For example, in 1994, the Thai government implemented tighter controls at checkpoints on the Burma border. This ongoing effort has restricted heroin-trafficking routes into northern Thailand that SUA uses. 
The Thai police also have sponsored drug law enforcement training for other countries in the region. In Hong Kong, the professionalism of the Hong Kong police and the absence of drug cultivation limit the need for U.S. counternarcotics assistance, which, to date, has focused on law enforcement support from DEA. The sharing of DEA intelligence with Hong Kong law enforcement authorities has resulted in the seizure of heroin shipments destined for the United States and the capture of major drug traffickers. The U.S. and Hong Kong governments also have worked closely to arrange extraditions of drug traffickers to the United States for trial. Moreover, according to DEA, Hong Kong has enacted legislation that has enhanced counternarcotics cooperation with the United States. For example, a 1989 law allows the Hong Kong police, pursuant to confiscation orders, to seize assets of convicted drug offenders. A bilateral agreement also permits seized assets to be shared between Hong Kong and the United States. As of August 1995, Hong Kong had frozen or confiscated approximately $54 million in drug traffickers’ assets under this agreement. Of this amount, the seizure of at least $26 million in assets was based on information that U.S. law enforcement agencies provided. A key element of U.S. heroin control efforts is the increasing reliance the United States places on international organizations, such as the United Nations, in countries where the United States faces significant obstacles in providing traditional bilateral counternarcotics assistance. In Burma, the United States has been a major donor for UNDCP drug control projects, providing about $2.5 million dollars from fiscal years 1992 through 1994. However, we found that the projects have not significantly reduced opium production because (1) the scope of the projects has been too small to have a substantive impact on opium production, (2) the Burmese government has not provided sufficient support to ensure project success, and (3) inadequate planning has reduced project effectiveness. UNDCP’s project in Burma to reduce opium production created small “opium-free zones” in certain areas of Wa territory. According to U.S. government and other officials, the opium-free zones are merely demonstration projects; they will have no substantive impact on opium production. The zones are located typically along roadways where it is easy to verify that opium is not being cultivated. However, the officials told us that the farmers simply move their planting sites to other areas, usually ones that are in more remote areas. Further, UNDCP projects have not significantly reduced opium production because of a lack of significant voluntary or forcible eradication. UNDCP has also experienced difficulties in obtaining sufficient Burmese government support for its projects in the Wa territory, which has reduced their effectiveness. As part of the project agreements, the Burmese government stated it would provide in-kind resources to support UNDCP activities. However, UNDCP officials told us that the Burmese government did not furnish the necessary civil engineering personnel or basic commodities, such as fuel, that it had committed to supply. As a result, UNDCP had to hire outside people at additional cost. In addition, the Burmese government has not always cooperated in granting UNDCP worker access to the project areas. Additionally, inadequate planning has reduced project effectiveness. 
For example, according to UNDCP officials, aerial surveys of areas designated for opium poppy crop reduction were not conducted until March 1995, 18 months after the projects began. As a result, it will not be possible to evaluate accurately the effectiveness of the supply reduction projects because UNDCP did not establish any baseline data at the outset. Further, the projects lacked measurable benchmarks, such as timetables for eliminating opium poppy fields, and plans were not developed to follow up on eradication efforts to ensure that opium poppy cultivation had not resumed in areas where opium poppy plants were destroyed. Despite these problems, U.S. counternarcotics officials believe that UNDCP projects offer the only alternatives to U.S.-funded opium poppy crop eradication and alternative development programs in Burma at the present time. Further, the projects are allowing UNDCP access to the Wa. This access could prove useful if the political environment within Burma changes and creates new opportunities for implementing drug control efforts. In fact, UNDCP is expanding its current efforts, with a 5-year, $22 million project that will include a supply reduction component. U.S. and UNDCP officials told us that the supply reduction component will provide for aerial surveys to determine cultivation levels and establish a baseline to measure progress during the life of the project. Further, these officials believe that the project should include measurable benchmarks for reduction of opium poppy cultivation in designated areas to ensure that successful eradication is taking place as well as provisions to ensure that UNDCP workers have easy access to project areas. According to a Department of State official, the United States plans to provide additional funding over a 5-year period to increase UNDCP efforts in the region, but the exact amount is still under consideration. However, it is doubtful, for reasons already stated, that these projects will significantly reduce opium production. ONDCP stated that the report provided an excellent analysis as to why heroin control is a major foreign policy objective of the United States and presents an accurate portrayal of the current worldwide heroin-trafficking situation. (See app.II for ONDCP comments.) ONDCP stated that heroin control is a vital national security interest and that the U.S. government has to work with undemocratic governments such as Burma, Afghanistan, China, and Syria in furtherance of international narcotics control. The Department of State stated that ethnic insurgent armies are unlikely to relinquish drug income under any Burmese government absent strong and effective law enforcement efforts and these efforts may require large-scale sustained military operations. (See app. III for Department of State comments.) Both the Department of State and ONDCP noted that congressional pressure has constrained the U.S. counternarcotics effort and recently passed legislation further restricts what the United States could do in Burma. ONDCP, the Department of State, and DEA (see app. IV for DEA comments) provided updated information on an agreement between the SUA and the Burmese authorities that is, according to the Department of State, likely to allow SUA to continue its narcotics-related activities. We recognize that the U.S government may at times have to deal with undemocratic governments. 
However, in our review, the issue in heroin drug trafficking is how effective alternative development, law enforcement training, and intelligence-sharing activities can be with the current Burmese government. As noted in our report, the current Burmese government does not control most of the opium poppy growing regions, is unlikely to obtain international support for either large-scale alternative development or sustained military campaigns against ethnic armies, and has entered into truce agreements with ethnic groups allowing them to continue narcotics-related activities. With regard to congressional pressure and recently passed legislation, it should be noted that both the Clinton and Bush administrations made policy decisions not to provide additional assistance to the Burmese government in response to its anti-democratic policies and human rights abuses. It is unclear what can be accomplished with assistance to a government that is either unwilling or unable to take effective action against those ethnic groups responsible for opium poppy cultivation and heroin production. We have attached more detailed comments in appendixes II through IV. We conducted our review from February 1995 through January 1996 in accordance with generally accepted government auditing standards. The scope and methodology for our review is discussed in appendix V. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretaries of State and Defense; the Administrator, Drug Enforcement Administration; the Director, Office of National Drug Control Policy; and other interested congressional committees. Copies will also be made available to other interested parties upon request. If you or your staff have any questions concerning this report, I can be reached on (202) 512-4268. The major contributors to this report are listed in appendix VI. In 1993, Burma’s ethnic Wa people proposed to the international community that the Wa people would cease opium production if they were to receive direct assistance during a transitional period in which they would attempt to move away from using opium production as their primary source of income. We examined the proposal and found that the feasibility of providing direct assistance to the Wa people is limited. Numerous obstacles would hinder the implementation and monitoring of assistance programs. These obstacles include (1) U.S. legislation and policy that restrict U.S. government involvement in Burma; (2) opposition by the government of Burma; and (3) opposition by the United Wa State Army (UWSA), which controls the territory occupied by the Wa people. Moreover, the ability to overcome these obstacles will be limited until the government of Burma has access to all areas, including those that ethnic insurgents control. In addition, the United States is currently funding counternarcotics efforts of the United Nations International Drug Control Program (UNDCP) in Burma. However, according to numerous officials, UNDCP’s efforts in Burma are merely showpieces. They have not had, and will not have, a substantive impact on reducing opium poppy cultivation and heroin production because (1) they are small programs relative to the large size of the problem, (2) the government of Burma does not have access to many areas in which opium is cultivated, and, (3) UWSA would not allow UNDCP to reduce opium production substantially. 
The Wa people are an ethnic minority group of about 1 million poor subsistence farmers living in an isolated, mountainous area of eastern Burma—a Southeast Asian nation of about 35 million people that is slightly smaller than the state of Texas. The current regime, known as the State Law and Order Restoration Council (SLORC), is comprised mostly of ethnic Burmans and has been largely unsuccessful in its efforts to overcome the Wa insurgency. SLORC has had no control over Wa territory since 1989, when it abdicated its governance after years of fighting and signed a cease-fire agreement with Wa leaders. This enabled the Wa people to openly cultivate opium poppies with no government interference. Many Wa farmers cultivate opium poppies and sell their harvest to drug traffickers. In recent years, opium grown in Wa territory has increased dramatically to the point that, currently, the Wa opium crop is the largest in the world. The Wa people have cultivated opium poppies for generations. Since the signing of the cease-fire with SLORC, however, the Wa have substantially augmented opium production. Specifically, in 1995, over 85 percent of opium poppy cultivation in Southeast Asia occurred in Burma, and cultivation in Wa territory accounted for over 80 percent of Burma’s cultivation. Despite the increase, however, Wa farmers have experienced little, if any, change in their economic status because Wa leaders strongly encourage them to grow opium poppies, levy taxes on their harvest, and use the tax revenues to support UWSA. Little, if any, tax revenue has been used for badly needed development. Elements of UWSA are comprised of many of the fighting forces of the former Communist Party of Burma (CPB). For many years, Communist China supported CPB, including providing (1) food, mainly rice, that enabled the Wa people to maintain a subsistence existence with little dependence on cash generated from opium cultivation and (2) military equipment that had enabled the Wa people to successfully defend Wa territory against SLORC. However, following the collapse of communism worldwide and the subsequent withdrawal of support for the CPB by Communist China, UWSA was formed. UWSA relies on funds derived from opium trafficking to buy arms and support its forces. The withdrawal of support from Communist China, combined with the SLORC’s unfulfilled promises of development assistance, has resulted in hardships for many of the Wa’s subsistence farmers. U.S. legislation and policy restrict the level of assistance to the government of Burma. The restrictions are based largely on the SLORC’s (1) insufficient progress in stopping opium cultivation and heroin trafficking within its borders, (2) record of human rights violations, and (3) refusal to install a democratically elected government. Before SLORC took over the government of Burma, the United States was supporting counternarcotics activities in Burma. However, we reported in September 1989 that, “eradication and enforcement efforts are unlikely to significantly reduce Burma’s opium production unless they are combined with economic development in the growing regions and the political settlement of Burma’s ethnic insurgencies.” Regardless of the U.S. position, SLORC is nonetheless the recognized government of Burma, and Wa territory is considered to be part of Burma. As such, bilateral U.S. assistance to the Wa people would require the SLORC’s knowledge and consent. However, according to U.S. government officials, SLORC would strongly oppose direct U.S. 
assistance to the Wa people. The officials stated that SLORC would react with anger and regard such direct assistance as a violation of their sovereignty. Furthermore, because of U.S. policy, which strongly criticizes Burma’s human rights violations and SLORC’s refusal to install a democratically elected government, U.S. counternarcotics assistance efforts in Burma are nearly nonexistent. Because of the common border between Burma and China, U.S. assistance to the Wa people could be provided directly into Wa territory through a cross-border program from China’s Yunnan Province, which borders Wa territory. The provision of assistance through China would require the approval of the government of China. However, according to U.S. government officials, the Chinese would strongly disapprove of such involvement for several reasons. One of these reasons is that the United States has not returned a Chinese drug trafficker witness to China after the Chinese government released him to U.S. law enforcement officials for testimony in a U.S. domestic drug case. U.S. officials want to return him but cannot until his appeal for asylum in a U.S. court is resolved. In addition, U.S. government officials stated that it is unlikely that China would allow the U.S. government or nongovernmental organizations’ officials to implement programs from a base of operations in China. Wa territory shares no common border with Thailand, and any attempt to assist the Wa people through Thailand would involve operating in the southern Shan State area of Burma, which is not under SLORC control. However, U.S. government officials told us that the government of Thailand would not be willing to risk its sensitive relations with SLORC by permitting cross-border counternarcotics assistance to the Wa people through Thailand. In 1993, the Wa people proposed to the international community that they would cease opium production in exchange for receiving economic and development assistance while the Wa people transitioned from an opium-based economy to one based on other sources of income. According to U.S. officials, however, the proposal is not a genuine offer because UWSA, a drug-trafficking army, which has almost complete authority and control over the people within Wa territory, would not agree to participate in stopping opium cultivation and production from taking place. Without UWSA consent, the proposal could not be implemented. As such, the proposal has not been acted upon. For decades, there was considerable fighting between Burmese government military forces and CPB, many of whose members were Wa. In 1989, the two parties agreed to a 10-year cease-fire. The autonomy provided in the agreement has had the effect of allowing the Wa people to cultivate and process opium without SLORC interference. The agreement also includes a SLORC commitment to provide development assistance in Wa territory. In exchange, the Wa people agreed to halt their active insurgency against SLORC. However, because of the long-standing dislike and distrust between SLORC and Wa, both parties have undertaken a large-scale and costly arms buildup. In order to equip and maintain its military force, UWSA depends on funds generated from taxes on opium that is produced by Wa farmers and from taxes on heroin refining. Without these tax revenues, UWSA would have serious funding problems. Since 1989, opium production in Wa territory has more than doubled at the encouragement of UWSA in order to support UWSA forces. 
UWSA has no incentive to reduce its size or end its involvement in heroin trafficking until alternative sources of income are found to replace drug-generated revenues or the threat of SLORC aggression is diminished or removed. Neither of these possibilities appears likely at the present time. The following are GAO’s comments on ONDCP’s letter dated January 25, 1996. 1. We have made appropriate technical changes and the report has been updated to reflect recent developments in Burma. 2. The political realities included the Burmese government’s desire to reach accommodation with ethnic minorities. As part of this strategy, the Burmese government entered into a truce agreement with the Wa and other ethnic minority groups that controlled most of the opium poppy cultivation regions in Burma. These factors, as well as the limited resources of the Burmese government are fully discussed in this report. 3. While the Burmese government has recently entered a cease-fire agreement with a prominent armed drug-trafficking group, the Shan United Army (SUA), it is still unclear whether this will significantly affect the heroin trade in Burma or whether other groups like the Wa will assume control of SUA production and trafficking activities. Moreover, the Burmese government does not control Wa territory, the location of 80 percent of opium poppy cultivation in Burma. Furthermore, we agree that unless the Burmese government has the economic capability to foster alternative means of livelihood, it is doubtful that gaining control will, in and of itself, significantly reduce opium poppy cultivation areas. 4. The Burmese government has not made a commitment to end the drug trade and economic factors alone were not responsible for this lack of government commitment. Over the past 8 years, the primary political objective of the Burmese government was to consolidate its power in Rangoon. To accomplish this consolidation, it entered into truce agreements with ethnic minority groups responsible for opium cultivation and production resulting in the doubling of opium production. 5. Even though ONDCP states this, the U.S. government continues to support an expanded UNDCP opium drug reduction program. 6. This report and appendix I provides a detailed discussion on the feasibility of providing direct U.S. assistance to the Wa people. The following are GAO’s comments on the Department of State’s letter dated January 23, 1996. 1. We have made appropriate technical changes to the report and updated the section discussing SUA to reflect the recent cease-fire agreement between the SUA and Burmese authorities. 2. The reference to decertification has been deleted from the final report. We have changed the report to note that executive policy emphasizing human rights concerns and the Burmese government’s failure to recognize the democratic process were the reasons for eliminating direct U.S. counternarcotics funding. 3. We understand that this issue is very complex and involves the willingness of the United States to provide assistance to the Burmese government and the reaction that various elements of the Wa leadership would have to a central government that improved its human rights practices. Also, the Department of State appears to be modifying the position it took in testimony before Congress in July 1995 when it stated that the United States will be in a stronger position to make real gains at reducing the Southeast Asian heroin threat if there is progress on U.S. human rights and democracy concerns. 4. 
While the Burmese government and UWSA have reached a cease-fire agreement, the long-standing dislike and distrust between the Burmese government and Wa has resulted in both parties undertaking a large-scale and costly arms build-up. It is doubtful that the current regime will ever be able to convince ethnic minorities that their autonomy will be secure without having their own military capability to deter Burmese government aggression. While a democratically elected government also poses a potential threat to autonomy of ethnic groups, it may stand a better chance to reach a peaceful accommodation with the Wa military, especially if it offers economic incentives supported by the international community. 5. The point of this section is not to describe Chinese counternarcotics law enforcement efforts, but to outline how their lack of cooperation in this area affects U.S. heroin control objectives in the region. Bilateral law enforcement cooperation, including counternarcotics intelligence information sharing, is a key element of U.S. efforts. Without improvements in cooperation, DEA will encounter significant obstacles in interdicting important heroin-trafficking routes in southern China and assisting the Chinese in improving their counternarcotics law enforcement capability. The following is GAO’s comment on DEA’s letter dated January 24, 1996. 1. We have made appropriate technical changes to the report. We have also made changes regarding recent developments in Burma based on discussions with Department of State officials. To obtain information for this report, we spoke with appropriate officials and obtained documents in Washington, D.C., from ONDCP, DEA, and the Departments of State and Defense. We also discussed counternarcotics issues with officials of several non-governmental organizations and a representative of Burma’s Wa people. At the Joint Interagency Task Force-West in Alameda, California, we collected information on Department of Defense support for U.S. counternarcotics efforts in Southeast Asia. At the U.S. embassy in Bangkok, Thailand, we interviewed the Ambassador; Deputy Chief of Mission; and responsible officials from the Narcotics Affairs, Political, Economic, and Consular Sections; the Defense Attache Office; DEA; the Federal Bureau of Investigation; the Immigration and Naturalization Service; the U.S. Customs Service; the Agency for International Development; and the United States Information Service. To examine and evaluate U.S. heroin control efforts, we reviewed documents prepared by U.S. embassy personnel and supplemented the information in interviews with U.S. officials. We also met with the Consul General and DEA attache at the U.S. consulate in Chiang Mai. To obtain the views of the Thai government, we spoke with officials from Thai counternarcotics agencies, including the Office of the Narcotics Control Board and the Royal Thai Police Narcotics Suppression Bureau. To discuss multilateral drug control efforts in Southeast Asia, we met with officials from the UNDCP’s regional office in Bangkok. We also discussed these issues with officials at the Australian and British embassies in Bangkok. At the U.S. embassy in Rangoon, Burma, we interviewed the Charge d’ Affaires, the Deputy Chief of Mission, and responsible officials from the Political Section, the Defense Attache Office, DEA, and the United States Information Service. To examine and evaluate U.S. heroin control efforts, we reviewed documents prepared by U.S. 
embassy personnel and supplemented the information in interviews with U.S. officials. We also discussed the status of multilateral projects in Burma with appropriate UNDCP officials. Finally, we met with officials at the Australian and Japanese embassies in Rangoon to discuss their counternarcotics programs. At the U.S. consulate in Hong Kong, we interviewed the Consul General, the Deputy Principal Officer, and responsible officials from the Political and Consular Affairs Sections, the Defense Liaison Office, DEA, the Federal Bureau of Investigation, the Immigration and Naturalization Service, and the U.S. Customs Service. To examine and evaluate U.S. heroin control efforts, we reviewed documents prepared by U.S. embassy personnel and supplemented the information in interviews with U.S. officials. We also met with officials of the Royal Hong Kong Police and the Hong Kong Customs and Excise Department to discuss their heroin interdiction and anti-money laundering activities. We provided a draft of this report to officials from the Departments of State and Defense, the Drug Enforcement Administration, and the Office of National Drug Control Policy and discussed it with them. The Department of State, ONDCP, and DEA provided formal written comments. The Department of Defense did not provide written comments but fully concurred with our findings. Major contributors to this report (appendix VI): Louis Zanardi, Allen Fleener, Dennis Richards, George A. Taylor, Daniel J. Tikvart, and Steven K. Westley.
Pursuant to a congressional request, GAO reviewed U.S. efforts to prevent heroin trafficking, focusing on: (1) the extent to which heroin poses a threat to the United States; (2) impediments to heroin control efforts in Southeast Asia; and (3) the United Nations Drug Control Program's (UNDCP) effectiveness in Burma. GAO found that: (1) while heroin is not the primary illegal narcotic in use in the United States, heroin production, trafficking, and consumption are growing threats; (2) since the late 1980s, worldwide production of opium has nearly doubled, and U.S. emergency room episodes involving heroin have increased by 50 percent; (3) although U.S. heroin control programs in Southeast Asian countries other than Burma have had some limited success, U.S. efforts have not reduced the flow of heroin from the region because producers and traffickers shift transportation routes and growing areas into countries with inadequate law enforcement capability or political will; (4) in 1994, Burma accounted for about 87 percent of the opium cultivated in Southeast Asia and approximately 94 percent of the opium production in the region; thus, a key to stopping the flow of heroin from Southeast Asia is addressing opium production in Burma; and (5) there are several reasons why achieving this objective will be difficult: (a) since 1988, the U.S. has not provided eradication assistance to the Burmese government because it violently suppressed a pro-democracy movement, began establishing a record of human rights abuses, and refused to recognize the results of the 1990 national elections that would have removed the military government from power; (b) because of the complex Burmese political environment, U.S. assistance is unlikely to be effective until the Burmese government demonstrates improvement in its democracy and human rights policies and proves its legitimacy to ethnic minority groups in opium-producing areas; (c) the Burmese government is unable or unwilling to make a serious commitment to ending the lucrative drug trade and is unlikely to gain the required political support to control most of the opium cultivation and heroin-trafficking areas within Burma; (d) while heroin control efforts in Thailand and Hong Kong have achieved some positive results, there has been little counternarcotics cooperation with China, where important regional drug-trafficking routes have recently emerged; and (e) UNDCP's crop control, alternative development, and demand reduction projects in Burma are too small in scale to significantly affect opium poppy cultivation and opium production levels.
The DI program was established in 1956 to provide monthly cash benefits to individuals unable to work because of severe long-term disability. To meet the definition of disability under the DI program, an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least one year or to result in death and (2) prevents the individual from engaging in substantial gainful activity (SGA). In addition, to be eligible for benefits, workers with disabilities must have a specified number of recent work credits under Social Security when they acquired a disability. Spouses and children of workers may also receive benefits. Benefits are financed by payroll taxes paid into the DI Trust Fund by covered workers and their employers, and the benefit amount is based on a worker’s earnings history. In November 2014, the program’s average monthly benefit for disabled workers was about $1,146. Historically, very few DI beneficiaries have left the program to return to work. To encourage work, the DI program offers various work incentives to reduce the risk a beneficiary faces in trading guaranteed monthly income and subsidized health coverage for the uncertainties of employment—including a trial work period, and an extended period of eligibility for DI benefits. These incentives safeguard cash and health benefits while a beneficiary tries to return to work. For example, the trial work period allows DI beneficiaries to work for a limited time without their earnings affecting their disability benefits. Each month in which earnings are more than $780 is counted as a month of the trial work period. When the beneficiary has accumulated 9 such months (not necessarily consecutive) within a period of 60 consecutive months, the trial work period is completed. The extended period of eligibility begins the month after the trial work period ends, during which a beneficiary is entitled to benefits so long as he or she continues to meet the definition of disability and his or her earnings fall below the SGA monthly earnings limit. SSA regulations require all DI beneficiaries to promptly notify SSA when: their condition improves, they return to work, or they increase the amount they work or their earnings. Program guidance directs DI beneficiaries to report to SSA right away if work starts or stops; if duties, hours or pay change; or they stop paying for items or services needed for work due to a disability. Beneficiaries may report work by fax, mail, phone, or in person at an SSA field office. SSA staff are required by law and regulation to issue a receipt acknowledging that the beneficiary (or representative) has given SSA information about a change in work or earnings, and documenting the date that SSA received the work report. After receiving information about work activity or a pay stub from a beneficiary, SSA staff have five days to input the information into the system—which creates a pending work report or pay stub report—and hand or mail a receipt to the beneficiary. Staff then have an additional 30 days to review the pending work report to determine if an additional action, such as a work continuing disability review (CDR), is needed to assess the beneficiary’s continued eligibility for DI benefits. See figure 1. SSA processes over 100,000 work reports or pay stubs annually. Benefit overpayments can occur when beneficiaries do not report work or SSA does not take action on work reports in an appropriate or timely manner. 
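To make the trial work period rule above concrete, the following sketch models the counting logic in simplified form. It is illustrative only: it assumes the $780 trial-work-month threshold cited above, applies the 9-month/60-month rule literally, and ignores the many program details (such as the SGA determination during the extended period of eligibility) that SSA applies in practice; the function name and structure are invented for illustration.

```python
# Simplified, illustrative model of the DI trial work period counting rule
# described above: a trial work month is any month with earnings over $780,
# and the trial work period is completed once 9 such months fall within any
# rolling 60-month window. This is a sketch, not SSA's actual determination logic.

TRIAL_WORK_MONTH_THRESHOLD = 780   # monthly earnings level cited in this statement
TRIAL_WORK_MONTHS_ALLOWED = 9
ROLLING_WINDOW_MONTHS = 60

def trial_work_period_end(monthly_earnings):
    """Return the index of the month in which the trial work period is
    completed, or None if it is not completed in the data provided.
    `monthly_earnings` is a list of earnings amounts, one per month."""
    service_months = []  # indices of months counted as trial work months
    for i, earnings in enumerate(monthly_earnings):
        if earnings > TRIAL_WORK_MONTH_THRESHOLD:
            service_months.append(i)
            # Count only trial work months inside the rolling 60-month window.
            in_window = [m for m in service_months if i - m < ROLLING_WINDOW_MONTHS]
            if len(in_window) >= TRIAL_WORK_MONTHS_ALLOWED:
                return i
    return None

# Example: 8 months of work above the threshold, a 12-month gap, then a 9th month.
earnings = [900] * 8 + [0] * 12 + [1000]
print(trial_work_period_end(earnings))  # month index 20 (the 21st month)
```

In the example, the beneficiary's ninth qualifying month falls well within 60 months of the first, so the trial work period ends there and the extended period of eligibility would begin the following month.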
When a DI work-related overpayment is identified, the beneficiary is notified of the overpayment and may request reconsideration or waiver of that overpayment. SSA may grant a waiver request if the agency finds the beneficiary was not at fault and that recovery or adjustment would either defeat the purpose of the program or be against equity and good conscience, as defined by SSA. SSA’s DI cumulative overpayment debt has almost doubled over the last decade, growing from $3.2 billion at the end of fiscal year 2004 to $6.3 billion at the end of fiscal year 2014, according to SSA data. Cumulative overpayment debt consists of existing debt carried forward from prior years, new debt, reestablished debts (debts reactivated for collection due to re-entitlement or another event), and adjustments, minus debts that are collected or written off by SSA. Cumulative DI overpayment debt has continued to grow because in nine of the last ten years the debt added exceeded the total debt collected and written off. Specifically, over the 10 years reviewed, SSA added about $15.4 billion in debt, while collecting and writing off $12.3 billion. According to preliminary data provided by SSA, the agency overpaid DI beneficiaries a total of about $20 billion during fiscal years 2005 through 2014, and more than half of this total ($11 billion) was a result of beneficiaries’ work-related earnings exceeding program limits. According to these data, each fiscal year an average of about 96,000 DI beneficiaries (or 28 percent of all beneficiaries overpaid each year) received excess benefits totaling $1.1 billion because their work activity exceeded program limits. The average work-related overpayment per beneficiary was almost $12,000 during this time period, ranging from $10,456 in fiscal year 2014 to $14,208 in fiscal year 2011. We are continuing to assess the reliability of these data as part of our ongoing work. SSA’s annual stewardship reviews provide limited insight into the causes of overpayments. Stewardship reviews are based on a sample of cases, and are used by the agency to report on the accuracy of benefit payments. In its stewardship reports, SSA uses the term deficiency dollars to quantify the effect of each individual deficiency in a case which could cause an improper payment. In its last six stewardship reports, SSA reported that deficiency dollars related to beneficiaries’ incomes being above DI program limits were consistently a leading cause of improper overpayments in the DI program. SSA also attributed some of these deficiencies to not taking appropriate or timely action to adjust payments when it was notified of beneficiaries’ work activity. However, GAO has not yet fully evaluated SSA’s methodology for conducting these reviews. Based on our discussions with SSA staff in field offices and teleservice centers, we identified a number of situations where beneficiaries report work or earnings, but staff may not enter information into the system, which is inconsistent with federal internal control standards, or may not provide a receipt, as mandated by law. Whether DI beneficiaries report work information in person or by fax, mail, or telephone to SSA field offices or the agency’s 800 teleservice line, in accordance with procedures, staff must manually enter the information into the system to initiate tracking and issue a receipt. Specifically, SSA representatives have five days to manually enter the information into the eWork system, which also generates a receipt to be mailed or given to the beneficiary. 
Issuing a receipt is required by law and is valuable to the beneficiary for two reasons: (1) the beneficiary can review the receipt to ensure that the information is correct; and (2) a beneficiary who later receives an overpayment can produce work report receipts to prove that he/she properly reported work activity. This system also tracks pending work reports to ensure completion within 30 days. Tracking is critical for ensuring SSA promptly processes the work report and takes the actions needed to adjust a beneficiary’s benefits and minimize the chance of overpayments. However, in our work at several locations, SSA staff told us that if the eWork system is unavailable, or if the representative is busy, he or she may not enter the information and issue a receipt to the beneficiary. In addition, at one location, we learned that, until recently, SSA teleservice staff were using an alternate approach for sending work reports to the field office for manual entry and processing, instead of directly entering the information into the eWork system themselves. Work reports handled this way lack the controls in eWork; for example, they are not automatically tracked against the 30-day goal for work report completion. As such, they can be more easily missed or overlooked, and could be deleted or marked as completed without action being taken. Finally, claims representatives in the field office may also bypass the work report process entirely and initiate a work continuing disability review (CDR) instead. Some SSA claims representatives we interviewed told us that they skip the work report step and do a CDR instead because it is more efficient, but this means that the beneficiary does not receive a receipt. Stakeholder groups we interviewed have also observed problems with receipts, but SSA has limited data to assess this and other vulnerabilities in the work reporting process. In particular, stakeholders said that beneficiaries they work with do not always receive receipts, especially when reporting work by calling the 800 teleservice line. However, SSA’s ability to determine the extent of these vulnerabilities is hindered, in part, by data limitations. SSA’s eWork system does not capture data that would help the agency determine how many work reports are filed by fax, mail, or in person. This system also does not allow SSA to determine how often staff go directly to a CDR without first completing a work report and issuing a receipt. Moreover, while SSA’s system archives copies of printed receipts, it does not provide aggregate data on receipts provided. So even though SSA officials noted that local offices have procedures in place to ensure the timely processing of information received by mail or fax, data limitations prevent SSA from knowing the extent to which receipts are provided within five days. Further, according to SSA officials, determining the extent to which 800 teleservice staff might be using alternative approaches for sending work reports to field offices would require a significant effort to match data between two different systems. Although the agency monitors work reports for timeliness, SSA lacks guidance for processing work reports through completion and for monitoring them for quality. SSA has set a 30-day time frame for staff to screen pending work reports and decide whether further action is required in light of the information in the work report or whether the work report can be closed without additional action. 
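To illustrate the two time frames described above, the following is a minimal Python sketch that flags a pending work report as overdue for entry or for screening; it is not SSA’s eWork system logic, and the field names, function name, and example dates are illustrative assumptions.

```python
# A minimal sketch, not SSA's eWork system: flagging a single work report
# against the time frames described above (5 days to enter the report and
# issue a receipt, then 30 more days to screen the pending report).
from datetime import date, timedelta
from typing import List, Optional

RECEIPT_DAYS = 5     # days allowed to enter the report and issue a receipt
SCREENING_DAYS = 30  # additional days allowed to screen the pending report

def report_flags(received: date, entered: Optional[date],
                 screened: Optional[date], today: date) -> List[str]:
    """Return timeliness flags for one reported work activity."""
    flags = []
    if entered is None and today > received + timedelta(days=RECEIPT_DAYS):
        flags.append("entry/receipt overdue")
    if entered is not None and screened is None and \
            today > entered + timedelta(days=SCREENING_DAYS):
        flags.append("screening overdue")
    return flags

# Example: a report received June 1 and entered June 3, but still unscreened
# six weeks later, is flagged as overdue for screening.
print(report_flags(date(2015, 6, 1), date(2015, 6, 3), None, date(2015, 7, 15)))
# -> ['screening overdue']
```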
Field office managers who oversee field office workloads have access to management information showing the number and age of pending work reports, and those we interviewed indicated that they follow up on pending work reports approaching the 30-day time frame to ensure timely processing. However, the agency has not established policies or procedures detailing the steps staff must take in screening these reports. Federal internal control standards state that agencies’ policies and procedures should be clearly documented in administrative policies or operating manuals. Without explicit policies or procedures on how to screen a work report—that is, how to evaluate whether it should be closed or referred to a work CDR to determine whether the beneficiary’s benefits should be adjusted—there is an increased risk that a report could be improperly closed and result in a beneficiary being overpaid. SSA also lacks guidance and processes for ensuring the accuracy and quality of its work report decisions. In our work at several field locations, we did not identify any processes that would have either assessed the accuracy or quality of the screening decision, or provided feedback to staff on how to improve their decision making. In accordance with federal internal control standards, agencies should assure that ongoing monitoring occurs in the course of normal operations and assess the quality of performance over time. The absence of oversight and feedback increases the risk that the agency may not identify errors with work report decisions in a timely manner. SSA does not offer automated reporting options for DI beneficiaries—similar to those currently used in SSA’s Supplemental Security Income (SSI) program—even though such options could address vulnerabilities we identified. According to SSA officials, SSA first piloted a telephone wage reporting system for SSI beneficiaries in 2003, and has used it nationally since 2008. In 2013, the agency also rolled out a mobile smartphone application for reporting work activity for SSI. Unlike the DI program’s manual process, both of these SSI reporting options assist with agency tracking and issue receipts to the beneficiary without staff intervention. SSA has also noted that these automated reporting tools make reporting easier and more convenient for beneficiaries, and reduce field office workloads. SSA reported that it processed over 44,000 SSI telephone wage reports in September 2013, surpassing its fiscal year 2013 goal of 38,510 reports per month. In September 2013, the agency also received over 5,100 wage reports through its smartphone application. SSA continues to promote these methods and has stated that expanded use of automated reporting should help reduce improper payments in the SSI program. Despite potential benefits to the DI program, SSA officials told us the agency has not used SSI reporting systems for DI beneficiaries. In October 2010, SSA created a work group to begin exploring the development of a telephone reporting system for the DI program but, according to SSA officials, the project was discontinued in February 2011—after developing cost estimates for one year of development—due to lack of resources. They also told us these efforts were not resumed because automated reporting in the DI program would not have the same return on investment as in the SSI program, due to the complexity of DI program rules. 
For example, officials stated that determinations concerning DI work incentives—determinations that are currently a part of the work CDR process, not the DI work reporting process—cannot be easily automated. SSA officials also stated that they currently favor using the www.mysocialsecurity.gov portal as the best approach for providing automated reporting options to DI beneficiaries. However, they did not provide any information on plans, timelines, or costs associated with implementing such an approach. In the meantime, the current, manual DI work activity reporting options leave the process more vulnerable to error, provide less proof of beneficiaries’ due diligence, and subject beneficiaries to less convenient reporting mechanisms. Overpayments may arise because of unclear work reporting requirements and staff’s differing interpretations of complex DI program rules. For example, SSA’s regulations and its policy manual both state that DI beneficiaries should “promptly” report changes to work activity, but SSA has not defined this term, leaving this open to interpretation by both beneficiaries and SSA staff. Similarly, in its pamphlet “Working While Disabled,” beneficiaries are instructed to report changes in their work “right away.” However, it does not prescribe a time period or frequency of reporting. During our site visits, we found variation in how staff instructed beneficiaries to report. For example, some staff said they instruct beneficiaries to report monthly, regardless of whether there are changes in their work, which is similar to the SSI program’s wage-reporting requirements. Others told us they tell beneficiaries to report 10 days after any change, which is also similar to another SSI reporting requirement. One staff person indicated that she instructs beneficiaries not to bother reporting earnings under $15,780 per year, even though this earnings limit applies to those receiving Social Security retirement benefits, not DI. Thus, a DI beneficiary who relied on such information could incur an overpayment. According to federal internal control standards, federal agencies should ensure that pertinent information is distributed to the right people in sufficient detail and at the appropriate time to enable them to carry out their duties and responsibilities efficiently and effectively. Further, our preliminary findings suggest that some SSA staff do not fully understand DI’s complex work incentive rules. Service representatives who take work reports through SSA’s 800 teleservice line or at the window in an SSA office are generally less highly trained or specialized in their knowledge about work incentives and may not always provide accurate information. For example, several staff we spoke with confused the trial work period earnings threshold with substantial gainful activity (SGA) earnings limits. Such a mistake might result in beneficiaries—who, for example, plan to return to work—being told not to report earnings that they should be reporting. Stakeholder groups we spoke with cited similar examples of SSA staff providing beneficiaries with incorrect information on work incentives. SSA officials told us that in fiscal year 2013, the agency sampled calls received on its 800 teleservice line for quality review purposes, and found that calls regarding disabled work activity represented only 1 percent of the total call workload, but 2.3 percent of all errors identified. 
Several SSA managers we spoke with said that training could be enhanced for those staff answering calls on SSA’s 800 teleservice line. SSA has developed a proposal to reduce complexity in the DI program, but has not tested or implemented this proposal to date. In its fiscal year 2012 budget request, SSA proposed the Work Incentives Simplification Pilot (WISP) to test a streamlined approach to evaluating DI program work activity and reduce administrative workloads by making it simpler and less time-consuming for staff to verify earnings and validate benefits. It was also intended to reduce improper payments and eliminate rules that confuse beneficiaries, such as different definitions for income for the DI versus SSI program. Ultimately, the agency hopes such an effort will reduce incidences of overpayments that may serve as a disincentive to DI beneficiaries who wish to work. SSA convened a Technical Advisory Panel to design a demonstration of WISP; the panel issued a report with recommendations in 2012 but also noted that the agency lacks authority to implement the proposed demonstration. However, the report also noted that SSA could conduct a pre-test to inform a large demonstration. This is an issue we will continue to explore in our ongoing work. Despite the importance and challenges associated with work reporting, SSA provides beneficiaries with infrequent reminders, and those reminders it does provide contain limited information about potential liability for overpayments. GAO’s internal control standards state that management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. SSA currently informs beneficiaries of reporting requirements when their benefit claim is initially approved, although it could be many years before a beneficiary returns to work. Nevertheless, one SSA representative/manager indicated that the page signed by beneficiaries when they are initially approved for benefits could specifically include information about work reporting requirements, which would make it more difficult for beneficiaries who incur an overpayment to claim that they were unaware of their reporting responsibilities. SSA also sends an annual letter to beneficiaries regarding cost-of-living adjustments to their benefits that includes a reminder of their reporting responsibilities; however, several staff indicated that additional reminders would prompt more beneficiaries to report work. In contrast, in fiscal year 2014, SSA began providing a web-based service designed to prompt SSI beneficiaries to report wages, using notices, emails, and reminders—an option not currently available for DI beneficiaries. SSA officials stated that the agency does not have near-term plans to provide additional notices to DI beneficiaries to encourage work reporting. Finally, although the initial application and annual letter mention potential liability for overpayments for beneficiaries who fail to report work, SSA’s “Working While Disabled” pamphlet—which contains details about work incentives and is provided to beneficiaries who contact SSA about work—does not explain circumstances under which a beneficiary could be found liable for an overpayment. Some SSA staff we spoke with said they tell beneficiaries not to spend benefit checks or deposits that they believe were sent in error. 
However, one stakeholder group we spoke with said that many beneficiaries mistakenly believe that, if they diligently report work and still receive benefits, then they must be entitled to those benefits. We will continue to assess the issues discussed in this statement and will report our final results later this year. Chairman Johnson, Ranking Member Becerra, and members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff members who made key contributions to this testimony are Michele Grgich (Assistant Director), James Bennett, Daniel Concepcion, Julie DeVault, Dana Hopings, Arthur Merriam, Jean McSween, Ruben Montes de Oca, James Rebbe, Martin Scire, Charlie Willson, and Jill Yost. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
SSA's DI program is one of the nation's largest cash assistance programs. To ensure that beneficiaries remain eligible, SSA regulations require that beneficiaries promptly report their work activity—including starting a job or a change in wages—to the agency. If the beneficiary does not report changes or if SSA does not properly process reported work information, SSA may pay out benefits in excess of what is due, resulting in an overpayment. In fiscal year 2014, SSA identified $1.3 billion in DI benefit overpayments. Avoiding overpayments is imperative as they pose a burden for beneficiaries who must repay excess benefits and result in the loss of taxpayer dollars when they cannot be repaid. In this statement based on ongoing work, GAO discusses preliminary observations regarding: 1) what is known about the extent of work-related DI overpayments; and 2) factors affecting SSA's handling of work activity reported by beneficiaries. GAO reviewed relevant federal laws, policies, and procedures, and prior GAO, OIG, and SSA reports; analyzed 10 years of SSA data on overpayments; interviewed staff at SSA headquarters and at field offices and teleservice centers in three regions selected to represent a range of relevant DI workloads. Over the last decade, preliminary data provided by the Social Security Administration (SSA) indicate that more than half of the $20 billion overpaid in the Disability Insurance (DI) program was associated with beneficiary work activity. Specifically, SSA's data indicate that between fiscal years 2005 and 2014, a total of $11 billion in DI overpayments was paid to beneficiaries with work earnings that exceeded program limits, with an annual average of 96,000 DI beneficiaries incurring an average work-related overpayment of $12,000. In its last 6 annual stewardship reports, SSA attributed some improper payments to its not taking appropriate action when notified of beneficiaries' work activity. GAO identified a number of factors that affect handling of work activity reports by beneficiaries—factors that stem from weaknesses in SSA's policies and procedures that are inconsistent with federal internal control standards. Such weaknesses increase the risk that overpayments may occur even when DI beneficiaries diligently try to follow program rules and report work and earnings. These weaknesses include: Vulnerabilities in processing work reports. Based on interviews with SSA staff, GAO identified process vulnerabilities that could result in staff not: (1) issuing a receipt that proves the beneficiary's work was reported—one of two criteria a beneficiary must meet for SSA to waive an overpayment; and (2) initiating tracking of work activity, which would help prevent overpayments. Data are not available to determine the extent to which this might occur. Limited guidance for processing and monitoring work reports. While SSA has metrics to ensure that staff take action on work reports in a timely manner, it lacks procedures detailing steps staff must take in screening these reports and for ensuring that pending work reports are systematically reviewed and closed with appropriate action, consistent with federal internal control standards. Not leveraging technology. In contrast to SSA's Supplemental Security Income (SSI) program—a means-tested disability benefits program—the DI program lacks automated tools for beneficiaries to report work. SSI recipients can report wages through an automated telephone reporting system and a smartphone app. 
SSA cited complex DI program rules and an unclear return on investment as reasons for not pursuing these options. However, this conclusion was based on a limited evaluation of costs. Meanwhile, SSA's current manual approach is vulnerable to error and may discourage reporting by beneficiaries who experience long wait times when they try to report work in person at offices or by telephone. Confusing work incentive rules. The DI program has complex work incentive rules, such that SSA staff interviewed by GAO had varying interpretations of program rules and gave beneficiaries differing instructions on how often to report their work and earnings. In 2012, SSA developed a proposal to simplify program rules, but stated that it does not currently have the authority to test or implement such changes. In its fiscal year 2016 budget proposal, SSA requested authority that would allow it to conduct such tests. As GAO finalizes its work for issuance later this year, it will consider making recommendations, as appropriate. GAO sought SSA's views on information included in this statement, but SSA was unable to provide its views in time to be incorporated.
Army Industrial Operations provides services for a variety of customers, including the Army, the Navy, the Air Force, non-DOD agencies, and foreign countries. The majority of the work is for the Army. Industrial Operations relies on sales revenue from customers to finance its continuing operations. Operating under the working capital fund concept, Industrial Operations is intended to (1) generate sufficient resources to cover the full costs of its operations and (2) operate on a break-even basis over time—that is, neither make a gain nor incur a loss. Customers, such as the Army, use appropriated funds (including operation and maintenance or procurement appropriations) to finance orders placed with Industrial Operations. Industrial Operations provides the Army an in-house industrial capability to (1) conduct depot-level maintenance, repair, and upgrade; (2) produce munitions and large-caliber weapons; and (3) store, maintain, and demilitarize material for DOD. Industrial Operations comprises 13 government-owned and operated installation activities, each with unique core competencies. These include five maintenance depots (Anniston, Alabama; Corpus Christi, Texas; Letterkenny, Pennsylvania; Red River, Texas; and Tobyhanna, Pennsylvania), three arsenals (Pine Bluff, Arkansas; Rock Island, Illinois; and Watervliet, New York), two munitions production facilities (Crane, Indiana, and McAlester, Oklahoma), and three storage sites (Blue Grass, Kentucky; Sierra, California; and Tooele, Utah). The preponderance of the workload performed by Industrial Operations relates to depot-level maintenance. Army Materiel Command (AMC) serves as the management command for Industrial Operations. Industrial Operations activities report under the direct command and control of the Army’s Life Cycle Management Commands (LCMC), each aligned in accordance with the nature of its mission. For example, the work performed at Anniston and Red River is aligned with the Army’s Tank, Automotive and Armaments Command LCMC mission of developing, acquiring, fielding, and sustaining ground systems, such as the High Mobility Multipurpose Wheeled Vehicle (HMMWV) and the Abrams tank, whereas the work performed at Letterkenny and Corpus Christi is aligned with the Army’s Aviation and Missile Command LCMC mission of developing, acquiring, fielding, and sustaining aviation, missile, and unmanned vehicle systems, such as the Patriot missile and Black Hawk helicopter. Carryover consists of both the unfinished portion of Army Industrial Operations work started but not completed and work that was accepted but has not yet begun. Some carryover is appropriate at the end of the fiscal year in order for working capital funds such as Industrial Operations to operate efficiently and effectively. For example, if customers do not receive new appropriations at the beginning of the fiscal year, carryover is necessary to ensure that Industrial Operations’ activities (1) have enough work to continue operations in the new fiscal year and (2) retain the appropriate number of personnel with sufficient skill sets to perform depot maintenance work. Too little carryover could result in some personnel not having work to perform at the beginning of the fiscal year. On the other hand, too much carryover could result in an activity group receiving funds from customers in one fiscal year but not performing the work until well into the next fiscal year. 
By limiting the amount of carryover, DOD can use its resources in the most efficient and effective manner and minimize the backlog of work and “banking” of related funding for subsequent years. DOD’s Financial Management Regulation 7000.14-R, volume 2B, chapter 9, provides that the allowable amount of carryover each year is to be based on the amount of new orders received that year and the outlay rate of the customers’ appropriations financing the work. The DOD carryover policy further provides that the work on the current fiscal year’s orders is expected to be completed by the end of the following fiscal year. DOD’s Financial Management Regulation also provides that (1) nonfederal orders, non-DOD orders, foreign military sales, work related to base realignment and closure, and work-in-progress are to be excluded from the carryover calculation and (2) the reported actual carryover, net of exclusions (adjusted carryover), is then compared to the amount of allowable carryover using the above-described outlay rate method to determine whether the actual carryover amount is over or under the allowable carryover amount. To the extent that adjusted carryover exceeds the allowable carryover, DOD and the congressional defense committees may reduce future budgets. According to the DOD Financial Management Regulation, this carryover policy allows for an analytically based approach that holds working capital fund activities to the same outlay standard as the general fund and allows for meaningful budget execution analysis. Requests for exceptions to the carryover policy (i.e., waivers) must be submitted to the Director for Revolving Funds, OUSD (Comptroller), separately from the budget documents. OUSD (Comptroller) officials informed us that they review requests for exceptions to the carryover policy on a case-by-case basis. Depending on the request, they may ask for additional information to evaluate the request. The Army implemented the Logistics Modernization Program (LMP) at two Army Industrial Operations activities in fiscal year 2009 and at 10 Army Industrial Operations activities in fiscal year 2011. According to the Army’s budget, LMP provides the Army a modernized logistics and finance system that delivers a fully integrated suite of software and business processes, providing streamlined data on maintenance, repair and overhaul, finance, acquisition, spare parts, and materiel. LMP changed the point in time when Industrial Operations activities recognized revenue. The point in time when revenue is recognized is important because when an Industrial Operations activity performs work it earns revenue and the carryover is reduced. Prior to the implementation of LMP, the Army activities recognized revenue on parts and material when the activities received the items and assigned them to orders. This procedure led, in some cases, to Industrial Operations activities buying material or spare parts and recognizing revenue and reducing carryover before the parts and material were actually used in repairing weapon systems. Under LMP, revenue is recognized when the material and parts are brought to the assembly area for installation on the weapon systems—much later in the repair process for weapon systems that have long repair cycle times. From fiscal years 2006 through 2012, the Army reported that Industrial Operations’ actual carryover, adjusted for waivers/exclusions (adjusted carryover), was under the allowable amounts in 5 of the 7 fiscal years. 
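To illustrate the comparison just described, the following is a simplified Python sketch of the outlay-rate method; it is not the exact computation prescribed by DOD’s Financial Management Regulation, and it assumes that the allowable amount is the portion of the year’s new orders that the financing appropriations would not be expected to outlay in the first year. The appropriation categories, outlay rates, and dollar amounts shown are illustrative assumptions, not actual Army data.

```python
# A simplified sketch of the outlay-rate comparison described above, not the
# exact DOD Financial Management Regulation 7000.14-R computation. All
# appropriation names, rates, and dollar amounts are illustrative.

def allowable_carryover(new_orders_by_appn, first_year_outlay_rates):
    """Both arguments are dicts keyed by appropriation; orders in $ millions."""
    return sum(orders * (1.0 - first_year_outlay_rates[appn])
               for appn, orders in new_orders_by_appn.items())

def adjusted_carryover(actual_carryover, exclusions):
    """Subtract excluded work (e.g., non-DOD orders, foreign military sales,
    base realignment and closure work, work in progress) from actual carryover."""
    return actual_carryover - sum(exclusions.values())

new_orders = {"O&M": 3000.0, "Procurement": 2500.0}        # $ millions, illustrative
outlay_rates = {"O&M": 0.60, "Procurement": 0.35}          # illustrative first-year rates
allowable = allowable_carryover(new_orders, outlay_rates)
adjusted = adjusted_carryover(5000.0, {"FMS": 300.0, "Non-DOD": 200.0, "WIP": 400.0})
print(f"allowable: {allowable:,.0f}  adjusted: {adjusted:,.0f}  "
      f"over/(under): {adjusted - allowable:,.0f}")
# -> allowable: 2,825  adjusted: 4,100  over/(under): 1,275
```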
From fiscal years 2006 through 2012, Industrial Operations’ total actual carryover increased from $2.3 billion to $5 billion, reaching a high of $5.8 billion—12.7 months of work—at the end of fiscal year 2011. Table 1 shows the Army Industrial Operations actual adjusted carryover, allowable carryover, and the amount over (or under) the allowable carryover for fiscal years 2006 through 2012. The Army’s budget estimates for its Industrial Operations carryover were consistently less than the actual carryover amounts each year from fiscal years 2006 through 2012. For the 7-year period, the Industrial Operations actual carryover exceeded budgeted carryover by at least $1.1 billion each year. This was primarily because (1) the Army underestimated its Industrial Operations new orders received from customers for each of the 7 years and (2) for fiscal year 2011, Industrial Operations performed over $1 billion less work than budgeted. Reliable budget information on carryover is critical because decision makers use this information when reviewing Industrial Operations budgets. Table 2 compares the dollar amounts of the Army’s budgeted and actual Industrial Operations carryover and the difference between these amounts for fiscal years 2006 through 2012. One factor we found that contributed to actual carryover exceeding budgeted carryover by over $1 billion annually over the 7-year period was that the Army significantly underestimated the amount of new orders to be received from its Industrial Operations customers. As shown in table 3, from fiscal years 2006 through 2012, Army Industrial Operations budgeted to receive about $35.6 billion in new orders, but Industrial Operations reports showed that it actually received about $45.7 billion in new orders. As a result, Industrial Operations underestimated new orders received from customers by about $10.1 billion over the 7-year period. We analyzed the appropriations funding new orders to determine which appropriation had the largest variance. The Army’s total budgeted new order amounts for the 7-year period were within 15 percent of the total actual new order amounts for the operation and maintenance and other appropriations categories. The largest differences for these two appropriation categories occurred in fiscal years 2006 and 2007, when Industrial Operations budget assumptions in support of the Global War on Terrorism underestimated the amount of orders actually received. However, over the same period, our analysis of budgeted and actual orders showed that actual new orders funded by the procurement appropriation category exceeded budgeted new orders by about $5.8 billion, or 118 percent. Actual amounts funded by the procurement appropriation category exceeded budgeted amounts by over 50 percent in all but one year, with actual amounts exceeding budgeted amounts by more than 100 percent in 4 of the 7 years. Table 4 shows Army Industrial Operations’ budgeted new orders compared to actual new orders by appropriation category funding the orders for fiscal years 2006 through 2012. Army headquarters and AMC officials stated that they recognized that Army Industrial Operations had difficulty in accurately budgeting for new orders, particularly for procurement-funded orders and carryover. In discussing this matter with Army headquarters officials, the officials stated that Industrial Operations develops its budgets based on information from customers. 
However, Industrial Operations underestimated the amount of new orders because (1) the customers did not always notify Industrial Operations of their plans to provide some orders, (2) the customers did not always commit to providing some orders, and (3) customer requirements subsequently changed from the time they prepared their budgets to the time the orders were placed with Industrial Operations. The Army officials we spoke with stated that improved communications between Industrial Operations and customers is needed to help better ensure that budgeted orders approximate actual orders. Specifically, they stated that customers and Industrial Operations need to work together so that Industrial Operations receives reliable new order information to be included in its budgets. To improve the management of carryover, the Army formed a working group in April 2012. Among other things, the working group identified that improved planning and communication was needed on budgeting for orders. The working group identified a number of actions that have the potential for remedying the budgeting and actual carryover and order variances, including the following: Holding a series of monthly or quarterly meetings to better manage carryover, including issues related to orders received from customers. For example, beginning in July 2012, AMC and the LCMCs and their individual activities began to hold quarterly meetings that provide information on the status of Industrial Operations’ carryover, orders, and revenue. Production issues for specific workloads and strategies to reduce carryover are discussed at these meetings. Strategies aimed at increasing revenue and reducing carryover include working a second shift at the activities or actions to obtain long lead time parts. Establishing a policy on acceptance of unscheduled new orders. Requiring the program managers (customers) to clearly identify planned depot work in their procurement budgets so Industrial Operations can better determine the dollar amount of budgeted orders funded with procurement appropriations. In addition, AMC has taken or plans to take the following actions intended to improve the management and budgeting of Industrial Operations’ carryover and orders. AMC reviewed orders received by Industrial Operations from customers in the fourth quarter of fiscal year 2012 and disapproved some orders that were unplanned and not included in the Industrial Operations fiscal year 2012 budget because the orders would increase fiscal year 2012 carryover. Specifically, AMC disapproved $97 million of orders received from Industrial Operations’ customers. During fiscal year 2013, the Army plans to better align the customers’ budgets with the Industrial Operations budgets. AMC identified three points during the budget and requirements process at which budget information on orders can be updated with the most current workload data. At these points, AMC, the LCMCs, and the Industrial Operations activities will meet with their customers, including the Office of the Assistant Secretary of the Army (Acquisition, Logistics and Technology), the Army National Guard, and the Army Reserve, and review the customers’ requirements and update the Industrial Operations budgets for any orders to be received. If fully and effectively implemented, these actions should help address the order and carryover budgeting issues. However, the Army has not yet developed a timetable for implementing the actions identified by the working group. 
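To put the size of the order variances the working group is addressing in perspective, the following is a minimal arithmetic check, written as a short Python sketch, using the 7-year totals reported above; the figures are rounded and the variable names are illustrative.

```python
# A minimal arithmetic check using the 7-year totals reported above (budgeted
# versus actual new orders for fiscal years 2006 through 2012); amounts are in
# billions of dollars and the percentage is rounded.
budgeted_new_orders = 35.6
actual_new_orders = 45.7

underestimate = actual_new_orders - budgeted_new_orders
print(f"Underestimate: ${underestimate:.1f} billion "
      f"({underestimate / budgeted_new_orders:.0%} of the budgeted amount)")
# -> Underestimate: $10.1 billion (28% of the budgeted amount)
```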
Our analysis of Army Industrial Operations budget documents showed that Industrial Operations earned less revenue than budgeted in fiscal year 2011, contributing to actual carryover exceeding the budgeted amount by $2.5 billion for fiscal year 2011. Even though the total budgeted and actual revenue was nearly equal from fiscal years 2006 through 2012, Industrial Operations’ budget data also showed that Industrial Operations’ actual revenue for fiscal year 2011 fell below budgeted revenue by about $1 billion and below the previous year’s results by $688 million. Table 5 shows a comparison between budgeted and actual revenue for fiscal years 2006 through 2012. Army officials stated that the implementation of LMP at the Industrial Operations activities in fiscal years 2009 and 2011 contributed to the fiscal year 2011 actual revenue falling below the prior year and budgeted amounts. First, in the first quarter of fiscal year 2011, LMP was implemented at 10 of the 13 Industrial Operations activities. At the activities where LMP was newly deployed, revenue was lower in fiscal year 2011 because of production delays caused by the workforce being unfamiliar with the revised LMP requirements. Second, when the fiscal year 2011 Industrial Operations budget was developed in the summer of 2009, 2 of the 13 Industrial Operations activities had just deployed LMP and 10 of the 13 had not yet deployed LMP. The LCMCs and the activities did not fully understand the impact LMP would have on revenue recognition at the time of their budget submissions, resulting in actual revenue being less than budgeted. Much of the growth in Army Industrial Operations’ carryover occurred in the past 2 fiscal years. Overall, carryover grew from $2.3 billion at the end of fiscal year 2006 to a high of $5.8 billion at the end of fiscal year 2011. Carryover grew in fiscal year 2011 because Industrial Operations received more orders ($7.5 billion) than work it performed ($5.5 billion). At the end of fiscal year 2012, carryover totaled about $5 billion. We analyzed eight workloads that accounted for $2 billion of the $5.8 billion in carryover at the end of fiscal year 2011. The carryover associated with these workloads represented about 35 percent of Industrial Operations’ carryover for fiscal year 2011. In analyzing these workloads, we found three primary causes for the carryover: (1) the scope of work was not well defined, (2) parts needed to perform the work were not available, and (3) revenue recognition business rules were changed as part of the implementation of LMP. As an additional cause, the Industrial Operations activities accepted some orders in the third and fourth quarters of fiscal year 2011, which provided them little time to resolve any scope of work or parts issues in fiscal year 2011. Table 6 summarizes the results of our analysis of the key causes of carryover related to the eight workloads we analyzed. In order for the Army Industrial Operations activities to perform work in a timely manner and minimize carryover, the activities should have a well-defined scope of work, including approved technical data and documented processes. Industrial Operations officials informed us that the lack of a well-defined scope of work was one of the causes of carryover. Our analysis of eight Industrial Operations workloads corroborated the information provided by Industrial Operations officials and found that work on two workloads was delayed and carryover increased because the scope of work was not well defined. 
For example, in July 2010, Letterkenny accepted its first order to convert 3 Mine Resistant Ambush Protected vehicles to a different operational purpose. The converted vehicle is referred to as a Route Clearance Vehicle (RCV). Under the accepted work order, Letterkenny was expected to design and engineer 3 prototype vehicles, establish a technical data package to convert the vehicles, and develop a statement of work and a bill of materials. The 3 vehicles were expected to be completed by December 2010—about 6 months later. However, the depot experienced problems designing the modified vehicle, reaching agreement with its customer on the new design, and performing tests on the first vehicle. The final technical data package to convert the additional vehicles and the development of the statement of work and the bill of materials were delayed until the new design was agreed upon and the first vehicle was tested. As the depot continued to work with its customer on the design of the 3 pilot vehicles, in July 2011, Letterkenny accepted its second order under this program for the production of 10 vehicles. One month later, the depot accepted a third order for over 300 production vehicles for $211 million. Because of the delays in completing the 3 prototype vehicles that were to identify the specifications of the work to be performed, the depot carried over about $211 million in work orders into fiscal year 2012. Moreover, the depot had a total of 3 pilot vehicles and 292 production vehicles on order at the end of fiscal year 2012; work on the 3 pilot vehicles ordered was almost complete as of September 30, 2012. The production vehicle quantities decreased from fiscal years 2011 through 2012 because the unit funded cost per vehicle increased. The depot carried over about $209 million in work orders into fiscal year 2013. A photo of a Mine Resistant Ambush Protected vehicle that is being converted to an RCV is shown in figure 2.
This resulted in almost all the fiscal year 2011 funds carrying over on these workloads at the end of fiscal year 2011 and funds on these orders continuing to carry over at the end of fiscal year 2012. Table 7 shows the workload, description of the parts ordered, date the first fiscal year 2011 order was accepted, contract award date for buying critical parts, and key contract terms. Work on the five fiscal year 2011 workloads shown above was delayed to fiscal year 2012 or 2013 because of the timing of the award of contracts for critical parts needed to complete the Army Industrial Operations work orders. For four of the five workloads, the award of the contracts to purchase parts occurred 1 or more years after the depot received the order to perform the work. Further, as illustrated in the following examples, work was delayed in fiscal year 2012 on some of these fiscal year 2011 orders because of the terms of the contracts—the contractor can only produce a certain number per month. In August 2011, Anniston accepted five orders totaling $44 million for the conversion of 20 M1A1 Abrams tanks to Assault Breacher Vehicles (ABV). In order to perform the work, Anniston had to remove the old turrets, fabricate new turrets, and convert the tank hulls to address the vehicles’ new function—to breach mine fields and barrier obstacles. Anniston officials told us that they could not begin work on the orders until they received material (government-furnished equipment) that was being procured by the program manager. However, the contract to procure the government-furnished equipment was not awarded until December 2011—fiscal year 2012—and the contractor could only produce three kits per month. Anniston began work on the ABV orders in May 2012 so that the work fell in line with the delivery of the government-furnished equipment. As a result, the first vehicle on the order was not completed until October 2012—fiscal year 2013—and the production of the vehicles at Anniston was limited to three per month to match the supplier’s ability to manufacture the needed kits. The depot carried over the entire amount into fiscal year 2012 and $25 million into fiscal year 2013. During fiscal year 2012, Anniston accepted five more ABV Army orders totaling $48.3 million to produce 22 ABVs and almost all the work—$46.3 million—carried over into fiscal year 2013. Similar to the orders accepted in fiscal year 2011, Anniston had to maintain a low monthly production quantity because one of the contractors could only deliver three material kits per month. A photo of an ABV is shown in figure 4. Letterkenny received four orders in the second quarter of fiscal year 2011 totaling $156 million for 17 Force Provider modules. Letterkenny procures equipment to fill approximately 43 of the 101 containers that make up each new build module. The 43 containers hold approximately 346 different types of new equipment comprising thousands of individual components. Letterkenny carried over almost the entire amount into fiscal year 2012—$156 million. In fiscal year 2012, Letterkenny accepted three more orders totaling $18 million for six new Force Providers. However, a contract was not awarded to buy containers until August 2012—over a year after receiving the first fiscal year 2011 order. Letterkenny received about 700 containers from August through December 2012. 
In the meantime, work essentially stopped on the new build program in July and August 2012 because of the lack of sufficient containers to complete 13 of the 17 modules on the fiscal year 2011 orders and 6 modules on the fiscal year 2012 orders. Without the containers, the depot could not complete work on the fiscal years 2011 and 2012 orders and reduce carryover on the orders. The depot carried over $79 million into fiscal year 2013 associated with the fiscal year 2011 and 2012 new build orders. A photo of a Force Provider is shown in figure 5. The work that Red River and Letterkenny performed on the HMMWV in fiscal years 2011 and 2012 outpaced the ability of the supply chain to provide the parts. In June 2011, Red River and Letterkenny accepted orders to overhaul 7,971 HMMWVs. At the end of fiscal year 2011, the amount of the orders was $839 million, and of that amount, $837 million carried over into fiscal year 2012. In order to perform the work on these vehicles, Letterkenny had to reestablish its production line since it was previously shut down because of the lack of HMMWV work. Further, both depots had to hire contractor personnel to staff the production line and establish a supply chain so that the depots could obtain the parts to perform the work. Both depots encountered problems with obtaining sufficient quantities of parts, such as doors, gunner protection kits, windshields, turret bearings, and half shafts, to perform the work. This parts problem was exacerbated at Red River when the depot went to a double shift on disassembling and assembling HMMWVs in April 2012. As a result of the parts shortage, at the end of fiscal year 2012, Red River and Letterkenny completed 767 of the vehicles ordered in fiscal year 2011. In addition, the depots assembled another 4,254 vehicles, but the vehicles were missing parts. The depots carried over $356 million of these orders into fiscal year 2013. A photo of a HMMWV is shown in figure 6. As discussed previously, the implementation of LMP changed the revenue recognition business rules and resulted in increased carryover in fiscal years 2011 and 2012. The point in time when revenue is recognized is important because when an Army Industrial Operations activity performs work, it earns revenue, thereby reducing carryover. As discussed earlier, the Army implemented LMP at two Industrial Operations activities in fiscal year 2009 and at 10 Industrial Operations activities in fiscal year 2011. While the Army could not determine the extent of the change on Industrial Operations’ carryover resulting from implementation of LMP, officials at Army headquarters, LCMCs, and the depots stated that they believed carryover increased because of the change in revenue recognition rules. For example, Corpus Christi reported in fiscal year 2011 that it accepted orders totaling about $455 million to repair and upgrade 57 Black Hawk helicopters. The established repair process time for the UH-60L Black Hawk averaged approximately 1 year in fiscal year 2011. The change in the business rules for recognizing revenue on parts and material because of the implementation of LMP in May 2009 caused revenue on the fiscal year 2011 orders to be recognized, or earned, in fiscal year 2012. Prior to the implementation of LMP, the depot’s business rules recognized revenue on parts and material on the Black Hawk when they were turned in to the supply system during the disassembly process and replacement parts were requisitioned and received—usually within the first 90 days. 
Under LMP, revenue for parts and material is not recognized until parts and material are brought to the assembly area for installation on the aircraft. Final assembly of the aircraft occurs about 266 days after acceptance of the orders. As a result, at least in part due to the change in revenue recognition rules under LMP, Corpus Christi documentation showed that at the end of fiscal year 2011 it carried over $425 million associated with these orders into the next fiscal year—93 percent of the order amounts. A photo of a Black Hawk helicopter is shown in figure 7. Because of the size of the carryover at the end of fiscal year 2011, the Army recognized that it needed to improve its management of carryover, and in April 2012, the Army formed a working group. The working group identified, among other things, that it was the Army Industrial Operations activities’ responsibility to ensure that they have the necessary resources for performing the work. The working group identified the following six elements that the Industrial Operations activities should review when receiving orders for work to be performed: Skilled labor. Are there adequate labor hours available with the required skills to execute the work? Parts. Is there sufficient stock on hand or in the supply pipeline to complete the program schedule on time? Tools and equipment. Are all required special tools, fixtures, jigs, and stands on hand or being acquired? Process. Is there approved technical data, including a defined scope of work, documented processes, and internal process capacity, available to complete the work? Requirements. Is there an understanding of the total Army requirements as well as the required depot production for the workload after considering back orders, average monthly demand, potential surge requirements, and alternate sources of repair? Funding. Is funding available, and at the required rate, to support production? Further, the working group determined that if the Industrial Operations activities determine that they cannot satisfy any of the six elements, the activities should be required to send orders to their management for review and approval. Also, for any areas for which resources are not available, the activities should be required to develop a plan on how they will resolve the issue, such as obtaining parts that are not in the supply system. To convey the working group’s results, the Army plans to issue two policy memos during fiscal year 2013. According to Army officials and documentation we reviewed, one policy memo will address (1) the LCMCs’ and activities’ responsibilities for acceptance of orders and performing the work (regardless of the appropriation funding the orders) and (2) the six elements discussed above. The second policy memo will address orders funded with procurement appropriations. According to Army officials, this second policy memo will discuss various aspects of procurement-funded orders, including different types of work such as pilot programs, prototypes, fabrication, and data requirements. According to the Army, this action should result in a better alignment of the work to the customer delivery schedule and prevent the acceptance of workloads that are not executable in a specific fiscal year. We agree that these actions are needed for orders placed with Industrial Operations. 
If properly implemented, the Army’s actions should help address our concerns that (1) the scope of work was not well defined and (2) parts were not available to perform the work. However, the Army has not issued the planned policy memos. The memos should contain specific timetables for implementing their planned actions and establish procedures to include steps to be followed by Industrial Operations in evaluating orders received from customers. The work that Army Industrial Operations performs supports combat readiness by restoring equipment to a level of combat capability commensurate with a unit’s future mission. Reliable budget information on Industrial Operations, including carryover information, is essential for Congress and DOD to perform their oversight responsibilities, including reviewing and making well-informed decisions on Industrial Operations budgets. The Army reported in its budgets to Congress that Industrial Operations’ adjusted carryover was under the allowable amount at the end of fiscal years 2011 and 2012. However, budget estimates for carryover were consistently less than the actual amounts each year from fiscal years 2006 through 2012 primarily because Industrial Operations underestimated new orders from customers, particularly procurement-funded orders. Budget estimates could be improved by addressing the major factors that caused variations between budgeted and actual amounts, including improved communication between customers and Industrial Operations. The Army recognized that it needed to improve the budgeting and management of carryover and formed a working group in April 2012. The working group identified a number of actions that are under way and planned to help remedy this situation. However, the Army has not yet implemented these planned actions and does not have a timetable for implementation. We recommend that the Secretary of Defense direct the Secretary of the Army to take the following three actions to improve the budgeting and management of Army Industrial Operations’ carryover: Issue the planned working group policy memos and establish a timetable for implementing these actions for improving the management of carryover. Implement the working group’s planned actions to improve the budgeting for new orders to be received by Army Industrial Operations. Establish procedures, including required steps, to be followed by Army Industrial Operations activities in evaluating orders received from customers to ensure that the activities have resources (such as parts and materials, skilled labor, tools, equipment, technical data, and funding) to perform the work. DOD provided written comments on a draft of this report. In its comments, which are reprinted in appendix II, DOD concurred with the three recommendations and cited actions planned or under way to address them. Specifically, DOD commented that by the end of fiscal year 2013, the Army will issue policy memorandums with actions to be implemented by the beginning of the second quarter of fiscal year 2014. The memorandums will address the following: (1) the responsibilities and criteria of the Industrial Operations activities to accept new orders whether budgeted or unbudgeted and (2) new procurement-funded orders to better align the work to customer delivery schedules. 
Further, DOD stated that the Army’s fiscal year 2014 budget guidance included direction for program managers to identify planned depot workload in procurement budgets beginning in the fiscal year 2014 President’s Budget cycle and that the same direction will be included in subsequent budget cycle guidance. Finally, DOD indicated that the Army will establish procedures to implement the policy memorandum on the acceptance of new orders beginning with the second quarter of fiscal year 2014. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine whether, and to what extent, Army Industrial Operations’ actual carryover exceeded the allowable amount of carryover from fiscal years 2006 through 2012, we obtained and analyzed Industrial Operations reports that contained information on actual carryover and the allowable amount of carryover for fiscal years 2006 through 2012. We analyzed carryover beginning with fiscal year 2006 because the Army’s fiscal year 2006 budget reported a consolidation of the Army Working Capital Fund’s depot maintenance and ordnance activity groups into Industrial Operations, making a comparison to prior fiscal years difficult. We met with responsible officials from Army headquarters and Army Materiel Command (AMC) to obtain their views on the causes for variances between actual carryover and the allowable amount. Further, we identified and analyzed any adjustments made by the Army that increased the allowable carryover amounts or reduced the amount of carryover. We reviewed the Department of Defense’s guidance for exceptions to the carryover policy and discussed any exceptions with Office of the Under Secretary of Defense (OUSD) (Comptroller) and Army headquarters officials to obtain explanations for the exceptions. To determine whether, and to what extent, Army Industrial Operations’ budget information on carryover from fiscal years 2006 through 2012 approximated actual information, and if not, whether the Army took actions to align the two, we obtained and analyzed Industrial Operations reports that contained information on budgeted and reported actual new orders, revenue, and actual carryover data for fiscal years 2006 through 2012. We also analyzed the new order data by the appropriations financing the orders to determine whether there were variances by appropriations for budgeted and reported actual new order amounts for the 7-year period. We met with responsible officials from Army headquarters and AMC to obtain their views on causes for variances between budgeted and reported actual new order, revenue, and carryover amounts. We also met with these officials to discuss actions the Army was taking to improve budgeting and management of carryover. 
To determine whether, and to what extent, Army Industrial Operations’ carryover increased during fiscal years 2011 and 2012, and causes for the carryover for those 2 fiscal years, we met with responsible officials from Army headquarters, AMC, the Office of the Assistant Secretary of the Army (Acquisition, Logistics and Technology), the Life Cycle Management Commands (LCMC), and four depots that had orders with high dollar amounts of fiscal year 2011 carryover to identify contributing factors that caused the carryover. We also performed walk-throughs of the Army’s Anniston, Corpus Christi, Letterkenny, and Red River depots’ operations to observe the work being performed by the depots and discussed with officials causes for workload carrying over from one fiscal year to the next. Further, to corroborate the information provided by Industrial Operations officials, we selected eight weapon system workloads with high dollar amounts of fiscal year 2011 carryover from four Army depots. The carryover associated with these workloads represented about 35 percent of Industrial Operations’ total carryover at the end of fiscal year 2011 and was one of the top five workloads with carryover at each depot. We followed up on the status of carryover on these eight workloads at the end of fiscal year 2012. We obtained and analyzed orders and amendments associated with these workloads and discussed the information in these documents with the depots to determine the causes for the carryover. We discussed the carryover information on these workloads with officials in the program management offices at the Office of the Assistant Secretary of the Army (Acquisition, Logistics and Technology) to determine their roles and responsibilities in providing orders to the depots and their impact on carryover. We also discussed and obtained documentation on the actions the Army is taking to better manage and reduce carryover. We obtained the financial and logistical data in this report from official budget documents and the Army’s logistical system. To assess the reliability of the data, we (1) reviewed and analyzed the factors used in calculating carryover for the completeness of the elements included in the calculation, (2) interviewed Army officials knowledgeable about the carryover data, (3) reviewed GAO reports on depot maintenance activities, and (4) reviewed customer orders submitted to Industrial Operations to determine whether they were adequately supported by documentation. In reviewing these orders, we obtained the status of the carryover at the end of the fiscal year. We also reviewed the Commander’s Critical Item Reports for fiscal years 2011 and 2012 that provide information on inventory items (spare parts) needed by the depots. In reviewing the reports, we determined whether needed inventory items were associated with customer orders that had carryover for the workloads that we reviewed. On the basis of procedures performed, we have concluded that these data were sufficiently reliable for the purposes of this report. 
We performed our work at the headquarters of the OUSD (Comptroller), the Office of the Assistant Secretary of the Army (Acquisition, Logistics and Technology), and the Office of the Assistant Secretary of the Army (Financial Management and Comptroller), Washington, D.C.; AMC, Huntsville, Alabama; the Aviation and Missile Command LCMC, Huntsville, Alabama; the Tank, Automotive and Armaments Command LCMC, Warren, Michigan; the Anniston Army Depot, Anniston, Alabama; the Corpus Christi Army Depot, Corpus Christi, Texas; the Letterkenny Army Depot, Chambersburg, Pennsylvania; the Red River Army Depot, Texarkana, Texas; and the Office of the Assistant Secretary of the Army (Acquisition, Logistics and Technology) at Huntsville, Alabama, and Warren, Michigan. We conducted this performance audit from May 2012 to June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Greg Pugnetti (Assistant Director), Steve Donahue, Keith McDaniel, and Hal Santarelli made key contributions to this report.
The 13 Army Industrial Operations activities support combat readiness by providing depot maintenance and ordnance services to keep Army units operating worldwide. To the extent that Industrial Operations does not complete work at year-end, the work and related funding will be carried over into the next fiscal year. Carryover is the reported dollar value of work that has been ordered and funded by customers but not completed by Industrial Operations at the end of the fiscal year. As requested, GAO reviewed issues related to Army Industrial Operations’ carryover. GAO’s objectives were to determine whether, and to what extent, Army Industrial Operations’ (1) actual carryover exceeded allowable carryover for fiscal years 2006 through 2012; (2) budget information on carryover approximated actual information for fiscal years 2006 through 2012, and if not, whether the Army took actions to align the two; and (3) carryover increased during fiscal years 2011 and 2012 and causes for the carryover. To address these objectives, GAO reviewed relevant carryover guidance, analyzed carryover and related data for Industrial Operations, and interviewed Army officials. From fiscal years 2006 through 2012, Army's Industrial Operations' actual carryover was under the allowable amounts in 5 of the 7 fiscal years. However, carryover more than doubled during that period, reaching a high of $5.8 billion in fiscal year 2011. Army officials stated that fiscal year 2011 was an abnormal year because Industrial Operations (1) received more orders than it had ever received--$7.5 billion in new orders--and (2) implemented a system called the Logistics Modernization Program (LMP) that changed the business rules for recognizing revenue and therefore resulted in carryover being higher than it would have been under the prior system. Army officials anticipate carryover decreasing in fiscal year 2013. According to the Army fiscal year 2014 budget, the Army expects carryover to be under $4 billion at the end of fiscal year 2013. The Army's budget estimates for carryover were less than the actual carryover amounts each year beginning in fiscal year 2006--at least $1.1 billion each year. GAO's analysis showed that the actual amounts of carryover exceeded budgeted amounts primarily because (1) the Army underestimated new orders to be received from customers for all 7 years reviewed, particularly with respect to procurement funded orders, and (2) for fiscal year 2011, Industrial Operations performed over $1 billion less work than budgeted because Army officials were unaware of the impact that LMP would have on revenue when developing the fiscal year 2011 budget. The Army is taking actions intended to better align the customers' budgets with Industrial Operations' budgets. Industrial Operations' carryover grew significantly in fiscal years 2011 and 2012 to represent about 12.7 and 9.5 months of work, respectively. GAO found three causes for the carryover: (1) the scope of requested work was not well defined, (2) parts were not available to perform the work, and (3) revenue recognition business rules were changed as part of the implementation of LMP. The Army formed a working group in April 2012 that identified actions to help reduce carryover. However, these actions have not been implemented and no timetable for implementation has been set. 
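The months-of-work figures cited above can be understood with a simple conversion. The report does not spell out the formula, but a common approach, and the one assumed in this sketch, is to divide year-end carryover by average monthly revenue; the annual revenue figure used below is hypothetical and chosen only so that the arithmetic reproduces the reported 12.7 months for fiscal year 2011.

```python
# Hedged sketch: converting carryover dollars into "months of work."
# Assumption (not stated in the report): months of work = year-end carryover
# divided by average monthly revenue earned by Industrial Operations.

def months_of_work(carryover: float, annual_revenue: float) -> float:
    """Return carryover expressed in months of revenue."""
    return carryover / (annual_revenue / 12.0)

# Illustrative check: with the reported $5.8 billion of fiscal year 2011
# carryover, an annual revenue of roughly $5.5 billion (a hypothetical figure)
# would yield about 12.7 months of work.
print(round(months_of_work(5.8e9, 5.48e9), 1))  # -> 12.7
```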
GAO is making three recommendations to the Department of Defense (DOD) that are aimed at implementing the planned actions identified by the Army's working group to improve the budgeting and management of carryover. DOD concurred with GAO's recommendations and cited related actions planned or under way.
The Public Safety Partnership and Community Policing Act of 1994, as amended, authorizes grants to states, units of local government, Indian tribal governments, other public and private entities, and multi-jurisdictional or regional consortia for a variety of community policing-related purposes. Among other things, this includes the hiring and rehiring of law enforcement officers for deployment in community policing and developing and implementing innovative programs to permit members of the community to assist law enforcement agencies in the prevention of crime in the community. The act also requires that grantees not supplant state and local funding, but rather use the federal funds for activities beyond what would have been available without a grant. To administer the grant funds authorized by the act, the Attorney General created the COPS Office in October 1994. Since 1994, the COPS Office has awarded roughly $14 billion to advance community policing through its various grant programs. The COPS Office defines community policing in its CHP applications and Grant Owner's Manual, issued annually, as "a philosophy that promotes organizational strategies, which support the systematic use of partnerships and problem-solving techniques, to proactively address the immediate conditions that give rise to public safety issues, such as crime, social disorder, and fear of crime." The CHP grant applications describe some of these terms:
- Community partnerships: collaborative partnerships between the law enforcement agency and the individuals and organizations they serve to both develop solutions to problems and increase trust in police.
- Organizational transformation: the alignment of organizational management, structure, personnel, and information systems to support community partnerships and proactive problem-solving efforts.
- Problem solving: the process of engaging in the proactive and systematic examination of identified problems to develop effective responses that are rigorously evaluated.
A characteristic of community policing is its emphasis on proactive policing, an approach aimed at preventing crime before it occurs, in contrast with traditional, reactive policing, which responds to crime after it has occurred. Figure 1 illustrates both approaches, and the figure's text is reproduced in appendix II for readers of printed copies. From fiscal years 2008 through 2012, the COPS Office managed 10 programs designed to advance community policing. As table 1 illustrates, these 10 programs provided funding to target crime issues, such as school violence, as well as to hire officers or develop crime-fighting technology, among other things. As table 1 indicates, CHP accounted for 68 percent of the funds awarded through the COPS Office's various grant programs. The program provides these funds over a 3-year term, on a reimbursable basis, meaning that the COPS Office approves grants for a specified number of officer hires or rehires—in cases where officers have previously been laid off—but provides the funding to the law enforcement agencies once these officers are onboard. Grantees must use the funds to hire or rehire additional officers for deployment in community policing or can redeploy a commensurate number of experienced locally funded officers to community policing after the entry-level officers are hired with CHP funds. 
By law, each year CHP funding must be split in such a way that the total grant funding awarded to each eligible state—meaning the sum of the grants awarded to applicants in that state—equals at least one-half of 1 percent of the total CHP funding appropriated by Congress for that year. At the same time, the law requires that CHP funding be evenly split between entities serving jurisdictions with populations exceeding 150,000 people and those serving jurisdictions with populations of 150,000 or fewer. Specific award provisions, such as salary and benefit caps per officer funded; grantee fund-matching requirements; and other nuances, including a recent emphasis on the hiring of veterans, have varied each year since the CHP’s first authorization, in 1994. For example, in 2008 and 2012, pursuant to the statutory requirements for the grant program, grantees were required to match the CHP award with at least 25 percent of local nonfederal funds and salary caps of $75,000 and $125,000, respectively, applied. However, under the American Recovery and Reinvestment Act (Recovery Act) these requirements did not apply in 2009, 2010, and 2011. Table 2 highlights changes in the CHP during the past 5 fiscal years that affected the amount of funding available to applicants as well as which applicants received funding. To select grantees for CHP, the COPS Office requires applicants to respond electronically to closed-end questions and provide a narrative description of the crime problems they are facing, among other things, in their grant applications. For example, one of the close-ended questions asks applicants to add a check mark if their agencies’ strategic plans include specific goals or objectives relating to community partnerships or problem solving. Another close-ended question provides response categories for applicants to select the ways in which their agencies share information with community members. According to COPS Office officials, in consultation with the Associate Attorney General and the Deputy Attorney General, they establish weights for (1) community policing questions, (2) questions pertaining to the applicants’ fiscal health, and (3) reported crime levels. They then score the applications and award funds to those applicants with the highest scores. For example, in fiscal years 2009 and 2010, fiscal health accounted for 50 percent, crime rates accounted for 35 percent, and community policing activities accounted for 15 percent of the total score. To monitor grantee performance, the COPS Office requires, as a term and condition of its grants, that grantees participate in grant-monitoring and -auditing activities, which can include programmatic and financial reviews of their funded activities. Accordingly, COPS Office officials stated that all grantees are required to submit quarterly progress reports that provide financial and programmatic information, such as their progress in implementing the community policing plan they described in their grant applications for utilizing CHP funds to advance community policing. According to the COPS Office, the goal of its monitoring is to assess grantees’ stewardship of federal funding, performance, innovation, and community policing best practices resulting from COPS Office funding. In addition, according to the COPS Office, because of the number of COPS Office grantees, the COPS Office selects a limited number of grants to monitor based upon a grantee’s level of risk. 
In addition to the size of the grant award, such risk factors include, but are not limited to, whether or not the grantee has prior federal grant experience, has submitted late progress reports or failed to submit them, or has requested grant extensions. As the interactive map in figure 2 illustrates, CHP grant awards were distributed throughout the United States from fiscal years 2008 through 2012. The interactive map can be accessed here: http://www.gao.gov/products/GAO-13-521. As figure 3 illustrates, 48 percent of the funding was awarded to grantees in six states—California, Florida, Michigan, New Jersey, Ohio, and Texas. Across all the states, grantees in California received the highest level of total CHP funding from 2008 through 2012. Specifically, total CHP awards in California equaled approximately $360 million, or more than 20 percent of the total CHP funding awarded. Officials from the COPS Office cited several factors that influenced the allocation of grant funds across the states and territories. In particular, officials pointed to the population-based statutory provision described previously, which requires the COPS Office to allocate 50 percent of available grant funding to jurisdictions with populations exceeding 150,000 and 50 percent to jurisdictions with populations of 150,000 or fewer. Officials noted that some states—for example, California—have more cities with populations exceeding 150,000 compared with other states. This enables a smaller number of states to compete for half of the total grant funding, while a greater number of states without cities of this size compete for the remaining half of the total. Further, these large cities tend to receive larger awards because they deploy comparatively more officers than smaller cities. Apart from a separate statutory provision, also described previously, which requires that each state receive at least one-half of 1 percent of the total CHP funding appropriated by Congress each year, COPS Office officials emphasized that a grantee's particular location is not prioritized over the application categories of community policing, crime data, and fiscal health. Regarding fiscal health, officials noted that certain states have been disproportionately affected by fiscal distress, a factor that is directly reflected in the fiscal health component of the CHP application. Finally, the number of law enforcement agencies—and thus potential applicants—varies across states, which contributes additional variation in how funding is ultimately distributed. For grantees awarded the same number of officers, differences were driven mainly by variation across grantees' respective entry-level officer salaries and benefits—the only costs CHP allows. However, this variation was more prominent during years when salary and benefit levels were not statutorily capped: 2009 through 2011. Thus, during the period 2009 through 2011, grantees with higher officer salary and benefit levels generally received more CHP funding relative to other CHP grantees to hire, rehire, or prevent layoffs for the same number of officers. For example, in fiscal year 2011, a grantee in California received a CHP award equivalent to its entry-level officer salary and benefits level of $150,753 per officer. In the same fiscal year, a grantee in Connecticut received a CHP award—also based on its entry-level officer salary and benefits—of $64,459 per officer per year. 
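The roughly 57 percent difference discussed in the next paragraph follows directly from these two per-officer amounts. A minimal check, using only the figures cited above:

```python
# Arithmetic behind the award comparison above, where each award equals the
# grantee's entry-level officer salary and benefits per officer funded.

ca_per_officer = 150_753  # California grantee, fiscal year 2011
ct_per_officer = 64_459   # Connecticut grantee, fiscal year 2011

difference = 1 - ct_per_officer / ca_per_officer
print(f"{difference:.0%}")  # -> 57%, i.e., the Connecticut grantee received
                            # about 57 percent less federal funding per officer
```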
As a result of these local variations in per officer cost, this particular Connecticut grantee received and used 57 percent less federal funding to support each officer it hired or rehired compared with its California counterpart in this example. According to COPS Office officials, geographical differences in the cost of living could partly contribute to wage differences. Additionally, the availability of state and local budgetary resources to support law enforcement salaries and benefits may have affected wages. Further, COPS Office officials stated that other factors unique to certain areas of the country could account for the wage disparity that drives CHP costs. For example, some agencies may participate in more expensive state retirement systems or may not be able to set wages that align with market conditions because of union labor contract obligations. Figure 4 displays the average annual CHP-funded officer salary and benefit levels, by state and territory, for awards made from fiscal years 2009 through 2011—the years in which CHP grant awards were not capped. In contrast, statutory salary and benefit caps were in place for fiscal years 2008 and 2012; thus during these years, each grantee was limited to receiving the same per officer maximum, irrespective of local differences in salary and benefit levels. Any officer-related expenses over and above the cap were the independent funding responsibility of the grantee and were not covered by CHP funding. As a result, less variation in award amounts occurred in fiscal years 2008 and 2012—when there was a cap—as compared with fiscal years 2009, 2010, and 2011—when there was no cap. According to DOJ officials, some additional variation in average award amounts occurred in 2012 as a result of the COPS Office exercising statutory authority allowing the COPS Office Director to waive the $125,000 salary and benefit cap, as well as the matching requirement, when awarding grants. In 2012, the COPS Office granted 41 local match-and-cap waivers out of the 248 applicants that had requested them. Our analysis shows that interest in CHP funding remains high with the $125,000 per officer cap. From fiscal years 2008 through 2012, the COPS Office received more requests for CHP funding in grant applications than it could accommodate. The cap for fiscal year 2013 is $125,000, and in the President's budget request for fiscal year 2014, the cap remains at $125,000. The CHP application solicits information from applicants in accordance with statute, but the COPS Office may realize benefits by revising the application to clarify for applicants that CHP-funded officers are required to be the personnel specifically engaged in the community policing activities described on the application. The statute authorizing COPS Office grant programs requires applications, including the CHP application, to gather information from applicants related to several items, including, but not limited to, the applicant's
- explanation of how the grant will be used to reorient its mission toward community policing or enhance its involvement in or commitment to community policing,
- specific public safety needs,
- inability to address the needs without federal assistance,
- plans for obtaining support at the conclusion of federal funding, and
- detailed implementation plan and long-term strategy that reflects consultation with community groups and appropriate private and public agencies.
The statute does not further specify the content of these items, particularly the content of the detailed implementation plan and long-term strategy. However, the 2012 CHP application requires applicants to provide related information, such as how applicants plan to reorient their mission toward or enhance their involvement in or commitment to community policing. Specifically, COPS Office officials reported that the Law Enforcement & Community Policing Strategy section of the application is intended to obtain information from applicants to address the requirements of a detailed implementation plan and long-term strategy. For instance, this section requires applicants to include information on the crime problem that will be addressed with grant funds, the information sources that will be used to improve the understanding of the problem and determine whether the response was effective, and the partnerships the agency will form. The application further requires applicants to indicate the community policing activities their entire agency was currently engaged in as well as those activities their organization intended to enhance or initiate with CHP funds. The fiscal year 2012 application notes that the COPS Office recognizes that CHP-funded officers will engage in a variety of community policing activities and strategies, including participating in some or all aspects of the applicant's implementation plan. However, the application does not specifically ask applicants to explain how CHP-funded officers will be deployed in community policing—the primary purpose of the CHP program as expressed by the statute. For instance, the application does not ask applicants to provide information on what community policing activities, such as attending community meetings, CHP-funded officers will be undertaking. The Domestic Working Group's guide for improving grant accountability provides best practices for designing grant applications, including specific elements that are recommended to be addressed in grant applications. The Domestic Working Group, composed of federal government inspectors general and chaired by the Comptroller General of the United States, created the guide to share useful and innovative grant management approaches with government executives at the federal, state, and local levels. Specifically, the Domestic Working Group's guide recommends that agencies require applicants to submit a detailed narrative as evidence of proper work planning to obtain and evaluate information from applicants when making award decisions, and include information to link grant activities with results, which is often referred to as logic modeling. As part of the logic model approach, applicants should, among other efforts, identify the need for funding, their approach to using the funds, specific activities that are crucial to the success of the program, and desired objectives and benefits anticipated—and then logically connect these efforts to a plan for measuring results. We found through our analysis of a systematic random sample of 103 CHP-funded applications for fiscal years 2010, 2011, and 2012 that the application could be enhanced by applying these best practices to clarify for applicants that CHP-funded officers are required to be the personnel specifically engaged in the community policing activities described on the application. 
According to our analysis of the application sample, we estimate that less than 20 percent of the applications funded in these years contained evidence showing how additional officers would be deployed in support of community policing. Several of the questions in the 2010, 2011, and 2012 applications ask for information on the agency-wide actions grantees plan to undertake to facilitate community policing. COPS Office officials reported that individual CHP-funded officers are expected to implement the items indicated in the implementation plan. These actions could include implementing recruitment and hiring practices that reflect an orientation toward problem solving and community engagement, enhancing information technology systems, and implementing organizational performance measurement systems that include community policing metrics. COPS Office officials agreed that the application could be clearer by stating the requirement that CHP-funded officers should be the ones who are specifically engaged in CHP-funded activities. Revising the application to clarify for applicants that CHP-funded officers are required to be the personnel specifically engaged in the community policing activities described on the application, consistent with best practices, would better position the COPS Office to ascertain from applicants how these particular officers' activities would advance community policing. To help ensure that grantees are implementing the activities and meeting the financial requirements they committed to in their respective applications, the COPS Office is required to monitor at least 10 percent of its open, active grant funding annually. According to the Domestic Working Group's guide for improving grant accountability, it is important that agencies identify, prioritize, and manage potential at-risk grantees. Consistent with this best practice, and to fulfill its statutory monitoring requirement, the COPS Office uses a risk-based approach to select which grantees to monitor and visit, using its Grant Assessment Tool (GAT) to assess grantee risk. The GAT uses criteria to generate individual risk scores, as illustrated in table 3, and a final, comprehensive risk assessment score is computed for each grantee. Once the monitors review what the GAT has generated, they are to develop a plan for monitoring those grantees with the highest risk scores. According to the COPS Office's monitoring standards—a guide that describes the responsibilities of grant monitors—the COPS Office monitors these grantees in a number of ways, including, but not limited to, on-site monitoring, office-based grant reviews, and complaint and legal reviews. During on-site monitoring visits, monitors are required by the monitoring standards to review and compare the proposed projects and activities contained in grant applications and quarterly progress and financial status reports with the grantees' performance and progress in carrying them out. Upon completion of their visits, monitors are required to document their observations and assessments in a grant-monitoring report and cite any grant compliance issues in categories including community policing, retention, allowable costs, and the source and amount of matching funds. 
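Table 3 itself is not reproduced here, but the GAT's basic mechanics described above (individual risk scores rolled up into one comprehensive score used to prioritize grantees for monitoring) can be sketched as follows. The factor names are drawn from the risk factors mentioned earlier in this report; the weights and the 0-to-1 scoring scale are assumptions made only for illustration and do not reflect the COPS Office's actual criteria.

```python
# Hedged sketch of a GAT-style composite risk score. Factor names come from
# risk factors the report mentions; weights and the 0-1 scale are assumptions.

RISK_WEIGHTS = {
    "award_size": 0.4,
    "no_prior_federal_grant_experience": 0.2,
    "late_or_missing_progress_reports": 0.3,
    "requested_grant_extensions": 0.1,
}

def composite_risk(scores: dict) -> float:
    """Combine individual risk scores (0 = low, 1 = high) into one score."""
    return sum(RISK_WEIGHTS[k] * scores.get(k, 0.0) for k in RISK_WEIGHTS)

grantees = {
    "grantee_A": {"award_size": 0.9, "late_or_missing_progress_reports": 1.0},
    "grantee_B": {"award_size": 0.2, "requested_grant_extensions": 1.0},
}
ranked = sorted(grantees, key=lambda g: composite_risk(grantees[g]), reverse=True)
print(ranked)  # highest-risk grantees first, i.e., monitoring priorities
```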
Office-based grant reviews, which are used to provide detailed monitoring for those grantees that are not selected for on-site monitoring using the GAT, are similar to on-site reviews in that monitors are required to review grantee documentation, including the application, and follow up directly with the grantee to collect any additional information and documentation on how grantees are using funds. This type of monitoring, according to COPS Office monitoring standards, allows the COPS Office to monitor a larger number of grantees than would be possible through on-site monitoring alone. In addition to these monitoring methods, the COPS Office also uses complaint reviews to investigate internal and external complaints, such as those raised by the media and citizens, regarding grantee noncompliance. The COPS Office's Legal Division also conducts additional monitoring related to, among other things, supplanting—using federal funds to replace state or local funds—and DOJ IG investigations of grantees involving fraud. According to the monitoring standards, all of these monitoring efforts help the COPS Office determine whether the grantees are complying with the requirements of the grant and whether funds are being spent properly. Accordingly, the COPS Office uses these various monitoring methods to identify any grant violations, such as not using the funds to hire officers for deployment in community policing, and recommend resolutions to these violations. In cases where the grantee has failed to remedy violations identified by the monitor, the grantee, according to the monitoring standards, may be faced with adverse current and future determinations regarding its suitability for receiving grant funds, the suspension or termination of grant funds, repayment of expended grant funds to the COPS Office, and even criminal liability in the event of fraud. The authorizing statute for the COPS grant programs, including CHP, requires that grantees not supplant state and local funding, but rather use the federal funds for activities beyond what would have been available without a grant. As a condition of accepting COPS Office funding, grant applicants must certify they will use grant funding only to increase the total amount of funds available for the hiring or rehiring of law enforcement officers and not supplant state and local funding. To identify supplanting risks, the COPS Office developed standards for monitors to use in assessing the potential for supplanting, which is one of the compliance issues monitors are required to evaluate. Monitors can use these supplanting standards in all forms of monitoring, including on-site, office-based desk, complaint, and Legal Division reviews. The standards contain clear guidance for identifying potential noncompliance with supplanting standards. For example, according to the COPS Office's grant monitoring standards and as illustrated in table 4, there are four major patterns of risk associated with supplanting. The CHP supplanting standards also require grant monitors to conduct an analysis and review of supporting evidence to ensure grantees have not engaged in supplanting. Some of the acceptable documentation, according to the COPS Office, can be
- budget documents that can show the replacement rate of officers,
- documentation that shows the grantee has experienced fiscal distress, or
- city council minutes showing there has been difficulty in local hiring. 
The standards do not specify how monitors should document their analysis and conclusions about potential supplanting issues in the monitoring reports they prepare after site visits. According to COPS Office officials, these reports are a critical component of the monitoring process. The COPS Monitoring Operations Manual—a technical guide for monitoring—requires monitors to identify and provide relevant details in monitoring reports where supplanting is identified. However, it does not require monitors to document their analysis and conclusions in instances in which the determination is ultimately made that supplanting has not occurred. As a result, it may be unclear how the monitors assessed these cases to reach conclusions that supplanting had not occurred in these instances. In our review of the monitoring reports for 39 of 55 grantees that had already begun to use CHP funds and were visited by grant monitors, we found 21 grantees for which there was a pattern of risk for potential supplanting. For 16 of these 21, we concluded that the site visit reports clearly documented the analysis and conclusions reached by the monitors regarding supplanting issues. For example, one monitor noted in a site visit report that potential supplanting existed because a police department failed to fill local vacancies at the same time it hired officers using COPS grant funds. The monitor determined that there was no violation for supplanting based on information provided during and after the site visit demonstrating that the department was taking active and timely steps to fill local vacancies and that the department was prohibited from filling vacancies earlier because of a town-wide spending freeze. The site visit report listed the documentation that the monitor reviewed in making a determination that there was no violation, including copies of budget documents demonstrating town-wide cuts in personnel and the town's fiscal distress, a memorandum implementing a town-wide spending freeze, an online job posting for the vacant officer positions, and the police department's request to the town for authorization to fill the vacancies. However, for the remaining 5 of the 21 grantees, we found that the monitors did not document their assessments of supplanting issues, and it was not clear how they reached conclusions regarding potential supplanting. The reports for these 5 grantees indicated that there were delays in filling vacancies for locally funded officer positions at the time when officers were hired for CHP-funded positions. For example, in one report, the data showed that there were over 50 vacant locally funded positions in fiscal year 2010 that continued to be unfilled in fiscal year 2011, when the same department hired 27 officers with COPS funding. When the monitor visited the department in August 2012, there were still 59 locally funded vacancies. The site visit report noted that the department anticipated filling the vacancies in November 2012 and did not discuss any supplanting compliance issues. The report did not provide details on documentation reviewed or other information obtained to demonstrate the analysis performed or the basis for determinations on potential supplanting issues. It was therefore unclear from the report whether or how the monitor had assessed potential supplanting issues. In following up with the COPS Office on this case, officials provided us with additional evidence that the monitor had assessed supplanting and determined it had not occurred. 
Specifically, the monitor obtained documentation from the police department supporting that the department had completed recruitment for the positions and was in the middle of the applicant selection process. COPS Office officials also provided us with additional information that monitors had obtained on site visits for the other 4 cases that was not included in the monitoring reports but supported that the monitors had assessed potential supplanting issues and determined supplanting had not occurred. Including this information in the site visit reports would document that supplanting issues were properly assessed in accordance with the monitoring standards. Given the statutory prohibition against supplanting and the importance of documentation for agency accountability, monitors should consistently document the results of their supplanting analysis in the on-site monitoring reports. According to Standards for Internal Control in the Federal Government, the documentation of agency activities is a key element of accountability for decisions. By enhancing the COPS Office's monitoring guidance, such as its standards or operations manual, to require monitors to document the results of their supplanting analysis in the on-site monitoring reports for instances where the determination is made that no supplanting has occurred, the COPS Office could be better positioned to ensure that monitors are consistently assessing supplanting and that CHP funding is supplementing and not replacing state and local funding. Additionally, ensuring that monitors consistently document the results of their supplanting analysis would increase transparency and enhance oversight of CHP funds. The COPS Office awarded approximately $1.7 billion in grant funds from fiscal years 2008 through 2012 for hiring officers to advance community policing. To ensure that grantees are using the funds as intended by the program, the COPS Office's CHP application collects information required by statute, including information on how applicants will implement community policing on an agency-wide scale. However, the application does not require prospective grantees to provide information on the specific community policing activities of CHP-funded officers or a commensurate number of experienced locally funded officers. Revising the application to clarify for applicants that CHP-funded officers are required to be the personnel specifically engaged in the community policing activities described on the application, consistent with best practices, would better position the COPS Office to ascertain from applicants how these particular officers' activities would advance community policing. In addition, we found that while the COPS Office has developed standards and an operations manual for monitors to use in assessing the potential for supplanting, the COPS Office's monitoring standards and operations manual do not require monitors to document their analysis and conclusions in instances in which the determination is ultimately made that supplanting has not occurred. We found that for 5 of the 21 grantees that we identified as at risk for supplanting, the monitors included information in the monitoring reports on supplanting but did not document their assessments of the supplanting issues. 
Enhancing the COPS Office's monitoring guidance, such as its standards or operations manual, to require monitors to document the results of their supplanting analysis in the on-site monitoring reports for instances where the determination is made that no supplanting has occurred could better position the COPS Office to ensure that monitors are consistently assessing supplanting and that CHP funding is supplementing and not replacing state and local funding. To further enhance the accountability of the CHP, the Attorney General should direct the COPS Office Director to take the following two actions:
1. Revise the CHP application to clarify for applicants that CHP-funded officers are required to be the personnel specifically engaged in the community policing activities described on the application.
2. Enhance the COPS Office's guidance, such as its monitoring standards or operations manual, by requiring monitors to document the results of their supplanting analysis in on-site monitoring reports for instances where the determination is made that no supplanting has occurred.
We provided a draft of this report to DOJ and the COPS Office for review and comment. The COPS Office provided written comments on the draft report, which are summarized below and reproduced in full in appendix III. The COPS Office concurred with the two recommendations in the report and identified actions planned to implement the recommendations. The COPS Office also discussed concerns it had with the discussion of the grant application and the wording of the second recommendation in the draft report. The COPS Office concurred with the first recommendation, that the COPS Office revise the CHP application to clarify for applicants that CHP-funded officers are required to be the personnel specifically engaged in the community policing activities described on the application. The COPS Office stated that, in response to the recommendation, it clarified in the current CHP Grant Owner's Manual and will clarify in subsequent years' CHP applications that the questions in the grant application apply not only to the agency overall but to the CHP-funded officers as well. Once the COPS Office has taken action to fully implement this recommendation, it will be better positioned to ascertain from applicants that officers' activities would advance community policing. While the COPS Office concurred with the recommendation, it raised concerns in its letter about how we characterized the way the COPS Office collects information via the CHP application on the activities of CHP-funded officers. Specifically, the COPS Office disagreed with the statements that (1) the 2012 CHP application does not specifically ask applicants to explain how CHP-funded officers will be deployed in community policing and that (2) less than 20 percent of the applications funded in 2010, 2011, and 2012 contained evidence showing how additional CHP-funded officers would be deployed to community policing. According to the COPS Office, the CHP application contains over 70 individual close-ended questions and 3 narrative questions regarding activities that CHP-funded officers and agencies will commit to as a requirement of the grant. The report acknowledges that the COPS Office collects an array of information from applicants on the agency-wide activities they plan to conduct. 
However, our analysis—including the analysis of a systematic random sample of CHP-funded applications—was intended to demonstrate the extent to which CHP applications contained information about how additional officers would be deployed in community policing, given that the application does not specifically ask applicants to describe which community policing activities individual CHP-funded officers will undertake. Revising the application to clarify for applicants that CHP-funded officers are required to be the personnel specifically engaged in the community policing activities described on the application, consistent with best practices, would better position the COPS Office to ascertain from applicants how these particular officers' activities would advance community policing. The COPS Office also disagreed with a statement in the draft report that the COPS Office stated that there could be benefits to revising the application to more clearly delineate the activities in which CHP-funded officers should be engaged. Rather, the COPS Office stated in its letter that the application could be clearer by stating that the office is requiring that COPS-funded officers should be the ones who are specifically engaged in CHP-funded activities. We modified the recommendation and related language in the report to reflect this point. We provided the modified recommendation language to the COPS Office, and in a September 19 e-mail from CHP program officials, the office concurred. The COPS Office concurred with the second recommendation to enhance the COPS Office's monitoring guidance by requiring monitors to document the results of their supplanting analysis in on-site monitoring reports for instances where the determination is made that no supplanting has occurred. The office stated that its monitoring practices include checks and balances on the review of grantee documents and guidance for documenting analysis results when supplanting is identified. While the COPS Office concurred with the recommendation, it noted in its letter that our recommendation as originally worded implied that the existing monitoring guidance does not require grant monitors to document the results of their supplanting analysis for cases in which supplanting has been identified. Since the COPS Monitoring Operations Manual requires monitors to identify and provide relevant details in the monitoring reports regarding instances in which supplanting has occurred, the COPS Office requested that we amend the recommendation with language stating that the monitoring reports be enhanced by ensuring that monitors document the results of their supplanting analysis in instances that do not give rise to supplanting concerns. We adjusted the recommendation and related language accordingly to clarify this point. Further, in response to the recommendation, the COPS Office outlined initiatives it has implemented to modify its COPS Monitoring Operations Manual that reflect changes to data collection tools and instructions on how monitors should document their supplanting analysis in the monitoring reports, including instances in which monitors determine that no supplanting has occurred. These actions, if implemented effectively, should address the intent of the recommendation. We are sending copies of this report to the Assistant Attorney General for Administration and interested congressional committees. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. This report answers the following questions: (1) From fiscal years 2008 through 2012, in what areas of the country was the Community Oriented Policing Services (COPS) Hiring Program (CHP) funding disbursed and to what extent did award amounts vary during this period? (2) To what extent does the COPS Office’s grant application collect information about how applicants plan to use CHP-funded officers to advance community policing? (3) To what extent does the COPS Office’s monitoring process assess whether grantees are using funds to advance community policing? To address the first question, we reviewed the history of the COPS Office’s programs and related award data from the most recent 5 fiscal years—2008 through 2012—and confirmed that CHP received the largest share of award funds as compared with other programs administered by the COPS Office during this period. We also analyzed COPS Office documentation, such as Grant Owner’s Manuals and COPS Office website materials, to learn about each COPS program’s origin and emphasis. To determine which areas of the United States have received CHP funding, we analyzed the allocation of CHP grant awards—and the numbers of officers funded—by state and mapped the CHP grant award data. Additionally, we analyzed CHP award lists for fiscal years 2008 through 2012 to determine the average CHP entry-level officer salary and benefits by state, and assessed them for variation. To assess the reliability of data used in our review, we reviewed system tests that the COPS Office conducts periodically to ensure data reliability and interviewed COPS Office officials about the integrity of the data they provided to us. We determined that the data were sufficiently reliable for the purposes of our report. We also interviewed COPS Office officials responsible for managing the CHP program to verify grant program information, determine factors that could account for variations in grant award amounts, and learn about other administrative aspects of the program. To address the second question, we assessed CHP documentation, including CHP grant applications and Grant Owner’s Manuals to determine how the COPS Office’s application collects information about how applicants plan to use CHP-funded officers to advance community policing. We examined the CHP authorizing statute and best practices for grants management identified in the Domestic Working Group Grant Accountability Project’s Guide to Opportunities for Improving Grant Accountability and compared the criteria outlining promising practices for grant applications, such as designing applications to gather sufficient information for making award decisions, with the COPS Office’s approaches for designing the CHP application. To better understand these approaches, we reviewed the CHP application design, allowable activities, and the COPS Office’s criteria for selecting awardees. Specifically, we used elements of the CHP authorizing statute and key best practices for grant management to develop a data collection instrument we used to review all applications from a sample of 103 out of the 841 grants awarded during fiscal years 2010, 2011, and 2012. 
We chose to evaluate applications from these 3 fiscal years to provide an assessment of the most recent fiscal years’ application design. Using the data collection instrument, we reviewed the application sample to determine, among other items, the applications’ level of detail in describing applicants’ planned use of funds. Each application was first reviewed by an analyst, and the information recorded in each completed instrument was then verified by a second analyst. To ensure a selection of grants representative of the dollar amount distribution in the population of 841 awarded grants, we sorted the population by the grant dollar amount and then selected a systematic random sample of 104 grants. During our review, we discovered that 1 grant in our sample was out of scope because the grantee did not accept the grant funds and was no longer considered an active grantee. We reviewed the remaining 103 grant applications in our sample and treated them as a simple random sample for purposes of estimation. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 9 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. With the finite population correction factor, the precision for estimates drawn from this sample is no greater than plus or minus 9 percentage points at the 95 percent level of confidence. To ensure the reliability of data used in our review, we interviewed COPS Office officials about the integrity of the data they provided to us and reviewed system tests that the COPS Office conducts periodically to ensure data reliability. We also ensured that the electronic data CHP applicants submitted could not be altered once submitted to the COPS Office. We determined that the data were sufficiently reliable for the purposes of our report. We also conducted interviews with a nonprobability sample of 20 CHP grantees in California, Florida, Illinois, Massachusetts, Texas, and Wisconsin. We selected these grantees from five metropolitan areas according to criteria that included the amount of funding received by the grantees, the concentration of grantees within a metropolitan area to maximize the amount of information we could collect, and the population size served by grantees. The results of these interviews are not generalizable to all grantees, but provided insight, among other things, into how CHP grant funds are used locally to advance community policing. Finally, we interviewed COPS Office officials who oversee the application process to gather further information on the design of the application, including how the applications were scored. To address the third question, we obtained and examined the monitoring reports for 55 grantees awarded CHP grants from the 3 most recent fiscal years—2010 through 2012—with completed, available monitoring reports. The COPS Office produced these reports following the on-site monitoring visits it conducts with grantees to assess their progress and identify any compliance issues for CHP grants. Specifically, we developed a data collection instrument to review the monitoring reports to assess the extent to which the COPS Office identified and documented supplanting. 
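The plus or minus 9 percentage point precision reported above for the application sample can be closely reproduced from the sample size of 103 and the population of 841 awarded grants. The sketch below applies the standard 95 percent confidence margin-of-error formula with the finite population correction; the 50 percent proportion is the conservative worst case and is an assumption, since the report does not state which proportion underlies its estimate.

```python
# Hedged check of the reported sampling precision: 95 percent confidence
# margin of error with the finite population correction. The 50 percent
# proportion is a worst-case assumption, not a figure taken from the report.
import math

def margin_of_error(n: int, N: int, p: float = 0.5, z: float = 1.96) -> float:
    fpc = math.sqrt((N - n) / (N - 1))           # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc  # half-width of the 95% interval

print(round(margin_of_error(n=103, N=841) * 100, 1))  # roughly 9 percentage points
```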
We used the questions on the data collection instrument to make these assessments. Each report was first reviewed by an analyst, and the information recorded in each completed instrument was then verified by a second analyst. We then compared the COPS Office's monitoring practices with best practices identified in the Domestic Working Group Grant Accountability Project's Guide to Opportunities for Improving Grant Accountability; Standards for Internal Control in the Federal Government; and COPS Office guidance, such as its grant-monitoring standards. For context, we also considered findings from prior GAO work on program evaluation and the COPS Office's management of its grant programs. To understand how the COPS Office assesses the potential for supplanting, we used COPS Office guidance on determining supplanting in reviewing the monitoring reports to identify grantees at risk of using CHP funds to replace state and local funds. Additionally, we assessed how the monitors addressed and documented instances in which grantees were vulnerable to supplanting, such as by collecting and evaluating additional budget documentation from grantees. During the site visits, we interviewed CHP grantees about, among other topics, the community policing strategies they employed with CHP funding and whether their agencies had increased the number of officers dedicated to community policing relative to the number of officers hired with CHP funding. We also interviewed COPS Office officials who oversee the monitoring process about their monitoring practices and discussed with officials how monitoring provided relevant context to what grantees and the COPS Office considered progress. We also obtained the perspective of the COPS Office on the performance of its grant monitors in identifying and documenting instances of potential supplanting in the reports for on-site monitoring. We conducted this performance audit from August 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix corresponds with figure 1 in the report, which is an interactive figure. Table 5 contains the interactive figure's text for readers of print copies of this report. In addition to the contact named above, key contributors to this report were Joy Booth, Assistant Director; Glenn Davis, Assistant Director; David Alexander; Carl Barden; Christine Hanson; Eric Hauswirth; Julian King; Linda Miller; Christian Montz; Robin Nye; Brian Schwartz; and Janet Temko.
Since its 1994 inception, the U.S. Department of Justice's (DOJ) COPS Office has awarded roughly $14 billion in grants to support the advancement of community policing, which is a policing approach that proactively addresses the conditions that give rise to public safety issues, such as crime and social disorder. GAO was asked to review key grant management practices within the COPS Office. This report focuses on the largest of its programs--CHP, which awards grants to law enforcement agencies to hire law enforcement officers, rehire officers who have been laid off, or prevent scheduled officer layoffs. This report addresses: (1) From fiscal years 2008 through 2012, in what areas of the country was CHP funding disbursed and to what extent did award amounts vary during this period? (2) To what extent does the COPS Office's grant application collect information about how applicants plan to use CHP-funded officers to advance community policing? (3) To what extent does the COPS Office's monitoring process assess whether grantees are using funds to advance community policing? GAO examined budget data and monitoring reports for 55 grantees, interviewed agency officials, and evaluated CHP applications from a systematic random sample of 103 CHP grants awarded from fiscal years 2010 through 2012.

Nearly half of the Office of Community Oriented Policing Services (COPS) Hiring Program (CHP) funding from fiscal years 2008 through 2012 was awarded to grantees in six states, and award amounts varied considerably in certain years. During this period, state, county, and city law enforcement agencies nationwide received CHP grant awards to hire or rehire officers to advance community policing, with 48 percent of the funds awarded to grantees in California, Florida, Michigan, New Jersey, Ohio, and Texas. For grantees awarded the same number of officers, differences in award amounts were driven mainly by variation in grantees' entry-level officer salaries and benefits. Variation in grantee award amounts was more pronounced during 2009, 2010, and 2011, when salary and benefit levels were not statutorily capped, and grantees with higher officer salary and benefit levels generally received more CHP funding relative to other CHP grantees for the same number of officers. The COPS Office's CHP application collects information required by statute from grant applicants, but could be further enhanced by revising the application to clarify for applicants that CHP-funded officers are required to be the personnel specifically engaged in the community policing activities described on the application. The application asks applicants to provide information on how they plan to implement community policing agency-wide, but does not specifically ask applicants to explain how CHP-funded officers will be deployed in community policing--the primary statutory purpose of the CHP program. According to GAO analysis of a systematic random sample of 103 CHP-funded applications, GAO estimated that less than 20 percent of the applications funded in 2010, 2011, and 2012 contained evidence showing how additional officers would be deployed in community policing. The Domestic Working Group's guide for grant accountability recommends that agencies require applicants to include information describing, among other things, their approach for using the funds and the specific activities that are crucial to the success of the program.
Revising the application to clarify for applicants that CHP-funded officers are required to be the personnel specifically engaged in the community policing activities described on the application, consistent with best practices, would better position the COPS Office to ascertain from applicants how these particular officers' activities would advance community policing. The COPS Office's risk-based approach to monitoring assesses how grantees are using funds to advance community policing, but could be improved through additional monitoring guidance. The authorizing statute for the COPS grant programs contains a prohibition against supplanting--using federal funds to replace state or local funds. The COPS Office developed standards and an operations manual for monitors to use in assessing the potential for supplanting. For 5 of the 21 grantees at risk for supplanting, GAO found that the monitors did not document their analyses of supplanting, and it was not clear how they reached conclusions regarding supplanting. The manual requires monitors to document their supplanting analysis in instances in which supplanting is identified, but does not have this requirement when monitors conclude that supplanting has not occurred. According to internal control standards, the documentation of agency activities is a key element of accountability for decisions. By enhancing the COPS Office's monitoring guidance to require monitors to document their results where the determination is made that supplanting has not occurred, the COPS Office may be better positioned to ensure that monitors are consistently assessing supplanting and that CHP funding is supplementing and not replacing state and local funding. GAO recommends that the COPS Office revise and clarify the CHP application and enhance guidance to require monitors to document the results of their non-supplanting analyses in monitoring reports. The COPS Office generally concurred with the recommendations and described actions to address them.
Federal agencies are increasingly expected to demonstrate how their activities contribute to achieving agency or governmentwide goals. The Government Performance and Results Act of 1993 requires federal agencies to report annually on their progress in achieving their agency and program goals. In spring 2002, the Office of Management and Budget (OMB) launched an effort as part of the President’s Budget and Performance Integration Management Initiative to highlight what is known about program results. Formal effectiveness ratings for 20 percent of federal programs will initially be conducted under the executive budget formulation process for fiscal year 2004. However, agencies have had difficulty assessing outcomes that are not quickly achieved or readily observed or over which they have little control. One type of program whose effectiveness is difficult to assess attempts to achieve social or environmental outcomes by informing or persuading others to take actions that are believed to lead to those outcomes. Examples are media campaigns to encourage health-promoting behavior and instruction in adopting practices to reduce environmental pollution. Their effectiveness can be difficult to evaluate because their success depends on a chain of steps in which changes in knowledge, awareness, and individual behavior lead to changed health or environmental conditions. These programs are expected to achieve their goals in the following ways: (1) the program provides information about a particular problem, why it is important, and how the audience can act to prevent or mitigate it; (2) the audience hears the message, gains knowledge, and changes its attitude about the problem and the need to act; (3) the audience changes its behavior and adopts more effective or healthful practices; and (4) the changed behavior leads to improved social, health, or environmental outcomes for the audience individually and, in the aggregate, for the population or system. How this process works can be viewed from different perspectives. When the process is viewed as persuasive communication, the characteristics of the person who presents the message, the message itself, and the way it is conveyed are expected to influence how the audience responds to and accepts the message. Another perspective sees the targeting of audience beliefs as an important factor in motivating change. Still another perspective sees behavior change as a series of steps—increasing awareness, contemplating change, forming an intention to change, actually changing, and maintaining changed behavior. Some programs assume the need for some, but not all, of these steps and assume that behavior change is not a linear or sequential process. Thus, programs operate differently, reflecting different assumptions about what fosters or impedes the desired outcome or desired behavior change. Some programs, for example, combine information activities with regulatory enforcement or other activities to address factors that are deemed critical to enabling change or reinforcing the program’s message. A program logic model is an evaluation tool used to describe a program’s components and desired results and explain the strategy—or logic—by which the program is expected to achieve its goals. By specifying the program’s theory of what is expected at each step, a logic model can help evaluators define measures of the program’s progress toward its ultimate goals. Figure 1 is a simplified logic model for two types of generic information dissemination programs.
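One way to make such a logic model operational is to pair each stage with candidate measures, as in the brief sketch below. The stage names follow the generic model just described; the example measures are illustrative assumptions rather than measures used by any of the agencies discussed in this report.

```python
# A minimal sketch of a logic model for a generic information dissemination program,
# pairing each stage with example measures. The measures shown are hypothetical
# illustrations, not the measures used by the programs discussed in this report.
logic_model = [
    ("Activities",            "workshops held, advertisements aired, guides distributed"),
    ("Short-term outcomes",   "audience reached; changes in knowledge, attitudes, intentions"),
    ("Intermediate outcomes", "adoption of recommended practices; changed behavior"),
    ("Long-term outcomes",    "improved social, health, or environmental conditions"),
]

for stage, example_measures in logic_model:
    print(f"{stage:22} -> {example_measures}")
```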
A program evaluation is a systematic study using objective measures to analyze how well a program is working. An evaluation that examines how a program was implemented and whether it achieved its short-term and intermediate results can provide important information about why a program did or did not achieve its long-term results. Scientific research methods can help establish a causal connection between program activities and outcomes and can isolate the program’s contribution to them. Evaluating the effectiveness of information dissemination programs entails answering several questions about the different stages of the logic model. For short-term outcomes: did the audience consider the message credible and worth considering, and were there changes in audience knowledge, attitudes, and intentions to change behavior? For intermediate outcomes: did the audience’s behavior change? For long-term outcomes: did the desired social, health, or environmental conditions come about? To identify ways that agencies can evaluate how their information dissemination programs contribute to their goals, we conducted case studies of how five agencies evaluate their media campaign or instructional programs. To select the cases, we reviewed departmental and agency performance plans and reports and evaluation reports. We selected cases to represent a variety of evaluation approaches and methods. Four of the cases consisted of individual programs; one represented an office assisting several programs. We describe all five cases in the next section. To identify the analytic challenges that the agencies faced, we reviewed agency and program materials. We confirmed our understanding with agency officials and obtained additional information on the circumstances that led them to conduct their evaluations. Our findings are limited to the examples reviewed and thus do not necessarily reflect the full scope of these programs’ or agencies’ evaluation activities. We conducted our work between October 2001 and July 2002 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the heads of the agencies responsible for the five cases. The U.S. Department of Agriculture (USDA), the Department of Health and Human Services (HHS), and EPA provided technical comments that we incorporated where appropriate throughout the report. We describe the goals, major activities, and evaluation approaches and methods for the five cases in this section. EPA’s Compliance Assistance Program disseminates industry-specific and statute-specific information to entities that request it to help them come into compliance with EPA’s regulations and thus improve environmental performance. Overseen and implemented by the Office of Enforcement and Compliance Assurance (OECA) and regional offices, compliance assistance consists of telephone help lines, self-audit checklists, written guides, expert systems, workshops, and site visits to regulated industries. OECA provides regional offices with evaluation guidance that illustrates how postsession surveys and administrative data can be used to assess changes in knowledge or awareness of relevant regulations or statutes and adoption of practices. EPA encourages the evaluation of local projects to measure their contribution to achieving the agency’s environmental goals. In the U.S.
Department of Education, the Eisenhower Professional Development Program supports instructional activities to improve the quality of elementary and secondary school teaching and, ultimately, student learning and achievement. Part of school reform efforts, the program aims to provide primarily mathematics and science teachers with skills and knowledge to help students meet challenging educational standards. Program funds are used nationwide for flexible professional development activities to address local needs related to teaching practices, curriculum, and student learning styles. The national evaluation included a national survey of program coordinators and participating teachers to characterize the range of program strategies and the quality of program-assisted activities. The evaluation also collected detailed data at three points in time from all mathematics and science teachers in 10 sites to assess program effects on teachers’ knowledge and teaching practices. USDA’s Cooperative State Research, Education, and Extension Service (CSREES) conducts EFNEP in partnership with the Cooperative Extension System, a network of educators in land grant universities and county offices. EFNEP is an educational program on food safety, food budgeting, and nutrition designed to help low-income families acquire the knowledge, skills, and changed behavior necessary to develop nutritionally sound diets and improve the total family diet and nutritional well-being. County extension educators train and supervise paraprofessionals and volunteers, who teach a curriculum of about 10 sessions. EFNEP programs across the country measure participants’ nutrition-related behavior at program entry and exit on common instruments and report the data to USDA through a common reporting system. In addition, the Cooperative Extension System conducts a variety of other educational programs to improve agriculture and communities and strengthen families. State cooperative extension staff developed and provided evaluation guidance, supported in part by CSREES, to encourage local cooperative extension projects to assess, monitor, and report on performance. Evaluation guidance, including examples of surveys, was provided in seminars and on Web sites to help extension educators evaluate their workshops and their brochures across the full range of topics, such as crop management and food safety. In HHS, the Centers for Disease Control and Prevention (CDC) aims to reduce youths’ tobacco use by funding state control programs and encouraging states to use multiple program interventions that work together in a comprehensive approach. CDC supports various efforts, including media campaigns to change youths’ attitudes and social norms toward tobacco and to prevent the initiation of smoking. Florida, for example, developed its own counteradvertising, anti-tobacco mass media “truth” campaign. CDC supports the evaluation of local media programs through funding and technical assistance and with state-based and national youth tobacco surveys that provide tobacco use data from representative samples of students. CDC also provides general evaluation guidance for grantee programs to assess advertisement awareness, knowledge, attitudes, and behavior. The Office of National Drug Control Policy (ONDCP) in the Executive Office of the President oversees the National Youth Anti-Drug Media Campaign, which aims to educate and enable youths to reject illegal drugs.
This part of the nation’s drug control strategy uses a media campaign to counteract images that are perceived as glamorizing or condoning drug use and to encourage parents to discuss drug abuse with their children. The media campaign, among other activities, consists of broadcasting paid advertisements and public service announcements that support good parenting practices and discourage drug abuse. While ONDCP oversees the campaign in conjunction with media and drug abuse experts, advertising firms and nonprofit organizations develop the advertisements, which are broadcast to the target audience several times a week for several weeks or months across various media (TV, radio, newspapers, magazines, and billboards) at multiple sites nationwide. The ongoing national evaluation is being conducted by a contractor under the direction of the National Institute on Drug Abuse (NIDA). The evaluation surveys households in the target markets to assess advertisement awareness, knowledge, attitudes, and behavior, including drug use, in a representative sample of youths and their parents or other caretakers. The programs we reviewed faced challenges to evaluating effects at each step, from conveying information to achieving social and environmental goals. Specifically, (1) flexible programs were hard to summarize nationally because they varied their activities, messages, and goals to meet local needs; (2) mass media campaigns do not readily know whether their targeted audience heard the program’s message; (3) intended changes in knowledge, attitudes, and behavior did not necessarily take place until after audience contact with the program and were, therefore, difficult to observe; (4) self-reports of knowledge, attitudes, and behavior can be prone to bias; (5) long-term behavioral changes and environmental, health, or other social outcomes can take a long time to develop; and (6) many factors aside from the program are expected to contribute to the desired behavioral changes and long-term outcomes. Several programs we reviewed have broad, general goals and delegate to state or local agencies the authority to determine how to carry out the programs to meet specific local needs. For two reasons, the resulting variability in activities and goals across communities constrained the federal agencies’ ability to construct national evaluations of the programs. First, when states and localities set their own short-term and intermediate goals, common measures to aggregate across projects are often lacking, so it is difficult to assess national progress toward a common goal. Second, these programs also tended to have limited federal reporting requirements. Thus, little information was available on how well a national program was progressing toward national goals. The Eisenhower Professional Development Program, National Tobacco Control Program, EPA’s Compliance Assistance, and CSREES provide financial assistance to states or regional offices with limited federal direction on activities or goals. Many decisions about who receives services and what services they receive are made largely at the regional, county, or school district levels. For example, in the Eisenhower Professional Development Program, districts select professional development activities to support their school reform efforts, including alignment with state and local academic goals and standards. These standards vary, with some districts having more challenging standards than others.
In addition, training may take various forms; participation in a 2-hour workshop is not comparable to involvement in an intensive study group or year-long course. Such differences in short-term goals, duration, and intensity make counting participating teachers an inadequate way to portray the national program. Such flexibility enables responsiveness to local conditions but reduces the availability of common measures to depict a program in its entirety. These programs also had limited federal reporting requirements. Cooperative extension and regional EPA offices are asked to report monitoring data on the number of workshops held and clients served, for example, but only selected information on results. The local extension offices are asked to periodically report to state offices monitoring data and accomplishments that support state-defined goals. The state offices, in turn, report to the federal office summary data on their progress in addressing state goals and how they fit into USDA’s national goals. The federal program may hold the state and local offices accountable for meeting their state’s needs but may have little summary information on progress toward achieving USDA’s national goals. Media campaigns base the selection of message, format, and frequency of broadcast advertisements on audience analysis to obtain access to a desired population. However, a campaign has no direct way of learning whether it has actually reached its intended audience. The mass media campaigns ONDCP and CDC supported had no personal contact with their youth audiences while they received messages from local radio, TV, and billboard advertisers. ONDCP campaign funds were used to purchase media time and space for advertisements that were expected to deliver two to three anti-drug messages a week using various types of media to the average youth or parent. However, the campaign did not automatically know what portions of the audience heard or paid any attention to the advertisements or, especially, changed their attitudes as a result of the advertisements. The instructional programs had the opportunity to interact with their audience and assess their knowledge, skills, and attitudes through questionnaires or observation. However, while knowledge and attitudes may change during a seminar, most desired behavior change is expected to take place when the people attending the seminar return home or to their jobs. Few of these programs had extended contact with their participants to observe such effects directly. In the Eisenhower program, a teacher can learn and report an intention to adopt a new teaching practice, but this does not ensure that the teacher will actually use it in class. End-of-session surveys asking for self-reports of participants’ knowledge, attitudes, and intended behavior are fast and convenient ways to gain information but can produce data of poor quality. This can lead to a false assessment of a workshop’s impact. Respondents may not be willing to admit to others that they engage in socially sensitive or stigmatizing activities like smoking or drug use. They may not trust that their responses will be kept confidential. In addition, they may choose to give what they believe to be socially desirable or acceptable answers in order to appear to be doing the “right thing.” When surveys ask how participants will use their learning, participants may feel pressured to give a positive but not necessarily truthful report. 
Participants may also report that they “understand” the workshop information and its message but may not be qualified to judge their own level of knowledge. Assessing a program’s intermediate behavioral outcomes, such as smoking, or long-term outcomes, such as improved health status, is hindered by the time they take to develop. To evaluate efforts to prevent youths from starting to smoke, evaluators need to wait several years to observe evidence of the expected outcome. ONDCP expects its media campaign to take about 2 to 3 years to affect drug use. Many population-based health effects take years to become apparent, far beyond the reach of these programs to study. Tracking participants over several years can be difficult and costly. Even after making special efforts to locate people who have moved, each year a few more people from the original sample may not be reached or may refuse to cooperate. In the Eisenhower evaluation, 50 percent of the initial sample (60 percent of teachers remaining in the schools) responded to all three surveys. When a sample is tracked for several years, the cumulative loss of respondents may eventually leave so small a proportion of the original sample that it no longer accurately represents that sample. Moreover, the proportion affected tends to diminish at each step of the program logic model, which can shrink the expected effect on long-term outcomes until it is too small to detect. That is, if the program reached half the targeted audience, changed attitudes among half of those it reached, half of those people changed their behavior, and half of those experienced improved health outcomes, then only one-sixteenth of the initial target audience would be expected to experience the desired health outcome. Thus, programs may be unlikely to invest in tracking the very large samples required to detect an effect on their ultimate outcome. Attributing observed changes in participants to the effect of a program requires ruling out other plausible explanations. Those who volunteer to attend a workshop are likely to be more interested, knowledgeable, or willing to change their behavior than others who do not volunteer. Environmental factors such as trends in community attitudes toward smoking could explain changes in youths’ smoking rates. ONDCP planners have recognized that sensation seeking among youths is associated with willingness to take social or physical risks; high-sensation seekers are more likely to be early users of illegal drugs. Program participants’ maturing could also explain reductions in risky behavior over time. Other programs funded with private or other federal money may also strive for similar goals, making it difficult to separate out the information program’s unique contribution. The American Legacy Foundation, established by the 1998 tobacco settlement, conducted a national media campaign to discourage youths from smoking while Florida was carrying out its “truth” campaign. Similarly, the Eisenhower program is just one of many funding sources for teacher development, but it is the federal government’s largest investment solely in developing the knowledge and skills of classroom teachers. The National Science Foundation also funds professional development initiatives in mathematics and science. The evaluation found that local grantees combine Eisenhower grants with other funds to pay for conferences and workshops. The agencies we reviewed used a variety of strategies to address their evaluation challenges.
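Before turning to those strategies, the attenuation arithmetic described above can be made concrete with a short calculation. The 50 percent figures are the report's own illustration; the sketch simply multiplies them through the successive stages of the logic model.

```python
# The attenuation example from the text: if half the target audience is affected at
# each successive step of the logic model, only one-sixteenth of the audience is
# expected to show the desired long-term outcome. The 0.5 values are illustrative.
stages = [
    ("reached by the program",           0.5),
    ("attitudes changed",                0.5),
    ("behavior changed",                 0.5),
    ("improved health outcome realized", 0.5),
]

share = 1.0
for step, fraction in stages:
    share *= fraction
    print(f"after '{step}': {share:.4f} of the target audience")

# Final share is 0.5 ** 4 = 0.0625, i.e., one-sixteenth of the initial target audience,
# which is why detecting an effect on the ultimate outcome can require very large samples.
```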
Two flexible programs developed common, national measures, while two others promoted locally tailored evaluations. Most programs used exit or follow-up surveys to gather data on short-term and intermediate outcomes. Articulating a logic model for their programs helped some identify appropriate measures and strategies to address their challenges. Only EPA developed an approach for measuring its program’s long-term health and environmental outcomes or benefits. Most of the programs we reviewed assumed that program exposure or participation was responsible for observed changes and failed to address the role of external factors. However, the NIDA evaluation did use evaluation techniques to limit the influence of nonprogram factors. Table 1 displays the strategies the five cases used or recommended in guidance to address the challenges. Two of the four flexible programs developed ways to assess progress toward national program goals, while the others encouraged local programs to conduct their own evaluations, tailored to local program goals. EFNEP does not have a standard national curriculum, but local programs share common activities aimed at the same broad goals. A national committee of EFNEP educators developed a behavior checklist and food recall log to provide common measures of client knowledge and adoption of improved nutrition-related practices, which state and local offices may choose to adopt. The national program office provided state and local offices with software to record and analyze client data on these measures and produce tailored federal and state reports. In contrast, lacking standard reporting on program activities or client outcomes, the Eisenhower program had to conduct a special evaluation study to obtain such data. The evaluation contractor surveyed the state program coordinators to learn what types of training activities teachers were enrolled in and surveyed teachers to learn about their training experiences and practices. The evaluation contractor drew on characteristics identified with high-quality instruction in the research literature to define measures of quality for this study. In contrast, EPA and CDC developed guidance on how to plan and conduct program evaluations and encouraged state and local offices to assess their own individual efforts. To measure the effects of EPA’s enforcement and compliance assurance activities, the agency developed a performance profile of 11 sets of performance measures to assess the activities undertaken (including inspections and enforcement, as well as compliance assistance), changes in the behavior of regulated entities, and progress toward achieving environmental and health objectives. One set of measures targets the environmental or health effects of compliance assistance that must be further specified to apply to the type of assistance and relevant industry or sector. However, EPA notes that since the measured outcomes are very specific to the assistance tool or initiative, aggregating them nationally will be difficult. Instead, EPA encourages reporting the outcomes as a set of quantitative or qualitative accomplishments. In CDC’s National Tobacco Control Program, states may choose to conduct any of a variety of activities, such as health promotions, clinical management of nicotine addiction, advice and counseling, or enforcing regulations limiting the access minors have to tobacco. With such intentional flexibility and diversity, it is often difficult to characterize or summarize the effectiveness of the national program. 
Instead, CDC conducted national and multistate surveillance, providing both baseline and trend data on youths’ tobacco use, and encouraged states to evaluate their own programs, including surveying the target audience’s awareness and reactions. CDC’s “how to” guide assists program managers and staff in planning and implementing evaluation by providing general evaluation guidance that includes example outcomes—short term, intermediate, and long term—and data sources for various program activities or interventions. Both mass media campaigns surveyed their intended audience to learn how many heard or responded to the message and, thus, whether the first step of the program was successful. Such surveys, a common data source for media campaigns, involved carefully identifying the intended audience, selecting the survey sample, and developing the questionnaire to assess the intended effects. The National Youth Anti-Drug Media Campaign is designed to discourage youths from beginning to use drugs by posting advertisements that aim to change their attitudes about drugs and encourage parents to help prevent their children from using drugs. Thus, the NIDA evaluation developed a special survey, the National Survey of Parents and Youth (NSPY), with parallel forms to address questions about program exposure and effects on both groups. At the time of our interview, NSPY had fielded three waves of interviews to assess initial and cumulative responses to the campaign but planned additional follow-up. Cross-sectional samples of youths and parents (or caregivers) were drawn to be nationally representative and produce equal-sized samples within three age subgroups of particular interest (youths aged 9–11, 12–13, and 14–18). Separate questionnaires for youths and parents measured their exposure to both specific advertisements and, more generally, the campaign and other noncampaign anti-drug messages. In addition, they were asked about their beliefs, attitudes, and behavior regarding drug use and factors known to be related to drug use (for youths) or their interactions with their children (for parents). Florida’s tobacco control program integrated an advertisement campaign to counter the tobacco industry’s marketing with community involvement, education, and enforcement activities. The campaign disseminates its message about tobacco industry advertising through billboards and broadcasting and by distributing print media and consumer products (such as hats and T-shirts) at events for teenagers. Florida’s Anti-tobacco Media Evaluation surveys have been conducted every 6 months since the program’s inception in 1998 to track awareness of the campaign as well as youths’ anti-tobacco attitudes, beliefs, and smoking behavior. Most of the instructional programs we reviewed assessed participants’ short-term changes in knowledge, attitudes, or skills at the end of their session and relied on follow-up surveys to learn about intermediate effects that took place later. EFNEP and EPA’s Compliance Assistance, which had more extended contact with participants, were able to collect more direct information on intermediate behavioral effects. State cooperative extension and EPA evaluation guidance encouraged program staff to get immediate feedback on educational workshops, seminars, and hands-on demonstrations and their results. Reference materials suggested that postworkshop surveys ask what people think they gained or intend to do as a result of the program sessions. 
Questions may ask about benefits in general or perceived changes in specific knowledge, skills, attitudes, or intended actions. These surveys can show postprogram changes in knowledge and attitudes but not whether the participants actually changed their behavior or adopted the recommended practices. An extension evaluator said that this is the typical source of evaluation data for some types of extension programs. Cooperative extension evaluations have also used other types of on-site data collection, such as observation during workshops to document how well participants understood and could use what was taught. The traditional paper-and-pencil survey may be less effective with children or other audiences with limited literacy, so other sources of data are needed. Program or evaluation staff can observe (directly or from documents) the use of skills learned in a workshop—for example, a mother’s explaining to another, nonparticipating mother the need to wash hands before food preparation. Staff can ask participants to role-play a scenario—for example, an 8-year-old’s saying “no” to a cigarette offered by a friend. These observations could provide evidence of knowledge, understanding of the skills taught, and ability to act on the message. While these data may be considered more accurate indicators of knowledge and skill gains than self-report surveys, they are more resource-intensive to collect and analyze. Most of the programs we reviewed expected the desired behavior change—the intermediate outcome—to take place later, after participants returned home or to their jobs. EFNEP is unusual in using surveys to measure behavior change at the end of the program. This is possible because (1) the program collects detailed information on diet, budgeting, and food handling from participants at the start and end of the program and (2) its series of 10 to 12 lessons is long enough to expect to see such changes. Programs that did not expect behavior to change until later, or until participants were back at home or work, used follow-up surveys to identify actual change in behavior or the adoption of suggested practices. Cooperative extension and EPA’s Compliance Assistance evaluation guidance encouraged local evaluators to send a survey several weeks or months later, when participants are likely to have made behavior changes. Surveys may be conducted by mail, telephone, or online, depending on what appears to be the best way to reach potential respondents. An online survey of Web site visitors, for example, can potentially reach a larger number of respondents than may be known to the program or evaluator. EPA recommended that the form of evaluation follow-up match the form and intensity of the intervention, such as conducting a periodic survey of a sample of those who seek assistance from a telephone help desk rather than following up each contact with an extensive survey. EPA and ONDCP officials noted that survey planning must accommodate a review by the Office of Management and Budget to ascertain whether agency proposals for collecting information comply with the Paperwork Reduction Act. EPA guidance encouraged evaluators to obtain administrative data on desired behavior changes rather than depending on less-reliable self-report survey data. Evidence of compliance can come from observations during follow-up visits to facilities that had received on-site compliance assistance or from tracking data that the audience may be required to report for regulatory enforcement purposes.
For example, after a workshop for dry cleaners about the permits needed to meet air quality regulations, EPA could examine data on how many of the attendees applied for such permits within 6 months after the workshop. These administrative data could be combined with survey results to obtain responses from many respondents while still collecting detailed information from selected participants. Using a survey at the end of a program session to gain information from a large number of people is fast and convenient, but self-reports may provide positively biased responses about the session or about socially sensitive or controversial topics. To counteract these tendencies, the programs we reviewed used various techniques either to avoid threatening questions that might elicit a socially desirable but inaccurate response or to reassure interviewees of the confidentiality of their responses. In addition, the programs recommended caution in using self-reports of knowledge or behavior changes, encouraging evaluators—rather than participants—to assess change. Carefully wording questions can encourage participants to candidly record unpopular or negative views and can lessen the likelihood of their giving socially desirable responses. Cooperative extension evaluation guidance materials suggest that survey questions ask for both program strengths and weaknesses or for suggestions on how to improve the program. These materials also encourage avoidance of value-laden terms. Questions about potentially embarrassing situations might be preceded by a statement that acknowledges that this happens to everyone at some time. To reassure respondents, agencies also used the survey setting and administration to provide greater privacy in answering the questions. Evaluation guidance encourages collecting unsigned evaluation forms in a box at the end of the program, unless, of course, individual follow-up is desired. Because the National Youth Anti-Drug Media Campaign evaluation dealt with much more sensitive issues than most surveys do, it took several steps to reassure respondents and improve the quality of the data it collected. Agency officials noted that decisions about survey design and collecting quality data involve numerous issues such as consent, parental presence, feasibility, mode, and data editing procedures. In this case, they chose a panel study with linked data from youths and one parent or guardian collected over three administrations. In addition, they found that obtaining cooperation from a representative sample of schools with the frequency required by the evaluation was not feasible, so the evaluation team chose to survey households in person instead of interviewing youths at school or conducting a telephone survey. Hoping to improve the quality of sensitive responses, the surveyors promised confidentiality and provided respondents with a certificate of confidentiality from HHS. In addition, the sensitive questions were self-administered with a touch-screen laptop computer. All sensitive questions and answer categories appeared on the laptop screen and were spoken to the respondent by a recorded voice through earphones. Respondents chose responses by touching the laptop screen. This audio computer-assisted self-interview instrument was likely to obtain more honest answers about drug use, because respondents entered their reports without their answers being observed by the interviewer or their parents.
NIDA reported that a review of the research literature on surveys indicated that this method resulted in higher reported rates of substance abuse for youths, compared to paper-and-pencil administration. State cooperative extension and EPA evaluation guidance cautioned that self-reports may not reflect actual learning or change; they encouraged local projects to directly test and compare participant knowledge before and after an activity rather than asking respondents to report their own changed behavior. Both the EFNEP and Eisenhower evaluators attempted to reduce social desirability bias in self-reports of change by asking for concrete, detailed descriptions of what the respondents did before and after the program. By asking for a detailed log of what participants ate the day before, EFNEP sought to obtain relatively objective information to compare with nutrition guidelines. By repeating this exercise at the beginning and end of the program, EFNEP obtained more credible evidence than by asking participants whether they had adopted desired practices, such as eating less fat and more fruit and vegetables. The Eisenhower evaluation also relied on asking about very specific behaviors to minimize subjectivity and potential bias. First, evaluators analyzed teachers’ detailed descriptions of their professional development activities along characteristics identified as important to quality in prior research—such as length and level of involvement. Thus, they avoided asking teachers to judge the quality of their professional development activities. Second, teachers were surveyed at three points in time to obtain detailed information on their instructional practices during three successive school years. Teachers were asked to complete extensive tables on the content and pedagogy used in their courses; then the evaluators analyzed whether these represented high standards and effective instructional approaches as identified in the research literature. The evaluators then compared teacher-reported instructional practices before and after their professional development training to assess change on key dimensions of quality. Some cooperative extension guidance noted that pretest-posttest comparison of self-report results may not always provide an accurate assessment of program effects, because participants may have limited knowledge at the beginning of the program that prevents them from accurately assessing baseline behaviors. For example, before instruction on the sources of certain vitamins, participants may inaccurately assess the adequacy of their own consumption levels. The “post-then-pre” design can address this problem by asking participants, at the end of the program when they know more about their behavior, to report on that behavior both as it is then and as it was before the program. At that point, participants may also be more willing to admit to certain inappropriate behaviors. Assessing long-term social or health outcomes that were expected to take more than 2 to 3 years to develop was beyond the scope of most of these programs. Only EPA developed an approach for measuring long-term outcomes, such as the environmental effects of desired behavior change, in cases where they can be seen relatively quickly. In most instances, programs measured only short-term and intermediate outcomes, which they claimed would contribute to achieving these ultimate benefits. Several programs used logic models to demonstrate their case; some drew on associations established in previous research.
The Eisenhower and NIDA evaluations made a special effort to track participants long enough to observe desired intermediate outcomes. EFNEP routinely measures intermediate behavioral outcomes of improved nutritional intake but does not regularly assess long-term outcomes of nutritional or health status, in part because they can take many years to develop. Instead, the program relies on the associations established in medical research between diet and heart disease and certain cancers, for example, to explain how it expects to contribute to achieving disease-reduction goals. Specifically, Virginia Polytechnic Institute and State University (Virginia Tech) and Virginia cooperative extension staff developed a model to conduct a cost-benefit analysis of the health-promoting benefits of the state’s EFNEP program. The study used equations estimating the health benefits of the program’s advocated nutritional changes for each of 10 nutrition-related diseases (such as colorectal cancer) from medical consensus reports. The study then used program data on the number of participants who adopted the whole set of targeted behaviors to calculate the expected level of benefits, assuming they maintained the behaviors for 5 years. EPA provided regional staff with guidance that allows them to estimate environmental benefits from pollution reduction in specific cases of improved compliance with EPA’s regulations. To capture and document the environmental results and benefits of concluded enforcement cases, EPA developed a form for regional offices to record the actions taken and pollutant reductions achieved. The guidance provides steps, formulas, and look-up tables for calculating pollutant reduction or elimination for specific industries and types of water, air, or solid waste regulations. EPA regional staff are to measure average concentrations of pollutants before a specific site becomes compliant and to calculate the estimated total pollutant reduction in the first year of postaction compliance. Where specific pollution-reduction measures can be aggregated across sites, EPA can measure effects nationally and show the contribution to agencywide pollution-reduction goals. In part because these effects occur in the short term, EPA was unique among our cases in having developed an approach for measuring the effects of behavior change. Logic models helped cooperative extension programs and the evaluation of ONDCP’s media campaign identify their potential long-term effects and the route through which they would be achieved. The University of Wisconsin Cooperative Extension guidance encourages the use of logic models to link investments to results. These models are intended to help projects clarify linkages among program components; focus on short-term, intermediate, and long-term outcomes; and plan appropriate data collection and analysis. The guidance suggests measuring outcomes over which the program has a fair amount of control and considering, for any important long-term outcome, whether it will be attained if the other outcomes are achieved. Figure 2 depicts a generic logic model for an extension project, showing how it can be linked to long-term social or environmental goals.
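The benefit estimates described above, the Virginia EFNEP cost-benefit model and EPA’s pollutant-reduction calculations, share a simple structure: counts of people or facilities that adopt the targeted behavior (an intermediate outcome) are multiplied by per-unit benefit coefficients drawn from prior research or guidance. The sketch below illustrates that structure for a nutrition program; every number in it is a hypothetical placeholder, not a value from the Virginia Tech study or from EPA guidance.

```python
# A minimal sketch of the benefit roll-up structure described above: the number of
# participants who adopted the targeted behaviors, multiplied by per-disease benefit
# estimates and by the number of years the behaviors are assumed to be maintained.
# All values below are hypothetical placeholders for illustration only.
adopters = 1_200                 # participants who adopted the full set of behaviors (assumed)
years_maintained = 5             # maintenance assumption, as in the study described above

# Hypothetical annual avoided cost per adopter, by nutrition-related disease.
annual_benefit_per_adopter = {
    "colorectal cancer": 40.0,
    "heart disease":     90.0,
    "diabetes":          55.0,
    # the actual study covered 10 nutrition-related diseases
}

total_benefit = sum(annual_benefit_per_adopter.values()) * adopters * years_maintained
program_cost = 350_000.0         # hypothetical delivery cost, used for a benefit-cost ratio

print(f"estimated benefit over {years_maintained} years: ${total_benefit:,.0f}")
print(f"benefit-cost ratio: {total_benefit / program_cost:.2f}")
```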
The evaluation of the National Youth Anti-Drug Media Campaign followed closely the logic of how the program was expected to achieve its desired outcomes, and its logic models show how the campaign contributes to ONDCP’s drug-use reduction goals. For example, the campaign had specific hypotheses about the multiple steps through which exposure to the media campaign message would influence attitudes and beliefs, which would then influence behavior. Thus, evaluation surveys tapped various elements of youths’ attitudes and beliefs about drug use and social norms, as well as behaviors that are hypothesized to be influenced by—or to mediate the influence of—the campaign’s message. In addition, NIDA plans to follow those who had been exposed to the campaign for 2 to 3 years to learn how the campaign affected their later behavior. Figure 3 shows the multiple steps in the media campaign’s expected influence and how personal factors affect the process. Following program participants for years to learn about the effects on long-term outcomes for specific individuals exceeded the scope of most of these programs; only the formal evaluation studies of the Eisenhower and ONDCP programs did this. It can be quite costly to repeatedly survey a group of people or track individuals’ locations over time, and doing so may require several attempts in order to obtain an interview or completed survey. The Eisenhower evaluation employed two techniques that helped reduce survey costs. First, the evaluation increased the time period covered by the surveys by surveying teachers twice in one year: first about their teaching during the previous school year and then about activities in the current school year. By surveying teachers in the following spring about that school year, the evaluators were able to learn about three school years in the space of 1-1/2 actual years. Second, the case study design helped reduce survey costs by limiting the number of locations the evaluation team had to revisit. Concentrating its tracking efforts in 10 sites also allowed the team to increase the sample of teachers and, thus, be more likely to detect small effects on teaching behavior. Most of the evaluations we reviewed assumed that program exposure or participation led to the observed behavioral changes and did not attempt to control for the influence of external factors. However, in order to make credible claims that these programs were responsible for a change in behavior, the evaluation design had to go beyond associating program exposure with outcomes to rule out the influence of other explanations. NIDA’s evaluation used statistical controls and other techniques to limit the influence of other factors on attitudes and behaviors, while Eisenhower, CDC, and EPA encouraged assessment of the combined effect of related activities aimed at achieving the same goals. EFNEP’s evaluation approach paired program exposure with before-and-after program measures of outcomes to show a change that was presumed to stem from the program. Where the recommended behavior is very specific and exclusive to a program, it can be argued that the program was probably responsible for its adoption. An EFNEP program official explained that because program staff work closely with participants to address factors that could impede progress, they are comfortable using the data to assess their effectiveness. Many factors outside ONDCP’s media campaign were expected to influence youths’ drug use, such as other anti-drug programs, youths’ willingness to take risks, parental attitudes and behavior, peer attitudes and behavior, and the availability of and access to drugs. NIDA’s evaluation used several approaches to limit the effects of other factors on the behavioral outcomes it was reporting.
First, to distinguish this campaign from other anti-drug messages in the environment, the campaign used a distinctive message to create a “brand” that would provide a recognizable element across advertisements in the campaign and improve recall of the campaign. The evaluation’s survey asked questions about recognition of this brand, attitudes, and drug use so the analysis could correlate attitude and behavior changes with exposure to this particular campaign. Second, NIDA’s evaluation used statistical methods to help limit the influence of other factors on the results. Because the campaign ran nationally, the evaluation lacked both an unexposed control group and baseline data on the audience’s attitudes before the campaign began with which to compare the survey sample’s reaction. Thus, the evaluation chose to compare responses to variation in exposure to the campaign—comparing those with high exposure to those with low exposure—to assess its effects. This is called a dose-response design, a term drawn from studies of how the risk of disease increases with increasing doses or exposure. This approach presumes that the advertisements were effective if those who saw more of them were more likely to adopt the promoted attitudes or behaviors. However, because audience members rather than the evaluator determined how many advertisements they saw, exposure was not randomly assigned, and other factors related to drug use may have influenced both audience viewing habits and drug-related attitudes and behaviors. To limit the influence of preexisting differences among the exposure groups on the results, the NIDA evaluation used a statistical method called propensity scoring. This method controls for any correlation between program exposure and risk factors for drug use, such as gender, ethnicity, strength of religious feelings, and parental substance abuse, as well as school attendance and participation in sensation-seeking activities. This statistical technique requires detailed data on large numbers of participants and sophisticated analysis resources. Some information campaigns are intertwined or closely associated with another program or activity aimed at the same goals. Eisenhower and other funding sources support professional development activities that vary in quality, yet the evaluation found no significant difference in quality by funding source in its sample. The evaluation therefore focused on assessing the effect of high-intensity activities—regardless of funding source—on teaching practice. EPA’s Compliance Assistance program, for example, helps regulated entities comply with regulations while the agency also carries out its regulatory enforcement responsibilities—a factor not lost on the entities that are regulated. EPA’s dual role raises the question of whether any observed improvements in compliance result from assistance efforts or from the implied threat of inspections and sanctions. EPA measures the success of its compliance assistance efforts together with those of incentives that encourage voluntary correction of violations to promote compliance and reductions in pollution.
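A minimal sketch of the kind of adjustment described above for the dose-response comparison appears below, using inverse-probability weighting, which is one common way to apply propensity scores; the NIDA evaluation’s exact procedure may have differed. The data, covariate names, and effect sizes are simulated stand-ins for the risk factors named in the report, not campaign data.

```python
# Illustrative sketch of a propensity-score adjustment for a dose-response comparison,
# using simulated data. Exposure is not randomly assigned here: simulated "risk factors"
# (stand-ins for covariates such as sensation seeking or parental factors) influence
# both exposure and the outcome, so a naive high-vs-low comparison is biased.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

sensation_seeking = rng.normal(size=n)       # hypothetical covariate
parental_risk = rng.normal(size=n)           # hypothetical covariate

# Youths with higher simulated risk are more likely to fall in the high-exposure group.
p_high = 1.0 / (1.0 + np.exp(-(0.8 * sensation_seeking + 0.3 * parental_risk)))
high_exposure = rng.binomial(1, p_high)

# Simulated outcome (say, an anti-drug attitude score) depends only on the risk
# factors, so the true exposure effect in this toy example is zero.
outcome = -0.5 * sensation_seeking - 0.2 * parental_risk + rng.normal(size=n)

X = np.column_stack([sensation_seeking, parental_risk])

# Step 1: estimate each youth's propensity to be in the high-exposure group.
propensity = LogisticRegression().fit(X, high_exposure).predict_proba(X)[:, 1]

# Step 2: weight each youth by the inverse probability of the exposure actually
# received, which balances the measured risk factors across the exposure groups.
weights = np.where(high_exposure == 1, 1.0 / propensity, 1.0 / (1.0 - propensity))

naive = outcome[high_exposure == 1].mean() - outcome[high_exposure == 0].mean()
adjusted = (np.average(outcome[high_exposure == 1], weights=weights[high_exposure == 1])
            - np.average(outcome[high_exposure == 0], weights=weights[high_exposure == 0]))

# The true effect is zero in this simulation, so the weighted difference should sit
# much closer to zero than the naive, confounded difference.
print(f"naive high-vs-low difference:    {naive:+.3f}")
print(f"weighted high-vs-low difference: {adjusted:+.3f}")
```

The adjusted difference is not a causal estimate by itself; as the report notes, the approach assumes that the measured risk factors capture the relevant preexisting differences between the exposure groups.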
An alternative evaluation approach acknowledged the importance of combining information dissemination with other activities in the total program design and assessed the outcomes of the combined activities. This approach, exemplified by CDC and the public health community, encourages programs to adopt a comprehensive set of reinforcing media, regulatory, and other community-based activities to produce a more powerful means of achieving difficult behavior change. The proposed evaluations seek not to limit the influence of these other factors but to assess their combined effects on reducing tobacco use. CDC’s National Tobacco Control Program uses such a comprehensive approach to obtain synergistic effects, making moot the issue of the unique contribution of any one program activity. Figure 4 depicts the model CDC provided to help articulate the combined, reinforcing effects of media and other community-based efforts on reducing tobacco use. Agencies initiated most of these evaluation efforts in response to congressional interest and questions about program results. Then, collaboration with program partners and access to research results and evaluation expertise helped them carry out and increase the contributions of these evaluations. Congressional concern about program effectiveness resulted in two mandated evaluations and spurred agency performance assessment efforts in two others. With the Improving America’s Schools Act of 1994, the Congress encouraged school-based education reform to help students meet challenging academic standards. Concerned about the quality of the professional development needed to update teaching practices to carry out those reforms, the Congress instituted a number of far-reaching changes and mandated an evaluation for the Eisenhower Professional Development Program. The formal 3-year evaluation sought to determine whether and how Eisenhower-supported activities, which constitute the largest federal effort dedicated to supporting educator professional development, contribute to national efforts to improve schools and help achieve agency goals. The Congress has also been actively involved in the development and oversight of the National Youth Anti-Drug Media Campaign. It specified the program effort in response to nationwide rises in rates of youths’ drug use and mandated an evaluation of that effort. ONDCP was asked to develop a detailed implementation plan and a system for measuring outcomes and to report to the Congress within 2 years on the effectiveness of the campaign, based on those measurable outcomes. ONDCP contracted for an evaluation through NIDA to ensure that the evaluation used the best research design and was seen as independent of the sponsoring agency. ONDCP requested reports every 6 months on program effectiveness and impact. However, officials noted that this reporting schedule created unrealistically high congressional expectations for seeing results when the program does not expect to see much change in 6 months. Congressional interest in sharpening the focus of cooperative extension activities led to the establishment of national goals intended to focus the work and encourage the development of performance goals. The Agricultural Research, Extension, and Education Reform Act of 1998 gave states authority to set priorities and required them to solicit input from various stakeholders. The act also encouraged USDA to address high-priority concerns with national or multistate significance. Under the act, states are required to develop plans of work that define outcome goals and describe how they will meet them. Annual performance reports are to describe whether states met their goals and to report their most significant accomplishments.
CSREES draws on these reports of state outcomes to describe how state efforts help meet USDA's goals. State extension officials noted that the Government Performance and Results Act of 1993, as well as increased accountability pressures from their stakeholders, created a demand for evaluations. EFNEP's performance reporting system was also initiated in response to congressional interest and is used to satisfy that act's requirements. USDA staff noted that the House Committee on Agriculture asked for data in 1989 to demonstrate the program's impact and justify its funding level. On the basis of questions from congressional staff, program officials and extension partners formed a national committee that examined the kinds of information that had already been gathered to respond to stakeholders and developed standard measures of desired client improvements. State reports are tailored to meet the states' own information needs, while CSREES uses the core set of common behavioral items to provide accomplishments for USDA's annual performance report. In several evaluations we reviewed, collaboration was reported as important for meeting the information needs of diverse audiences and expanding the usefulness of the evaluation. ONDCP's National Youth Anti-Drug Media Campaign was implemented in collaboration with the Partnership for a Drug-Free America and a wide array of nonprofit, public, and private organizations to reinforce its message across multiple outlets. The National Institute on Drug Abuse, with input from ONDCP, designed the evaluation of the campaign and drew on an expert panel of advisers in drug abuse prevention and media studies. The evaluation was carried out by a partnership between Westat, which brought survey and program evaluation expertise, and the University of Pennsylvania's Annenberg School for Communication, which brought expertise in media studies. Agency officials noted that through frequent communication with those developing the advertisements and purchasing media time, evaluators could keep the surveys up to date with the most recent airings and provide useful feedback on audience reaction. The Evaluation/Reporting System represented a collaborative effort among the federal and state programs to demonstrate EFNEP's benefits. USDA staff noted that in the early 1990s, in response to congressional inquiries about EFNEP's effectiveness, a national committee was formed to develop a national reporting system for data on program results. The committee held an expert panel with various USDA nutrition policy experts, arranged for focus groups, and involved state and county EFNEP representatives and others from across the country. The committee started by identifying the kinds of information the states had already gathered to respond to state and local stakeholders' needs and then identified other questions to be answered. The committee developed and tested the behavior checklist and dietary analysis methodology, drawing on previous nutrition measurement efforts. The partnership among state programs continues through an annual CSREES Call for Questions that solicits suggestions from states that other states may choose to adopt. USDA staff noted that local managers helped design measures that met their needs, ensuring full cooperation in data collection and the use of evaluation results. State extension evaluator staff emphasized that collaborations and partnerships were an important part of their other extension programs and evaluations.
At one level, extension staff partner with state and local stakeholders, such as the state natural resource department, courts, social service agencies, schools, and agricultural producers, as programs are developed and implemented. This influences whether and how the programs are evaluated, including what questions are asked and what data are collected, because those who helped define the program and its goals have a stake in how it is evaluated. State extension evaluator staff also counted their relationships with their peers in other states as key partnerships that provided peer support and technical assistance. In addition to informal contacts, some staff were involved in formal multistate initiatives, and many participate in a shared interest group of the American Evaluation Association. At the time of our review, the association's Extension Education Evaluation Topical Interest Group had more than 160 members, a Web site, and a listserv and held regular meetings (see http://www.danr.ucop.edu/eee-aea/). Using research helped agencies develop measures of program goals and establish links between program activities and short-term goals and between short-term and long-term goals. The Eisenhower evaluation team synthesized existing research on teacher instruction to develop innovative measures of the quality of teachers' professional development activities, as well as the characteristics of teaching strategies designed to encourage students' higher-order thinking. EFNEP drew on nutrition research to develop standard measures for routine assessment and performance reporting. Virginia Tech's cooperative extension program also drew on research on health care expenses and known risk factors for nutrition-related diseases to estimate the benefits of nutrition education in reducing the incidence and treatment costs of those diseases. Both the design of ONDCP's National Anti-Drug Media Campaign and its evaluation drew on lessons learned in earlier research. The message and structure of the media campaign were based on a review of research evidence on the factors affecting youths' drug use, effective drug-use prevention practices, and effective public health media campaigns. Agency officials indicated that the evaluation was strongly influenced by the "theory of reasoned action" as a perspective for explaining behavioral change. This perspective assumes that intention is an important factor in determining behavior and that intentions are influenced by attitudes and beliefs. Exposure to the anti-drug messages is thus expected to change attitudes, intentions, and ultimately behavior. Similarly, CDC officials indicated that they learned a great deal about conducting and evaluating health promotion programs from their experience with HIV-AIDS prevention demonstration programs conducted in the late 1980s and early 1990s. In particular, earlier research on health promotions shaped their belief in the increased effectiveness of programs that combine media campaigns with other activities having the same goal. Several programs provided evaluation expertise to guide and encourage program staff to evaluate their own programs. The guidance encouraged them to develop program logic models to articulate program strategy and evaluation questions. Cooperative extension has evaluation specialists in many of the state land grant universities who offer many useful evaluation tools and guidance on their Web sites. (See the Bibliography for a list of resources.)
CDC provided the rationale for how the National Tobacco Control Program addressed the policy problem (youths' smoking) and articulated the conceptual framework for how the program activities were expected to motivate people to change their behavior. CDC supports local project evaluation with financial and technical assistance and a framework for program evaluation that provides general guidance on engaging stakeholders, evaluation design, data collection and analysis, and ways to ensure that evaluation findings are used. CDC also encourages grantees to allocate about 10 percent of their program budget for program monitoring (surveillance) and evaluation. (See www.cdc.gov/Tobacco/evaluation_manual/contents.htm.) CDC, EPA, and cooperative extension evaluation guidance all encouraged project managers to create program logic models to help articulate their program strategy and expected outcomes. Logic models characterize how a program expects to achieve its goals; they link program resources and activities to program outcomes and identify short-term and long-term outcome goals. CDC's recent evaluation guidance suggests that grantees use logic models to link inputs and activities to program outcomes and also to demonstrate how a program connects to the national and state programs. The University of Wisconsin Cooperative Extension evaluation guidance noted that local projects would find developing a program logic model useful in program planning, identifying measures, and explaining the program to others. The agencies whose evaluations we studied employed a variety of strategies for evaluating their programs' effects on short-term and intermediate goals but still had difficulty assessing their contributions to long-term agency goals for social and environmental benefits. As other agencies are pressed to demonstrate the effectiveness of their information campaigns, the examples in this report might help them identify how to successfully evaluate their programs' contributions. Several agencies drew on existing research to identify common measures; others may find that analysis of the relevant research literature can aid in designing a program evaluation. Previous research may reveal useful existing measures or clarify the expected influence of the program, as well as external factors, on its goals. Agencies might also benefit from following the evaluation guidance that has recommended developing logic models that specify the mechanisms by which programs are expected to achieve results, as well as the specific short-term, intermediate, and long-term outcomes they are expected to achieve. A logic model can help identify pertinent variables and how, when, and in whom they should be measured, as well as other factors that might affect program results. This, in turn, can help set realistic expectations about the scope of a program's likely effects. Specifying a logical trail from program activities to distant outcomes pushes program and evaluation planners to articulate the specific behavior changes and long-term outcomes they expect, thereby indicating the more narrowly defined long-term outcomes that could reasonably be attributed to the program. Where program flexibility allows for local variation but risks losing accountability, developing a logic model can help program stakeholders talk about how diverse activities contribute to common goals and how this might be measured; a simple illustration follows.
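The agencies' guidance does not prescribe any single format for logic models, so the sketch below is illustrative only: it writes down the elements of a logic model for a hypothetical nutrition education project as a plain data structure, with every entry invented for the example.

```python
# Illustrative only: one way to record a logic model's elements as a plain data
# structure, using a hypothetical nutrition education project. The element
# categories are the standard ones described above; the contents are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class LogicModel:
    inputs: List[str]                 # resources the program uses
    activities: List[str]             # what the program does with them
    outputs: List[str]                # direct products of the activities
    short_term_outcomes: List[str]    # e.g., changes in knowledge or attitudes
    intermediate_outcomes: List[str]  # e.g., changes in behavior
    long_term_outcomes: List[str]     # e.g., social, health, or cost benefits

nutrition_project = LogicModel(
    inputs=["federal and state funds", "paraprofessional educators", "curriculum"],
    activities=["small-group lessons", "one-on-one counseling"],
    outputs=["sessions delivered", "participants reached"],
    short_term_outcomes=["improved knowledge of food budgeting and safety"],
    intermediate_outcomes=["improved dietary practices reported at follow-up"],
    long_term_outcomes=["reduced incidence and treatment costs of nutrition-related disease"],
)

# Each outcome level implies what to measure and when: short-term outcomes soon
# after participation, intermediate outcomes at follow-up, and long-term
# outcomes only through longer studies or research-based estimates.
for level in ("short_term_outcomes", "intermediate_outcomes", "long_term_outcomes"):
    print(level, "->", getattr(nutrition_project, level))
```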
Such stakeholder discussion can sharpen a program's focus and can lead to the development of commonly accepted standards and measures for use across sites. In comprehensive initiatives that combine various approaches to achieving a goal, developing a logic model can help articulate how those approaches are intended to assist and supplement one another and can help specify how the information dissemination portion of the program is expected to contribute to their common goal. An evaluation could then assess the effects of the integrated set of efforts on the desired long-term outcomes, and it could also describe the short-term and intermediate contributions of the program's components. The agencies did not provide written comments, although EPA, HHS, and USDA provided technical comments that we incorporated where appropriate throughout the report. EPA noted that the Paperwork Reduction Act requirements pose an additional challenge in effectively and efficiently measuring compliance assistance outcomes. We included this point in the discussion of follow-up surveys. We are sending copies of this report to other relevant congressional committees and others who are interested, and we will make copies available on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have questions concerning this report, please call me or Stephanie Shipman at (202) 512-2700. Elaine Vaurio also made key contributions to this report.
Centers for Disease Control and Prevention, Office of Smoking and Health. Best Practices for Comprehensive Tobacco Control Programs. Atlanta, Ga.: August 1999. http://www.cdc.gov/tobacco/bestprac.htm (September 2002).
Garet, Michael S., and others. Designing Effective Professional Development: Lessons from the Eisenhower Program. Document 99-3. Washington, D.C.: U.S. Department of Education, Planning and Evaluation Service, December 1999. http://www.ed.gov/inits/teachers/eisenhower/ (September 2002).
Hornik, Robert, and others. Evaluation of the National Youth Anti-Drug Media Campaign: Historical Trends in Drug Use and Design of the Phase III Evaluation. Prepared for the National Institute on Drug Abuse. Rockville, Md.: Westat, July 2000. http://www.whitehousedrugpolicy.gov/publications (September 2002).
Hornik, Robert, and others. Evaluation of the National Youth Anti-Drug Media Campaign: Third Semi-Annual Report of Findings. Prepared for the National Institute on Drug Abuse. Rockville, Md.: Westat, October 2001. http://www.mediacampaign.org/publications/index.html (September 2002).
Kiernan, Nancy Ellen. "Reduce Bias with Retrospective Questions." Penn State University Cooperative Extension Tipsheet 30, University Park, Pennsylvania, 2001. http://www.extension.psu.edu/evaluation/ (September 2002).
Kiernan, Nancy Ellen. "Using Observation to Evaluate Skills." Penn State University Cooperative Extension Tipsheet 61, University Park, Pennsylvania, 2001. http://www.extension.psu.edu/evaluation/ (September 2002).
MacDonald, Goldie, and others. Introduction to Program Evaluation for Comprehensive Tobacco Control Programs. Atlanta, Ga.: Centers for Disease Control and Prevention, November 2001. http://www.cdc.gov/tobacco/evaluation_manual/contents.htm (September 2002).
Office of National Drug Control Policy. The National Youth Anti-Drug Media Campaign: Communications Strategy Statement. Washington, D.C.: Executive Office of the President, n.d. http://www.mediacampaign.org/publications/index.html (September 2002).
Ohio State University Cooperative Extension. Program Development and Evaluation. http://www.ag.ohio-state.edu/~pde/ (September 2002).
Penn State University, College of Agricultural Sciences, Cooperative Extension and Outreach. Program Evaluation. http://www.extension.psu.edu/evaluation/ (September 2002).
Porter, Andrew C., and others. Does Professional Development Change Teaching Practice? Results from a Three-Year Study. Document 2000-04. Washington, D.C.: U.S. Department of Education, Office of the Under Secretary, October 2000. http://www.ed.gov/offices/OUS/PES/school_improvement.html#subepdp2 (September 2002).
Rockwell, S. Kay, and Harriet Kohn. "Post-Then-Pre Evaluation." Journal of Extension 27:2 (summer 1989). http://www.joe.org/joe/1989summer/a5.html (September 2002).
Taylor-Powell, Ellen. "The Logic Model: A Program Performance Framework." University of Wisconsin Cooperative Extension, Madison, Wisconsin, 62 pages, n.d. http://www.uwex.edu/ces/pdande (September 2002).
Taylor-Powell, Ellen, and Marcus Renner. "Collecting Evaluation Data: End-of-Session Questionnaires." University of Wisconsin Cooperative Extension document G3658-11, Madison, Wisconsin, September 2000. http://www.uwex.edu/ces/pdande (September 2002).
Taylor-Powell, Ellen, and Sara Steele. "Collecting Evaluation Data: Direct Observation." University of Wisconsin Cooperative Extension document G3658-5, Madison, Wisconsin, 1996. http://www.uwex.edu/ces/pdande/evaluation/evaldocs.html (September 2002).
U.S. Department of Agriculture, Expanded Food and Nutrition Education Program. EFNEP 2001 Program Impacts Booklet. Washington, D.C.: June 2002. http://www.reeusda.gov/f4hn/efnep/factsheet.htm (September 2002).
U.S. Department of Agriculture, Expanded Food and Nutrition Education Program. ERS4 (Evaluation/Reporting System). Washington, D.C.: April 9, 2001. http://www.reeusda.gov/ers4/home.htm (September 2002).
U.S. Department of Agriculture, Expanded Food and Nutrition Education Program. Virginia EFNEP Cost Benefit Analysis. Fact Sheet. Washington, D.C.: n.d. http://www.reeusda.gov/f4hn/efnep.htm (September 2002).
U.S. Environmental Protection Agency, Office of Enforcement and Compliance Assurance. Guide for Measuring Compliance Assistance Outcomes. EPA300-B-02-011. Washington, D.C.: June 2002. http://www.epa.gov/compliance/planning/results/tools.html (September 2002).
U.S. Department of Health and Human Services, Centers for Disease Control and Prevention. "Framework for Program Evaluation in Public Health." Morbidity and Mortality Weekly Report 48:RR-11 (1999).
University of Wisconsin Cooperative Extension. Program Development and Evaluation, Evaluation. http://www.uwex.edu/ces/pdande/evaluation/index.htm (September 2002).
American Evaluation Association, Extension Education Evaluation Topical Interest Group. http://www.danr.ucop.edu/eee-aea/ (September 2002).
CYFERnet. Children, Youth, and Families Education and Research Network. Evaluation Resources. http://twosocks.ces.ncsu.edu/cyfdb/browse_2.php?search=Evaluation (September 2002).
Schwarz, Norbert, and Daphna Oyserman. "Asking Questions about Behavior: Cognition, Communication, and Questionnaire Construction." American Journal of Evaluation 22:2 (summer 2001): 127–60.
Southern Regional Program and Staff Development Committee. "Evaluation and Accountability Resources: A Collaboration Project of the Southern Region Program and Staff Development Committee." Kentucky Cooperative Extension Service. http://www.ca.uky.edu/agpsd/soregion.htm (September 2002).
Program Evaluation: Studies Helped Agencies Measure or Explain Program Performance. GAO/GGD-00-204. Washington, D.C.: September 29, 2000.
Anti-Drug Media Campaign: ONDCP Met Most Mandates, but Evaluations of Impact Are Inconclusive. GAO/GGD/HEHS-00-153. Washington, D.C.: July 31, 2000.
Managing for Results: Measuring Program Results That Are under Limited Federal Control. GAO/GGD-99-16. Washington, D.C.: December 11, 1998.
Grant Programs: Design Features Shape Flexibility, Accountability, and Performance Information. GAO/GGD-98-137. Washington, D.C.: June 22, 1998.
Program Evaluation: Agencies Challenged by New Demand for Information on Program Results. GAO/GGD-98-53. Washington, D.C.: April 24, 1998.
Managing for Results: Analytic Challenges in Measuring Performance. GAO/HEHS/GGD-97-138. Washington, D.C.: May 30, 1997.
Program Evaluation: Improving the Flow of Information to the Congress. GAO/PEMD-95-1. Washington, D.C.: January 30, 1995.
Designing Evaluations. GAO/PEMD-10.1.4. Washington, D.C.: May 1991.
Federal agencies are increasingly expected to focus on achieving results and to demonstrate, in annual performance reports and budget requests, how their activities will help achieve agency or governmentwide goals. Assessing a program's impact or benefit is often difficult, and the dissemination programs GAO reviewed faced a number of evaluation challenges, either individually or in common. The breadth and flexibility of some of the programs made it difficult to measure national progress toward common goals. The programs had limited opportunity to observe whether desired behavior changes occurred because change was expected only after people's contact with the program ended, when they returned home or to work. The five programs GAO reviewed addressed these challenges with a variety of strategies, assessing program effects primarily on short-term and intermediate outcomes. Two flexible programs developed common measures to conduct nationwide evaluations; two others encouraged communities to tailor local evaluations to their own goals. Congressional interest was key to initiating most of these evaluations; collaboration with program partners, previous research, and evaluation expertise helped carry them out. Congressional concern about program effectiveness spurred two formal evaluation mandates and other performance assessment activities. Collaborations helped ensure that an evaluation would meet the needs of diverse stakeholders.
Transportation systems can also be used as weapons themselves, as was done on September 11, 2001. As we indicated in our June 2003 report on transportation security challenges, transportation experts, state and local governments, and industry representatives generally believe that investing in transportation security R&D is the federal government's responsibility. After the September 11, 2001, terrorist attacks, Congress enacted legislation that resulted in changes in the federal organization and funding for transportation security R&D. In November 2001, the Aviation and Transportation Security Act created TSA within DOT and transferred the Federal Aviation Administration's (FAA) aviation security R&D program to TSA. The act also required TSA to meet a December 31, 2002, deadline for deploying explosives detection systems to screen all checked baggage. One year later, the Homeland Security Act created DHS and transferred TSA from DOT to DHS. This legislation also transferred to DHS several other agencies that conducted transportation security R&D, including the U.S. Customs Service (now part of U.S. Customs and Border Protection) and the U.S. Secret Service from the Department of the Treasury and the U.S. Coast Guard from DOT. In addition, the Homeland Security Act extended the deadline for deploying new checked baggage screening equipment for certain airports to December 31, 2003, and transferred certain chemical and biological research programs that have potential transportation security applications from the Department of Defense and DOE to DHS. Although TSA and DHS have their own research facilities, most of their transportation security R&D is conducted by contractors. Figure 1 identifies major events in the establishment of TSA and DHS. Under the Aviation and Transportation Security Act, TSA is required to secure all modes of transportation; coordinate transportation security countermeasures with other federal agencies; and accelerate the research, development, testing, and evaluation of explosives detection technology for checked baggage and of new technology to screen for threats in carry-on items and other items being loaded onto aircraft, including cargo, and on persons. TSA's Office of Security Technologies is responsible for the research, development, testing, and deployment of security technology countermeasures employed to protect the transportation system against criminal and terrorist threats. It organizes its R&D projects according to the different approaches through which threats can reach a target, such as on a person; in carry-on items, vehicles, checked baggage, or cargo; or through access points at airports or at marine ports. The Office of Security Technologies operates the Transportation Security Laboratory, located in Atlantic City, New Jersey, which conducts transportation security R&D and tests products submitted by potential vendors for compliance with TSA standards. Although FAA's aviation security R&D program was moved to TSA and TSA has since initiated R&D related to other modes of transportation, several DOT administrations conducted transportation security R&D before TSA was created and continue to do so. However, security is not the primary focus of DOT's R&D programs.
The Homeland Security Act brought 22 separate federal agencies under DHS's umbrella and provided a framework for organizing DHS into five directorates, giving the Science and Technology Directorate responsibility for DHS's research, development, testing, and evaluation activities and the Border and Transportation Security Directorate responsibility for security along the nation's borders and in all modes of transportation. The act also requires TSA to remain a distinct entity within the Border and Transportation Security Directorate until November 25, 2004. Consequently, TSA's R&D program office, the Office of Security Technologies, currently operates outside of DHS's Science and Technology Directorate. Under the Homeland Security Act, DHS's Information Analysis and Infrastructure Protection Directorate is required to prepare risk assessments of the nation's key resources and critical infrastructure, which includes transportation. In addition, the Homeland Security Act requires the Science and Technology Directorate to coordinate with the appropriate executive branch agencies in developing and carrying out the department's science and technology agenda to reduce duplication and identify unmet needs; to accelerate the prototyping and development of technologies to address homeland security vulnerabilities; and to coordinate and integrate all research, development, demonstration, testing, and evaluation activities of the department. Within the Science and Technology Directorate, the Homeland Security Advanced Research Projects Agency (HSARPA) is responsible for engaging industry, academia, government, and other sectors in R&D, rapid prototyping, and technology transfer. The Office of Systems Engineering and Development takes technologies developed by the Office of Research and Development or HSARPA and prepares deployment strategies to transfer technologies to federal, state, and/or local government users. As the primary federal agencies responsible for enhancing the security of all modes of transportation, TSA and DHS spent about $21 million and $26 million, respectively, on transportation security R&D projects in fiscal year 2003 and have budgeted about $159 million and $88 million, respectively, for fiscal year 2004. In addition, DOT spent about $8 million on transportation security R&D projects in fiscal year 2003 and has budgeted about $31 million for fiscal year 2004. NASA did not fund any transportation security R&D projects in fiscal year 2003 but has budgeted about $18 million for aviation security R&D projects during fiscal year 2004. TSA and DHS were not able to estimate deployment dates for the vast majority of projects that they funded in fiscal years 2003 and 2004. Although TSA and DHS have not decided what additional projects they will fund in fiscal year 2005 and beyond, the President's fiscal year 2005 budget requests $154 million for TSA's R&D program and about $1 billion for the Science and Technology Directorate, which includes some transportation security R&D. Overall, members of our panel of transportation security and technology experts had mixed views about the reasonableness of the distribution of transportation security R&D projects by mode and raised questions about the types of projects that were funded and not funded by TSA and DHS. TSA increased its funding for transportation security R&D from $21 million in fiscal year 2003 to $159 million in fiscal year 2004, as shown in table 2.
Although TSA is responsible for addressing the security needs of all modes of transportation, in fiscal year 2003, TSA spent about $17 million, or about 81 percent, of its R&D funding on projects related to aviation security. For fiscal year 2004, TSA has budgeted about $126 million for aviation security, or about 79 percent of its R&D budget. This increase reflects, in part, a $55 million appropriation for R&D related to air cargo screening. According to TSA, it has spent the majority of its R&D funding on aviation security because aviation was the greatest concern following the September 11, 2001, terrorist attacks and because Congress directed TSA to use R&D funding to enhance aviation security. In fiscal year 2004, TSA increased its budget for multimodal R&D projects from about $4 million in fiscal year 2003 to about $22 million. This increase is due, in part, to a $5.6 million increase for the Manhattan II project and about $6.4 million for development of a walk-through trace portal for detecting explosives on aviation, maritime, and rail passengers. In fiscal year 2004, TSA also increased its budget for rail security R&D projects from $169,000 in fiscal year 2003 to about $1.1 million. This increase reflects the $1.1 million that was budgeted for the Transit and Rail Inspection Pilot (TRIP). TSA also increased maritime security R&D funding from zero in fiscal year 2003 to about $9 million in fiscal year 2004; this increase is due, in part, to $3.6 million for a project to develop equipment to screen vehicles on ferries. Finally, TSA did not spend any money on highway, pipeline, or transit R&D projects. Several members of our panel of transportation security and technology experts commented that R&D for rail and transit security warrants additional funding. Congress is considering legislation to increase funding for these as well as other modes of transportation in fiscal year 2005. For example, the Rail Security Act, S. 2273, which has been passed by the Senate Committee on Commerce, Science, and Transportation, would authorize $50 million in each of fiscal years 2005 and 2006 for an R&D program for improving freight and intercity passenger rail security. Aviation Checked Baggage: Among its checked baggage projects, TSA is funding the development of a computed tomography explosives detection system that is smaller and lighter than systems currently deployed in airport lobbies. The new system is intended to replace the systems currently placed in airport lobbies, including both larger, heavier explosives detection systems and explosives trace detection equipment. The smaller size of the system creates opportunities for TSA to transfer screening operations to other locations, such as airport check-in counters. TSA expects to certify this equipment later this year. TSA is also working with a contractor to integrate technologies, such as quadrupole resonance, with its existing explosives detection systems to improve processing speed and detection capability and to reduce false alarm rates and human resource requirements. Aviation Checkpoint: To address the limitations of its current metal detectors for screening passengers and of X-ray machines for screening carry-on baggage, TSA obligated about $1 million in fiscal year 2003 and has budgeted $18 million for fiscal year 2004 for checkpoint screening R&D. For example, during the summer of 2004, TSA installed and began testing explosives trace detection portals at four airports and planned to test a portal at a fifth airport in the near future.
Passengers who enter a checkpoint lane with a trace portal machine will proceed through the metal detector while their carry-on baggage is being screened by X-ray. Each passenger will then be asked to step into the trace portal and to stand still for a few seconds while several quick puffs of air are released, as shown in figure 2. The portal will analyze the air for traces of explosives as the passenger walks through, and a computerized voice will tell the passenger when to exit the portal. To help focus its screening resources on the highest-risk passengers, in fiscal years 2003 and 2004, TSA worked to develop the Computer Assisted Passenger Prescreening System II (CAPPS II). CAPPS II is intended to identify terrorists and other high-risk individuals before they board commercial airplanes. Originally, TSA intended to conduct a risk assessment of each passenger using national security information, commercial databases, and information provided by the passenger during the reservation process, specifically the passenger's name, date of birth, home address, and home telephone number. In our February 2004 report on CAPPS II, we found that TSA was behind schedule in testing and developing initial increments of CAPPS II and had not yet completely addressed other issues, including concerns about privacy and the accuracy of the data used for CAPPS II. In August 2004, a DHS official said that DHS was revising the program with an emphasis on fully protecting passengers' privacy and civil liberties. Aviation Cargo: To enhance the security of the nation's air cargo system, TSA obligated about $700,000 in fiscal year 2003 for cargo security R&D and has budgeted about $53 million for fiscal year 2004. For example, as part of its Air Cargo Strategic Plan, TSA plans to develop a prescreening system to identify high-risk cargo and to work with the appropriate stakeholders to ensure that all such cargo is inspected. To complete its inspection of high-risk cargo, TSA has a number of R&D projects, including one budgeted at $19.5 million for fiscal year 2004 to research and develop equipment for detecting threats in containerized air cargo and mail. Under this project, TSA is considering funding several technologies, including high-power computed tomography and X-ray combined with pulsed fast neutron analysis. In its July 2004 report, the National Commission on Terrorist Attacks Upon the United States expressed concerns about checked baggage, checkpoint, and cargo security. The commission recommended that TSA and Congress give priority attention to improving the ability of screening checkpoints to detect explosives on passengers. The commission also stated that TSA should (1) expedite the installation of advanced in-line baggage screening equipment; (2) require that every passenger aircraft carrying cargo deploy at least one hardened container to carry any suspect cargo; and (3) intensify its efforts to identify, track, and appropriately screen potentially dangerous cargo in both aviation and maritime modes. In addition to its R&D projects to enhance aviation security, in fiscal years 2003 and 2004, TSA spent or budgeted R&D funds for projects to improve security for maritime and land transportation, including the following: The Transit and Rail Inspection Pilot will assess the feasibility of using emerging technologies to screen passengers and their checked baggage and carry-on items for explosives at rail stations and aboard trains.
In May 2004, TSA completed a 30-day test to screen Amtrak and commuter rail passengers for explosives at a Maryland train station by having them walk through a trace detection portal that TSA is also considering for use at airports. According to TSA officials, the test provided useful information about customer-screening wait times, the effectiveness of screening equipment in a non-climate-controlled environment, and the cost and impact of using the technology for Amtrak and commuter rail operations. In addition, in June and July 2004, TSA tested the screening of Amtrak passengers' checked baggage for explosives at a Washington, D.C., train station, and in July 2004, TSA tested the screening of passengers and their carry-on items for explosives on a Connecticut commuter rail train while the train was in motion. The Transportation Worker Identification Credential is intended to establish a uniform, nationwide standard for the secure identification of as many as 12 million public- and private-sector workers who require unescorted physical or cyber access to secure areas at airports and other transportation facilities, such as seaports and railroad terminals. TSA was not able to provide funding information for the program for fiscal years 2003 and 2004. As we have previously reported, airport and seaport officials have expressed concern about how much the program would cost and who would pay to implement it. We have recently completed a separate review that looked at pilot tests of the program at maritime ports and expect to issue a report to the House Transportation and Infrastructure Committee by September 30, 2004. The Conveyance Tracking Program is investigating the capability of technologies that are available or nearly available for the secure tracking of hazardous materials shipments by rail and truck. TSA budgeted about $1 million for this program for fiscal year 2004. Operation Safe Commerce is designed to improve container supply chain security by testing practices and commercially available technologies in an operational environment, including technologies for tracking and tracing containers, nonintrusive detection of threats, and sealing containers. In June 2003, TSA awarded grants to the ports of Los Angeles and Long Beach, California; Seattle and Tacoma, Washington; and the Port Authority of New York and New Jersey. TSA was not able to provide funding information for the program for fiscal years 2003 and 2004. For our review, we classified R&D projects according to the following four phases: Basic research includes all scientific efforts and experimentation directed toward increasing knowledge and understanding in those fields of physical, engineering, environmental, social, and life sciences related to long-term national needs. Applied research includes all efforts directed toward the solution of specific problems with a view toward developing and evaluating the feasibility of proposed solutions. Advanced development includes all efforts directed toward projects that have moved into the development of hardware for field experiments and tests. Operational testing includes the evaluation of integrated technologies in a realistic operating environment to assess the performance or cost reduction potential of advanced technology. Although basic research, the earliest of these phases, typically entails higher risks, it also offers higher payoffs than R&D in later phases.
Thus far, TSA has focused its R&D efforts on making improvements to deployed technologies and testing and evaluating near-term technologies, and a senior TSA official acknowledged that the agency needs to do more basic research. Although many of TSA's projects are in later phases of development, the agency has not estimated deployment dates for 133 of the 146 projects that it funded in fiscal years 2003 and 2004. According to TSA officials, deployment dates are not always predictable because deployment is dependent on factors such as the manufacturing capacity of the private sector or the availability of funds for purchasing and installing equipment. However, we generally believe that R&D program managers should estimate deployment dates for projects that are beyond the basic research phase because deployment dates can serve as goals that the managers can use to plan, budget, and track the progress of projects. For the 13 projects for which TSA had estimated deployment dates, deployment is scheduled for fiscal years 2004 through 2014. Nine of the 13 projects are scheduled for deployment in fiscal year 2005 or 2006, including the Phoenix project, which is intended to enhance existing checked baggage screening systems and develop new screening technologies. One of the remaining 4 projects, the Manhattan II project, is scheduled for deployment from fiscal year 2009 through fiscal year 2014. Progress on some R&D projects was delayed in fiscal year 2003 when TSA transferred about $61 million, more than half of its $110 million R&D appropriation, to operational needs, such as personnel costs for screeners. As a result, TSA delayed several key R&D projects related to checked baggage screening, checkpoint screening, and air cargo security. For example, TSA delayed the development of a device to detect weapons, liquid explosives, and flammables in containers found in carry-on baggage or passengers' effects, as well as the development and testing of a walk-through portal for detecting traces of explosives on passengers. According to a TSA official, the agency does not plan to transfer R&D funds to other programs in fiscal year 2004. Overall, DHS increased its funding for transportation security R&D from about $26 million in fiscal year 2003 to about $88 million in fiscal year 2004, as shown in table 3. The President's fiscal year 2005 budget request includes about $1 billion for the Science and Technology Directorate, which includes some transportation security R&D. In fiscal year 2003, DHS spent $12.6 million, or almost half, of its $26 million transportation security R&D budget on projects related to multiple modes of transportation. For fiscal year 2004, DHS increased its budget for multimodal projects to $20 million; this increase reflects the costs of funding pilot programs with the Port Authority of New York and New Jersey to test radiation and nuclear detection devices. For fiscal year 2004, DHS budgeted almost $63 million, or 72 percent of its $88 million, for aviation projects, compared with almost $4 million spent in fiscal year 2003. This increase provides about $60 million in fiscal year 2004 funds to develop technical countermeasures to minimize the threat posed to commercial aircraft by shoulder-fired missiles, also known as man-portable air defense systems (MANPADS). Figure 4 shows a MANPADS that could be used to attack a commercial aircraft.
DHS decreased its budget for transit security R&D projects from $5 million in fiscal year 2003 to zero in fiscal year 2004; this decrease reflects the completion of a project to test chemical detectors in subway stations. DHS also increased its budget for highway security R&D projects from $1 million in fiscal year 2003 to $3 million in fiscal year 2004. This increase funds a project to research and develop technology for detecting truck bombs. Figure 5 shows an example of a truck bomb detection system. Like TSA, DHS has thus far concentrated its efforts on later phases of R&D and, according to a senior DHS official, intends to do more basic research in fiscal year 2006 and beyond. Of the 56 projects that DHS funded in fiscal years 2003 and 2004, DHS has deployed technologies related to 7, has estimated deployment dates for 11, and has not estimated deployment dates for the remaining 38. Estimated deployment dates for the 11 projects range from fiscal year 2004 to fiscal year 2007. In addition to the transportation security R&D projects funded by TSA and DHS, DOT and NASA funded some such projects. DOT spent about $8 million on transportation security R&D in fiscal year 2003 and has budgeted about $31 million for fiscal year 2004, as shown in table 4. For example, in fiscal year 2003, DOT spent about $2 million to develop and field-test a system to track trailers containing hazardous materials when they are not attached to a tractor; for fiscal year 2004, it budgeted $20 million to develop a secure information network to share air traffic control information with DHS and others. Although NASA did not fund any transportation security R&D in fiscal year 2003, it has budgeted about $18 million for fiscal year 2004 for aviation security R&D projects. For example, NASA budgeted about $5 million for technologies and methods to provide accurate information so that pilots can avoid protected airspace, continually verify identity, and prevent unauthorized persons from gaining access to flight controls. Members of our panel of transportation security and technology experts had mixed views on whether the distribution of transportation security R&D projects by mode was reasonable and raised questions about whether some projects should be funded. According to several panelists, the distribution of transportation security R&D projects by mode and program area was reasonable. However, several other panelists said that aviation has been overemphasized at the expense of maritime and land modes; two panelists felt that R&D is focused too heavily on threats that were prominent in the 1970s and 1980s, such as airplane hijackings and bombings; and one panelist said that the selection of projects seemed to be inappropriately based on the most recent terrorist event or perceived threat. While the panelists had different and sometimes conflicting views about the reasonableness of the distribution of projects, many of them said that project selections should be based on current risk assessments. As explained in the next section of this report, TSA and DHS plan to select their R&D projects on the basis of risk assessments, which have not yet been completed for all modes of transportation. When asked whether they thought there were any transportation security R&D projects in the agencies' portfolios that did not merit funding, the panelists identified several funded by TSA that they believed did not qualify as R&D projects.
For example, one panelist did not agree with funding projects that were designed to enhance existing technologies, such as a $30,000 project to test a prototype of a new, handheld ion mobility spectrometry explosives trace detector. According to this panelist, at least two very good ion mobility spectrometry handheld units can be purchased off the shelf. In commenting on a draft of this report, DHS said that TSA funded this project because the vendor demonstrated a promising technology. Panelists also suggested technologies that they believed merited R&D attention. For example, neutron-based inspection technology could be used to determine whether explosives might be concealed in containers. A ground-based system to scan trucks carrying cargo bound for passenger aircraft, ships, and highways could also be tested. A multifunctional portal that tests for metals, explosives, narcotics, and chemicals in near real time could help to address the limitations of current checkpoint screening equipment. A standard piece of luggage for testing deployed explosives detection systems could be developed to ensure that the systems maintain acceptable performance capabilities. In commenting on a draft of this report, DHS addressed several technologies and projects, including neutron inspection technology, a multifunctional portal project, and a project to develop a standard piece of luggage for testing explosives detection systems. Specifically, DHS said that TSA is looking at pulsed fast neutron analysis, a technology that uses X-ray images in conjunction with neutron interrogation and substance identification. According to DHS, TSA considers the development of a multifunctional portal critical because it creates opportunities for fusing or integrating technologies, a long-standing transportation goal. Finally, DHS said that a standard piece of luggage had been developed to validate the performance of two different explosives detection systems to ensure that the systems are performing to their certification levels. Moreover, DHS noted in its comments that TSA has two advisory committees, the National Academy of Sciences and the Security Advisory Panel, whose members have expertise in various modes of transportation. TSA and DHS have made some progress in managing their transportation security R&D programs according to applicable laws and R&D best practices, but their efforts are incomplete in the following areas: preparing strategic plans that contain goals and measurable objectives, preparing and using risk assessments to select and prioritize their R&D projects, maintaining a comprehensive database of R&D projects, coordinating their R&D programs with those of other government agencies, reaching out to transportation stakeholders to help identify R&D needs, and accelerating R&D. The Homeland Security Act also authorizes DHS to solicit R&D proposals for security technologies from outside entities and requires DHS to integrate the department's R&D programs. Although the laws do not contain deadlines for TSA and DHS to complete these requirements, until the agencies do so, it is difficult to determine whether they are making R&D investments cost-effectively and addressing the highest transportation risks. In commenting on their progress in managing TSA's R&D program, TSA officials said that the agency was focusing initially on hiring new airport screeners and meeting statutory requirements to install new screening equipment. They further noted that a substantial transfer of R&D funds in fiscal year 2003 delayed certain projects. DHS officials said that the department is a start-up organization.
Table 5 shows the progress TSA and DHS have made in complying with statutory requirements and best practices for managing their R&D programs. The Homeland Security Act requires DHS to prepare a strategic plan that identifies goals and includes annual measurable objectives for coordinating the federal government's civilian efforts in developing countermeasures to terrorist threats. Similarly, R&D best practices identified by the National Academy of Sciences indicate that research programs should be described in strategic and performance plans and evaluated in performance reports. TSA has prepared strategic plans for both the agency and its R&D program that contain performance goals, such as deterring foreign and domestic terrorists and other individuals from harming or disrupting the nation's transportation system. Although we reported in January 2003 that TSA had established an initial set of 32 performance measures, none of them are contained in TSA's strategic plans or directly pertain to R&D. DHS has prepared a strategic plan for the department, but the plan's broad objective of developing technology and capabilities to detect and prevent terrorist attacks is not supported by more specific R&D performance goals and measures in any program area, including transportation. A DHS official said that the department is preparing a separate strategic plan for its R&D program that will include more specific goals and measurable objectives. Another DHS official said that the plan will include input from the leaders of the Science and Technology Directorate's functional areas, one of which is transportation. DHS has indicated that the Science and Technology Directorate's strategic planning process includes (1) determining strategic goals, threats, and vulnerabilities for the next 5 years and (2) developing a list of prioritized projects for fiscal years 2005 through 2010. In a May 2004 report on DHS's use of the DOE national laboratories for research on technologies for detecting and responding to nuclear, biological, and chemical threats, we recommended that DHS complete a strategic plan for R&D. Until TSA and DHS prepare R&D strategic plans with goals and measurable objectives, Congress and other stakeholders do not have a reliable means of assessing TSA's and DHS's progress toward achieving their R&D goals. The Aviation and Transportation Security Act requires TSA to use risk management principles in making R&D funding decisions. The Homeland Security Act requires DHS to establish R&D priorities for detecting, preventing, protecting against, and responding to terrorist attacks and to prepare comprehensive assessments of the vulnerabilities of the nation's key resources and critical infrastructure sectors, one of which is transportation. In addition, under the Homeland Security Act, DHS's Information Analysis and Infrastructure Protection Directorate is responsible for receiving and analyzing information from multiple sources, including local, state, and federal government agencies and private sector entities, and integrating the information, analyses, and vulnerability assessments to identify protective priorities. We have consistently advocated using a risk management approach in responding to national security and terrorism challenges.
In the context of homeland security, risk management is a systematic and analytical process of (1) considering the likelihood that a terrorist threat will endanger an asset, individual, or function and (2) reducing the risk and mitigating the consequences of an attack. In our work on homeland security issues, we have identified threat, vulnerability, and criticality assessments as key elements of a risk management approach. These elements are defined as follows: A threat assessment identifies and evaluates potential threats on the basis of factors such as capabilities, intentions, and past activities. This assessment represents a systematic approach to identifying potential threats before they materialize and is based on threat information gathered from both the intelligence and the law enforcement communities. A vulnerability assessment identifies weaknesses that may be exploited by identified threats and suggests options to address those weaknesses. A criticality assessment evaluates and prioritizes assets and functions in terms of specific criteria, such as their importance to public safety and the economy. The assessment provides a basis for identifying which structures or processes are relatively more important to protect from attack. To select and prioritize their R&D projects, TSA and DHS have established processes that include risk management principles. According to TSA officials, TSA has completed threat assessments for all modes of transportation but has yet to complete vulnerability and criticality assessments. A DHS official told us that the department has started to conduct risk assessments of critical infrastructure sectors but does not plan to start its assessment of the transportation sector until 2005. Without complete risk assessments, Congress and other stakeholders are limited in their ability to assess whether the millions of dollars that are being invested in transportation security R&D projects are being spent cost-effectively and to address the highest transportation security risks. In the absence of completed risk assessments, TSA and DHS officials are using available threat intelligence, expert judgment, congressional mandates, mission needs, and information about past terrorist incidents to select and prioritize their R&D projects. TSA and DHS officials said that they obtain threat intelligence from the government’s intelligence community to help make R&D decisions. TSA officials said that TSA’s Chief Technology Officer receives daily intelligence briefings, and that the agency is using threat information to select R&D projects but is not yet using formal threat assessments to make those R&D decisions. In addition, DHS’s Inspector General reported in March 2004 that although many Science and Technology officials agreed on the importance of maintaining a relationship with the Information Analysis and Infrastructure Protection Directorate, staff below them were not actively involved in obtaining terrorist threat information from this directorate and using the information to help select new homeland security technologies. In May 2004, TSA prepared terrorist threat assessments for all modes of transportation. In addition, in June 2004, a TSA official said that TSA is in the process of preparing vulnerability and criticality assessments for all modes of transportation. 
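Neither TSA nor DHS prescribes a formula for combining these three elements, and this report does not describe one. Purely as an illustration of how the elements could yield comparative priorities, the sketch below scores hypothetical assets as the product of threat, vulnerability, and criticality; the assets, scales, and numbers are all invented.

```python
# Illustrative only: a simple multiplicative index combining the three
# assessment elements defined above. Neither TSA nor DHS prescribes this
# formula, and all assets and scores here are hypothetical.
from dataclasses import dataclass

@dataclass
class AssetRisk:
    name: str
    threat: float         # likelihood the asset is targeted, scaled 0-1
    vulnerability: float  # likelihood an attack on the asset succeeds, scaled 0-1
    criticality: float    # consequence score (public safety, economy), scaled 0-10

    @property
    def risk_score(self) -> float:
        return self.threat * self.vulnerability * self.criticality

assets = [
    AssetRisk("checked-baggage screening point", threat=0.7, vulnerability=0.4, criticality=9.0),
    AssetRisk("commuter rail station", threat=0.5, vulnerability=0.6, criticality=7.0),
    AssetRisk("hazardous-materials truck route", threat=0.3, vulnerability=0.5, criticality=8.0),
]

# Rank the hypothetical assets so R&D spending can be weighed against relative risk.
for asset in sorted(assets, key=lambda a: a.risk_score, reverse=True):
    print(f"{asset.name}: risk score {asset.risk_score:.2f}")
```

Any such ranking is only as reliable as the underlying assessments, which is why the incomplete vulnerability and criticality assessments discussed here matter for project selection.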
As one example of this assessment work, in 2003 TSA supported the government's strategy to reduce the threat that shoulder-fired missiles pose to commercial aircraft by conducting vulnerability assessments at all major airports, using information from local agencies and FAA to identify major launch sites around the airports. In addition to these assessments, officials in DHS's Information Analysis and Infrastructure Protection Directorate said they were working, in a pilot phase, toward preparing national comparative risk assessments of critical vulnerabilities that would allow comparisons to be made across different infrastructure sectors, such as transportation. The officials said the pilot program would focus on other infrastructure sectors, such as chemical and nuclear plants, before addressing the transportation sector, which they expected to work on in fiscal year 2005. However, they did not know when risk assessments would be completed for all modes of transportation. TSA has agreed with a recommendation in our past work that it should apply a risk management approach to strengthen security in aviation and in other modes of transportation. TSA indicated that it is developing four tools, including software, that will help assess threats, criticalities, and vulnerabilities, and that it plans to create risk assessment models for all modes of transportation during fiscal year 2004. In its July 2004 report, the National Commission on Terrorist Attacks Upon the United States also pointed out the importance of risk management and recommended that the government identify and evaluate the transportation assets that need to be protected; set risk-based priorities for defending them; select the most practical and cost-effective ways of doing so; and then develop a plan, a budget, and funding to implement the effort. The plan should assign roles and missions to the relevant federal, state, and local authorities and to private stakeholders. We agree with the commission's recommendations and are making similar recommendations. R&D best practices identified by the National Research Council indicate that a research program should maintain a complete database of projects to help prioritize and justify program expenditures. Similarly, we have stated that an R&D program should use a management information system that readily provides information to track the performance of projects. TSA's and DHS's R&D managers were not able to provide us with complete information on all projects in their R&D portfolios. For example, for the 146 projects that it funded in fiscal years 2003 and 2004, TSA was not able to provide information on anticipated deployment dates for 91 percent of the projects, the current phase of development for 49 percent, or the amounts obligated and budgeted for 8 percent, including 3 projects (CAPPS II, the Transportation Worker Identification Credential, and Operation Safe Commerce) that were appropriated tens of millions of dollars in both fiscal years 2003 and 2004. For the 56 projects that it funded in fiscal years 2003 and 2004, DHS was not able to provide information on anticipated deployment dates for 68 percent, the current phase of development for 14 percent, or the amounts obligated and budgeted for 9 percent. Although TSA's and DHS's databases contain some information, it is scattered among several computer files and paper documents and cannot be easily retrieved or analyzed. Consequently, additional staff time is needed to prepare documents from different reports, and compiling the information could result in errors and omissions.
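The report does not describe the structure of the agencies' files, so the sketch below is only an illustration of the kind of project record and summary queries a management information system of the sort described above might support. The projects, amounts, and dates are hypothetical, and the fields mirror the items GAO found incomplete (development phase, obligated and budgeted amounts, and estimated deployment dates).

```python
# Illustrative only: a minimal project record and two summary queries of the
# kind an R&D management information system could support. Projects, amounts,
# and dates are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RDProject:
    name: str
    mode: str                          # e.g., aviation, maritime, rail, multimodal
    phase: str                         # basic research, applied research, advanced development, operational testing
    obligated_fy2003: Optional[float]  # dollars obligated; None = unknown
    budgeted_fy2004: Optional[float]   # dollars budgeted; None = unknown
    est_deployment_fy: Optional[int]   # None = no estimated deployment date

portfolio = [
    RDProject("trace detection portal", "multimodal", "operational testing", 1_000_000, 6_400_000, 2006),
    RDProject("cargo threat detection", "aviation", "applied research", 700_000, 19_500_000, None),
    RDProject("vehicle screening on ferries", "maritime", "advanced development", None, 3_600_000, None),
]

# Query 1: projects past basic research that still lack an estimated deployment date.
missing_dates = [p.name for p in portfolio
                 if p.phase != "basic research" and p.est_deployment_fy is None]
print("no deployment estimate:", missing_dates)

# Query 2: fiscal year 2004 budget by transportation mode.
by_mode: dict = {}
for p in portfolio:
    if p.budgeted_fy2004 is not None:
        by_mode[p.mode] = by_mode.get(p.mode, 0.0) + p.budgeted_fy2004
print("fiscal year 2004 budget by mode:", by_mode)
```

Even a simple structure like this makes gaps visible, such as projects past basic research that carry no deployment estimate.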
Without accurate, complete, and timely information, TSA and DHS managers are limited in their ability to effectively monitor their R&D programs and ensure that R&D funds are being used to address the highest priority transportation security risks. In commenting on a draft of this report, DHS said that TSA had recently developed a database that will allow it to track milestones, funding, and deployment information for individual projects.
The Aviation and Transportation Security Act and the Homeland Security Act require DHS to coordinate its R&D efforts with those of other government agencies. Similarly, R&D best practices indicate that R&D organizations should coordinate to help fill research gaps and leverage resources. In addition, R&D best practices indicate that TSA and DHS should reach out to stakeholders, such as the transportation industry, to identify their security R&D needs. However, TSA's and DHS's efforts to coordinate with other federal agencies on transportation security R&D and to reach out to transportation industry associations on the industry's security R&D needs have been limited.
The Homeland Security Act requires DHS to coordinate with other executive agencies in developing and carrying out the Science and Technology Directorate's agenda to reduce duplication and identify unmet needs. In addition, the Aviation and Transportation Security Act gives TSA responsibility for coordinating terrorism countermeasures with "departments, agencies, and instrumentalities of the United States Government." For TSA and DHS to select the best technologies to enhance transportation security, it is important that they have a clear understanding of the R&D projects currently being conducted, both internally and externally. TSA and DHS have coordinated with each other on some of their transportation security R&D programs, such as efforts to counter the threat posed to commercial aircraft by MANPADS; develop technologies for detecting chemical, biological, radiological, and nuclear threats; and develop explosives detection systems. However, TSA and DHS did not coordinate their R&D portfolios in fiscal year 2003. A DHS official said that the department reviewed TSA's fiscal year 2004 R&D portfolio. The official said that it was not DHS's intention to change TSA's R&D portfolio but to learn what TSA was doing and to leverage resources.
R&D best practices also emphasize the importance of coordinating R&D in the transportation security field. A 2002 Transportation Research Board study on the role of science and technology in transportation concluded that while TSA should have its own analysis and research capability, it should also have the ability to draw on the "rich and varied R&D capabilities within the transportation sector, as well as those of the federal government and the science and technology community at large." Furthermore, the report said that if TSA views the R&D activities of DOT's modal agencies from a broader systems perspective, it can help fill research gaps, monitor the progress of these activities, and observe where additional investments might yield large benefits. A member of our transportation security and technology panel suggested that TSA and DHS could be more effective if they systematized and formalized their R&D coordination efforts at the highest levels and included other organizations, such as DOT and the Transportation Research Board of the National Research Council.
Coordination is limited between TSA and DOT and between DHS and DOT, which continues to conduct some transportation security R&D. DOT modal administration officials said that limited communication was occurring between DOT and TSA and between DOT and DHS about ongoing DOT R&D projects, but none of these officials said that TSA or DHS had provided input on which R&D projects the modal administrations should conduct or had asked the modal administrations for input on which transportation security R&D projects TSA and DHS should conduct. An official from one modal administration said that TSA should consult DOT agencies about their R&D plans because, in some cases, they have expertise about the various transportation modes and are more aware than TSA of the R&D needs and concerns of the transportation industry. For example, a Federal Highway Administration (FHWA) R&D official told us that FHWA has conducted extensive research on tracking freight movement and has mapped out the movement of freight across transportation modes. This official said these efforts could help improve freight security. Other DOT R&D officials expressed similar views about their R&D programs and said they need to coordinate their security R&D programs with TSA and DHS to leverage resources and knowledge and to avoid duplication. An official from one DOT modal administration (the Federal Railroad Administration) said that although TSA and DHS had no formal input into the agency's R&D plans, all of the security-related R&D projects it had conducted since 2001 were at the request of TSA or DHS. DOT R&D officials also said that the DOT modal administrations should continue to conduct some security R&D because they have research personnel who are experts in various transportation modes and could help TSA and DHS with their security R&D efforts.
Because we found during the course of our review that NASA was also conducting some transportation security R&D, we asked NASA officials about the extent of coordination between NASA and TSA and between NASA and DHS. NASA officials said that they have effective coordination with TSA on the transportation security R&D they conduct. They said that TSA and NASA coordinated on identifying the types of R&D projects that NASA should undertake to best help meet TSA's needs. NASA officials also said that at DHS's request, NASA provided input to the Science and Technology Directorate during the directorate's strategic planning process. In addition, NASA officials said that they are working with TSA on a memorandum of agreement for their R&D programs.
TSA and DHS officials said that coordination with other agencies and R&D organizations is occurring at the project level and that some coordination is based on personal relationships. In discussing DHS's coordination with other agencies in July 2004, a DHS official said that DHS relies heavily on the Office of Science and Technology Policy, a component of the Executive Office of the President, to coordinate R&D. He also noted that the department was only a year old and that as it matured, DHS would know more about the R&D activities of other agencies. In creating DHS, Congress intended that DHS draw on the scientific expertise of the DOE national laboratories, which make up the world's largest system of laboratories for advanced research in support of national energy and defense needs. The Homeland Security Act requires DHS to establish an Office of National Laboratories to coordinate its R&D with that of DOE's national laboratories.
DHS has established this office, and in February 2003, DHS and DOE entered into an agreement allowing DOE to accept and perform work for DHS on an equal basis with other laboratory work. DHS and TSA are sponsoring transportation security-related R&D at several national laboratories, including Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest, and the Idaho National Engineering and Environmental Laboratory. Overall, laboratory officials told us they have an adequate level of communication and coordination with TSA and DHS about their ongoing R&D projects, but some officials believe TSA and DHS could use the laboratories more as resources for transportation security R&D and would like more information about TSA's research needs.
In a 2001 report, the Transportation Research Board recommended that research be closely connected to its stakeholders, such as transportation providers, to help ensure relevance and program support. According to the report, stakeholders are more likely to use the research results if they are involved in the process from the beginning. However, most transportation industry association officials we interviewed said that TSA and DHS have not reached out to them to obtain information on their security R&D needs. Consequently, the transportation industry's security R&D needs may not be adequately reflected in TSA's and DHS's R&D portfolios. An air cargo association official said that TSA contacted the association to participate in an air cargo security working group of the Aviation Security Advisory Committee, a TSA-sponsored advisory group, where the association was able to discuss the air cargo industry's security R&D needs. Some transportation association officials said that TSA and DHS should contact them to obtain input on their research priorities to determine whether the proposed technologies would be useful, avoid duplication of research that the associations are sponsoring, and leverage resources. Officials from another aviation association commented that, in contrast to their relationship with TSA, they had an effective relationship with FAA. These officials noted that information sharing and communication occurred more frequently with FAA, partly because FAA management recognized the importance of obtaining input from the users of FAA's services, whereas TSA and DHS have not. An official from a state highway association said that although TSA and DHS officials have participated in transportation research projects that the Transportation Research Board is conducting for the association, TSA and DHS have not directly contacted the association about its security R&D needs. A TSA official said that TSA reaches out to aviation associations and other organizations on R&D but has not formalized this process.
The Homeland Security Act authorizes DHS to solicit proposals to address vulnerabilities and award grants, cooperative agreements, and contracts with public or private entities, including businesses, federally funded R&D centers, and universities. TSA and DHS have taken some actions to use this authority, but some potential technology providers believe that more information and communication are needed. One way that TSA and DHS have reached out to the private sector is through their membership in the Technical Support Working Group (TSWG), a joint program of the Departments of State and Defense that identifies, prioritizes, and coordinates interagency R&D requirements to combat terrorism.
TSA and DHS have used TSWG to issue broad agency announcements, which request proposals from private and public entities for projects that address specific R&D needs. These solicitations have generated substantial numbers of responses. For example, TSWG received more than 3,340 responses to a broad agency announcement that it issued for DHS in May 2003 soliciting proposals for multiple homeland security R&D projects, including a system for screening rail passengers and baggage. A DHS official said that as DHS matures, it intends to rely less on TSWG and more on the Homeland Security Advanced Research Projects Agency (HSARPA), DHS's external funding arm.
TSA and DHS have also reached out to the private sector by linking their Web sites to the Federal Business Opportunities Web site, which informs potential technology providers about opportunities for conducting homeland security R&D projects. In addition, TSA's Web site invites potential technology providers and others to submit their ideas about innovative security technologies that could contribute to TSA's work on aircraft hardening, baggage and cargo screening, credentialing, physical security, and electronic surveillance. According to TSA, it has evaluated over 1,000 proposals submitted in response to this invitation. However, representatives of several private companies told us of difficulties they had experienced in trying to communicate with TSA, navigate its Web site, obtain information about its R&D program, and understand its current transportation security R&D priorities. For example, a company official told us that his company was forced to guess about TSA's long-term R&D strategy and that manufacturers do not want to make a large investment in developing new technologies without knowing whether TSA will embrace those technologies. This company official suggested that TSA communicate its R&D goals promptly to vendors. Similarly, some private company representatives told us that they did not have sufficient information about DHS's transportation security R&D priorities and requirements to adequately respond to solicitations. In commenting on a draft of this report, DHS noted that TSA recently established a working group to update and improve the current Web site's discussion of technology ideas, products, and services to make it more user-friendly and that TSA plans to implement the improvements early next year.
HSARPA has also conducted various forms of outreach with potential technology providers. In September 2003, for example, it held a bidders' conference to discuss the release of a solicitation on detection systems for biological and chemical countermeasures. In addition, in November 2003, HSARPA conducted a best practices workshop that allowed potential technology providers to comment on how DHS could best keep industry informed about its priorities, make industry aware of agency solicitations, and manage the relationship between industry and the agency. The industry participants also stressed the importance of communication between them and DHS. In addition, some participants suggested that DHS issue early drafts of solicitations to allow industry to gain a better understanding of DHS's needs. Following the workshop, in January 2004, DHS issued a draft solicitation for technologies to detect radiological and nuclear materials for industry comment before issuing the final version. TSA and DHS have also used universities to conduct some of their R&D.
For example, in June 2004, TSA indicated that it had 24 grants with colleges and universities. In addition, the Homeland Security Act requires DHS to establish university-based centers for homeland security. According to DHS, the centers will conduct multidisciplinary research on homeland security. In November 2003, DHS announced that it had selected the University of Southern California as its first Homeland Security Center of Excellence. DHS will provide $12 million over 3 years for the university to conduct a risk analysis on the economic consequences of terrorist threats and events. The study will address both the targets and means of terrorism, with an emphasis on protecting the nation's critical infrastructure, such as transportation systems.
Under the Aviation and Transportation Security Act, TSA is required to accelerate the research, development, testing, and evaluation of, among other things, explosives detection technology for checked baggage and new screening technology for carry-on items and other items being loaded onto aircraft, including cargo, and for threats carried on persons. The Homeland Security Act requires DHS's HSARPA to accelerate the prototyping and development of technologies that "would address homeland security vulnerabilities." Although the Homeland Security Act authorized a $500 million acceleration fund in fiscal year 2003, a DHS official said that no funds were specifically appropriated for that purpose. Both TSA and DHS have taken steps to address congressionally mandated requirements to accelerate security technologies, but they are operating without goals and measurable objectives. As a result, it is difficult to determine what progress the agencies have made toward accelerating R&D projects. Although TSA does not yet have goals and objectives for measuring acceleration, the agency has funded the Phoenix project, among others, to accelerate baggage screening technologies in the near term. For fiscal year 2004, DHS budgeted $75 million for accelerating technologies through its Rapid Prototyping Program. For example, DHS, in coordination with TSWG, issued a broad agency announcement in May 2003 to support the development of technologies that can be rapidly prototyped and deployed to the field. Furthermore, in January 2004, DHS issued a broad agency announcement to rapidly develop detection systems for radiological and nuclear countermeasures.
Although the Homeland Security Act requires TSA to remain a distinct entity until at least November 2004, another provision of the act requires DHS to integrate all of the department's R&D activities. Until that integration occurs, TSA and other DHS components that conduct transportation security R&D are operating separately. However, DHS has made some efforts to promote R&D coordination within the department, such as holding meetings with the different components to discuss R&D activities and preparing inventories of the DHS components' R&D capabilities and ongoing projects. DHS officials said they are preparing a plan to meet a directive from the Secretary of Homeland Security to integrate the department's R&D activities by 2005.
The nation's transportation systems, many of which are open and accessible, are highly vulnerable to terrorist attack. Whether new technologies can be researched, developed, and deployed to reduce the vulnerability of these systems depends largely on how effectively DHS and TSA manage their transportation security R&D programs.
The National Research Council has stated that effective management of federal R&D programs should include consistent funding of basic research because basic research offers opportunities for significant improvements in capabilities. However, project information provided by TSA and DHS did not show that any of the transportation security R&D projects that they funded in fiscal year 2003 and budgeted for in fiscal year 2004 were in the basic research phase. While TSA and DHS recognize the importance of basic research, they are focusing their efforts on the near-term development and deployment of technologies.
Although DHS is working toward complying with legal requirements and implementing best practices for managing its R&D program, it is operating without a strategic plan for its R&D program. Furthermore, although TSA and DHS officials have said that they plan to use risk assessments to select and prioritize R&D projects, TSA has not completed vulnerability and criticality assessments, which are key components of risk assessments, for all modes of transportation. In addition, DHS has not yet completed risk assessments of the infrastructure sectors, such as transportation. As a result, Congress does not have reasonable assurance that the hundreds of millions of dollars that are being invested in transportation security R&D are being spent cost-effectively to address the highest priority transportation security risks. In addition, the National Commission on Terrorist Attacks Upon the United States recommended that the government identify and evaluate the transportation assets that need to be protected; set risk-based priorities for defending them; select the most practical and cost-effective ways of doing so; and then develop a plan, a budget, and funding to implement the effort.
TSA and DHS also do not have adequate databases to monitor and manage their spending of the hundreds of millions of dollars that Congress has appropriated for R&D. As DHS integrates its R&D programs, including TSA's, it will be important for the department to have accurate, complete, current, and readily accessible project information that it can use to effectively monitor and manage its R&D portfolios.
The limited evidence of coordination that we found between TSA and DHS, as well as between each of these agencies and other agencies such as DOT, does not provide assurance that R&D resources are being leveraged, research gaps are being identified and addressed, and duplication is being avoided. In our June 2003 report on transportation security challenges, we recommended that DHS and DOT use a mechanism such as a memorandum of agreement to clearly delineate their respective roles and responsibilities. DHS and DOT disagreed with this recommendation because they believed that their roles and responsibilities were already clear. However, we continue to believe that DHS's and DOT's roles and responsibilities for transportation security, including their respective security R&D programs, should be clarified because the Aviation and Transportation Security Act gives TSA responsibility for securing all modes of transportation but does not eliminate the DOT modal administrations' existing statutory responsibilities for the security of different modes of transportation. Finally, because most transportation industry associations told us that TSA and DHS have not contacted them about their security R&D needs, the security R&D needs of transportation providers may not have been adequately considered.
To support efforts by TSA and DHS to maximize the advantages offered by basic research, help select and prioritize R&D projects, better monitor and manage their R&D portfolios, enhance coordination with one another and with other organizations that conduct transportation security R&D, and improve their outreach to the transportation industry, we are making five recommendations. Specifically, we recommend that the Secretary of Homeland Security and the Assistant Secretary of Homeland Security for the Transportation Security Administration
ensure that their transportation security R&D portfolios contain projects in all phases of R&D, including basic research;
complete (1) strategic plans containing measurable objectives for TSA's and DHS's transportation security R&D programs and (2) risk assessments (threat, vulnerability, and criticality) for all modes of transportation, and use the results of the risk assessments to help select and prioritize R&D projects;
develop a database that will provide accurate, complete, current, and readily accessible project information for monitoring and managing their R&D portfolios;
develop a process with DOT to coordinate transportation security R&D, such as a memorandum of agreement identifying roles and responsibilities and designating agency liaisons, and share information on the agreed-upon roles and responsibilities with transportation stakeholders; and
develop a vehicle to communicate with the transportation industry to ensure that its security R&D needs have been identified and considered.
We provided a draft of this report to DHS and DOT for comment. DOT provided comments on the draft report, which we have incorporated into the report as appropriate. DHS generally concurred with the report's findings and commented that the recommendations are key to a successful R&D program and that the department would continue to evaluate its R&D processes in light of the report's findings and recommendations. However, DHS believed that the report did not sufficiently recognize recent changes that have taken place, particularly at TSA. According to DHS, TSA has made great strides in defining R&D projects and linking them to mission needs and identified gaps. In response to these and other technical comments that DHS provided, we revised the report as appropriate. DHS also provided additional perspectives on our recommendations:
Recommendation: TSA and DHS should ensure that their transportation security R&D portfolios contain projects in all phases of R&D, including basic research. DHS said that TSA's Transportation Security Laboratory currently conducts basic research and that TSA's human factors program, Manhattan II project, and air cargo security projects include basic research. However, information provided by TSA in July 2004 in response to our request for data on projects, including their current phase of research, identified no projects in the basic research phase. This information from TSA covered the agency's R&D work on human factors, Manhattan II, and air cargo security. In addition, a senior TSA official said that the agency needed to do more basic research. In light of this information from TSA, we did not change our recommendation.
Recommendation: TSA and DHS should (1) complete strategic plans containing measurable objectives for TSA's and DHS's transportation security R&D programs and (2) complete risk assessments for all modes of transportation, and use the results of the risk assessments to help select and prioritize R&D projects.
DHS said that in 2004, it finalized its strategic plan, which defined missions and goals for all of the agencies under it, including TSA. DHS also said that the strategic plan being developed by TSA's Office of Security Technology would include measurable goals and milestones for R&D projects. However, DHS's strategic plan does not specifically address transportation security R&D, and neither TSA nor DHS has completed an R&D strategic plan containing measurable objectives. Therefore, we did not revise this recommendation.
Recommendation: TSA and DHS should develop a database that will provide accurate, complete, current, and readily accessible project information for monitoring and managing their R&D portfolios. DHS said that TSA had developed a system to track R&D projects' goals and milestones, acquisition, funding, testing, and deployment information. While such a project tracking system could address our recommendation, TSA struggled as recently as August 2004 to provide us with basic information on many of its R&D projects and, in the end, was unable to do so for a significant number. Therefore, we retained this recommendation.
Recommendation: TSA should develop a process with DOT to coordinate transportation security R&D, such as a memorandum of agreement identifying roles and responsibilities, and share this information with transportation stakeholders. DHS said that TSA is already working with DOT to avoid duplicative R&D efforts. In addition, DHS said that TSA would assess the benefits associated with a memorandum of agreement with DOT to determine whether one should be initiated. We continue to believe that a memorandum of agreement between TSA and DOT is the proper vehicle for coordinating R&D, not only to avoid duplication but also to leverage resources and identify unmet needs. Furthermore, DOT concurred with our finding that there is room for significant improvement in coordination between DOT and TSA and between DOT and DHS. DOT also agreed with our recommendation that a memorandum of agreement with DHS is the appropriate vehicle for improving the coordination of transportation security R&D.
Recommendation: TSA and DHS should develop a vehicle to communicate with the transportation industry to ensure that the industry's security R&D needs have been identified and considered. DHS said that TSA does and will continue to communicate with the transportation industry. Although DHS noted some actions that TSA is taking to reach out to the transportation industry, as we reported, most transportation industry officials we interviewed said that TSA and DHS had not reached out to them to obtain information about their transportation security R&D needs. Therefore, we did not change this recommendation.
Finally, DHS disagreed with our overall conclusion. According to DHS, this conclusion is contradicted by evidence contained in our report, namely, that the report underscores the difficulties of integrating multiple new agencies' missions, resources, and approaches. However, we believe that the report's evidence of incomplete strategic planning and risk assessment, inadequate information management, and insufficient coordination supports the conclusion.
Given that DHS generally concurred with all of the recommendations, which address these issues, and said they were key to a successful R&D program, we believe that implementing them will strengthen TSA's and DHS's ability to provide Congress with reasonable assurance that the hundreds of millions of dollars that are being invested in transportation security R&D are being spent cost-effectively to address the highest priority transportation security risks.
In its comments on the draft report, DOT said that its efforts to coordinate research planning with DHS and TSA support our finding that there is room for significant improvement. According to DOT, it offers substantial transportation expertise that could provide critical input for identifying and prioritizing the transportation security R&D agenda. DOT also said that it is anxious to work with DHS and TSA to create a mutually beneficial working environment that taps DOT's transportation experience and expertise while DOT benefits from DHS's security expertise. DOT believes that through effective interagency coordination, it could work with DHS and TSA to ensure that important research needs are met in areas such as critical transportation infrastructure protection, as well as in responding to, and recovering from, a terrorist attack on the transportation system. Finally, DOT said that coordinating R&D activities is an area that could benefit from being included in an annex to an overall memorandum of agreement between DOT and DHS, as we recommended. DOT said it fully supports the completion of a comprehensive memorandum of agreement with DHS and is working to bring one to fruition.
Copies of this report will be made available to others upon request. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. Key contributors to this report are listed in appendix V. If you have any questions about this report, please contact me on (202) 512-2834 or at siggerudk@gao.gov.
The objectives of this report were to review (1) the transportation security research and development (R&D) projects that the Transportation Security Administration (TSA), the Department of Homeland Security (DHS), and other agencies funded in fiscal year 2003 and have budgeted for in fiscal year 2004; the status of these projects; and the reasonableness of the distribution of these projects by mode and (2) the extent to which TSA and DHS are managing their transportation security R&D programs according to applicable laws and best practices recommended by the National Academy of Sciences and the National Research Council.
To help evaluate the reasonableness of the R&D projects that TSA, DHS, and DOT have funded in terms of the modes of transportation and program areas addressed, we convened a meeting of transportation security and technology experts. At our request, the National Research Council selected the experts, who were affiliated with state transportation departments, universities, national laboratories, private industry, and other organizations and were knowledgeable about transportation security technologies. Also at our request, the transportation security and technology experts provided comments on TSA's and DHS's management of their R&D programs. We conducted our review in Washington, D.C.; Arlington, Virginia; Atlantic City, New Jersey; Oak Ridge, Tennessee; and Los Alamos, New Mexico, from July 2003 through September 2004 in accordance with generally accepted government auditing standards.
According to a TSA official, private industry and universities are researching and developing several new and emerging technologies that are applicable to transportation security, in some cases without any funding from TSA or DHS. The official said that TSA has focused most of its R&D on making improvements to deployed technologies and on testing and evaluating near-term technologies. However, the official stated that TSA needs to start devoting more funding to researching and developing long-term, high-risk, but potentially high-payoff technologies. Examples of new and emerging technologies include the following:
Terahertz imaging uses terahertz radiation to create images of concealed objects or to reveal their chemical composition. The rays can be directed at a person or an object from a source, with reflected rays captured by a detection device. The Homeland Security Research Corporation (a private research organization) reports that terahertz imaging will be an excellent tool for screening baggage. Terahertz imaging has been used in the laboratory to detect explosives on people through several layers of clothing. TSA is considering funding the development of this technology for detecting explosives in containerized air cargo.
Nuclear resonance fluorescence imaging uses a high-intensity light source to identify the atomic composition of a target object. Nuclear resonance fluorescence imaging has the potential to detect explosives and nuclear materials in baggage, trucks, and cargo containers. According to a TSA official, TSA may fund R&D on this technology in the future.
Microsensors are miniature devices that convert information about the environment into an electrical form that can be read by instruments. There are many types of microsensors, some of which have the potential to detect explosives. In fiscal year 2003, TSA funded R&D at two national laboratories and NASA on several different types of microsensors. A TSA official said that several universities are currently doing work on other types of microsensors that have the potential to meet TSA's needs, but that TSA did not fund any of this work in 2004.
Automated detection algorithms are computer software that processes data obtained by detection systems and automatically indicates the presence of an explosive or weapon (a simple illustrative sketch follows this list). Although TSA has funded the development of such software for its currently deployed computed tomography explosives detection systems, it has not yet funded the development of such software to process images produced by emerging detection technologies, such as X-ray backscatter and millimeter wave. A TSA official believes that incorporating automated detection algorithms could substantially reduce the operational cost of future detection systems by reducing the need for screeners. According to this official, TSA may fund the development of these algorithms in the future.
Raman spectroscopy uses laser light to determine the chemical composition of an object and can be used to screen passengers, carry-on and checked baggage, cargo, and boarding passes for explosives.
Nuclear magnetic resonance directs radio waves at an object that has been placed in a magnetic field to determine the presence of explosives. Nuclear magnetic resonance can be used to screen liquids in containers in carry-on and checked baggage for explosives.
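The sketch below illustrates, in greatly simplified form, what an automated detection algorithm does: it applies a decision rule to measurements produced by a detection system and flags items for further inspection. It is an invented example, not a deployed or TSA-funded algorithm; the density and mass thresholds and the scan data are assumptions made for the illustration:

# Toy illustration of an automated detection rule; not a deployed or TSA-funded
# algorithm. Thresholds and scan data are invented for the example.
def flag_for_inspection(regions, density_threshold=1.6, mass_threshold=400.0):
    """Return regions whose estimated density (g/cm^3) and mass (g) both exceed
    assumed thresholds, i.e., candidates a screener would need to resolve."""
    return [r for r in regions
            if r["density"] >= density_threshold and r["mass"] >= mass_threshold]

# Hypothetical segmented regions from one scanned bag.
bag_regions = [
    {"id": "A", "density": 0.9, "mass": 250.0},  # clothing-like: not flagged
    {"id": "B", "density": 1.7, "mass": 520.0},  # dense and heavy: flagged
    {"id": "C", "density": 2.3, "mass": 80.0},   # dense but small: not flagged
]

for region in flag_for_inspection(bag_regions):
    print("Alarm: resolve region", region["id"])

Operational algorithms work on full three-dimensional image data and use far more sophisticated decision rules; the point of the sketch is only that the alarm decision is automated rather than left entirely to a human screener.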
The following are GAO's comments on the Department of Homeland Security's letter dated August 31, 2004.
1. We agree with DHS that aviation security is currently the primary focus of TSA's R&D projects and that many aviation projects provide data that are useful for other transportation security programs. Because these topics were discussed in the draft report, we made no change.
2. DHS provided comments on three projects that members of our panel of transportation security experts suggested should be considered for future funding. We added this information to the report.
3. DHS said that the report should indicate that TSA has two advisory committees (the National Academy of Sciences and the Security Advisory Panel) that include experts from various modes of transportation. We added this information to the report.
4. DHS commented on a project that one of our panelists believed should not be funded because it could be purchased off the shelf (a $30,000 project to test a prototype of a new, handheld ion mobility spectrometry explosives trace detector). According to DHS, TSA funded this project because the vendor demonstrated a promising technology. We added this comment to our report.
5. We continue to believe that DHS's and TSA's R&D strategic plans should contain measurable objectives. Similarly, the National Academy of Sciences indicated that research programs should be described in strategic and performance plans. Therefore, we made no changes to the report in response to this comment.
6. DHS noted that TSA recently established a working group to update and improve the current Web site that addresses technology ideas, products, and services to make it more user-friendly. TSA plans to implement the improvements early next year. We added this information to the report.
In addition to the individuals named above, other key contributors to this report were Dennis Amari, Carol Anderson-Guthrie, Nancy Boardman, Gerald Dillingham, Elizabeth Eisenstadt, David Goldstein, Brandon Haller, Bob Homan, Dave Hooper, Andrew Huddleston, Michael Mgebroff, Claire van der Lee, and Don Watson.
Conducting research and development (R&D) on technologies for detecting, preventing, and mitigating terrorist threats is vital to enhancing the security of the nation's transportation system. Following the September 11, 2001, terrorist attacks, Congress enacted legislation to strengthen homeland security, in part by enhancing R&D. The Transportation Security Administration (TSA) and the Department of Homeland Security (DHS) are the two federal agencies with primary responsibility for transportation security. GAO was asked to review the transportation security R&D projects that TSA, DHS, and other agencies have funded and to assess how TSA and DHS are managing their transportation security R&D programs according to applicable laws and best practices.
For fiscal years 2003 and 2004, TSA and DHS funded over 200 R&D projects designed to develop technologies for enhancing security in most modes of transportation. In fiscal year 2003, TSA spent 81 percent of its $21 million transportation security R&D budget on aviation projects, and DHS spent about half of its $26 million on projects related to more than one mode of transportation. In fiscal year 2004, TSA continued to budget most of its $159 million for aviation, and DHS also budgeted most of its $88 million for aviation. According to the National Research Council, federal R&D programs should include some basic research, but TSA and DHS do not appear to be funding any basic research for transportation security. TSA and DHS have not estimated deployment dates for the vast majority of their R&D projects. Other federal agencies, such as the Department of Transportation (DOT) and the National Aeronautics and Space Administration, also funded some transportation security R&D projects. Several members of an expert panel on transportation security and technology that GAO convened believed the distribution of R&D projects by transportation mode was reasonable, while others believed that aviation had been overemphasized at the expense of maritime and land modes.
TSA and DHS have made some progress in managing their transportation security R&D programs according to applicable laws and R&D best practices, but neither agency has fully complied with the laws or implemented the best practices. For example, neither agency has prepared a strategic plan for R&D that contains measurable objectives. In addition, although TSA has completed threat assessments for all modes of transportation, it has not completed vulnerability and criticality assessments. DHS also has not completed risk assessments of the infrastructure sectors. Furthermore, both TSA and DHS lack complete, consolidated data for managing their R&D projects. Finally, although TSA and DHS have made some efforts to coordinate R&D with other federal agencies, their outreach to the transportation industry to consider its concerns has been limited.